| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Qwen3-235B Q6_K at 56t/s prefill 4.5t/s decode on Xeon 3175X (384GB DDR4-3400) and RTX 4090 with ktransformers
| 1 |
[removed]
| 2025-05-07T10:35:18 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgu3gx
| false | null |
t3_1kgu3gx
|
/r/LocalLLaMA/comments/1kgu3gx/qwen3235b_q6_k_at_56ts_prefill_45ts_decode_on/
| false | false |
default
| 1 | null |
||
Qwen3-235B Q6_K at 56t/s prefill 4.5t/s decode on Xeon 3175X (384GB DDR4-3400) and RTX 4090 with ktransformers
| 1 |
[removed]
| 2025-05-07T10:36:08 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgu3xi
| false | null |
t3_1kgu3xi
|
/r/LocalLLaMA/comments/1kgu3xi/qwen3235b_q6_k_at_56ts_prefill_45ts_decode_on/
| false | false |
default
| 1 | null |
||
Qwen3-235B Q6_K at 56t/s prefill 4.5t/s decode on Xeon 3175X (384GB DDR4-3400) and RTX 4090 with ktransformers
| 1 |
[removed]
| 2025-05-07T10:36:46 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgu49d
| false | null |
t3_1kgu49d
|
/r/LocalLLaMA/comments/1kgu49d/qwen3235b_q6_k_at_56ts_prefill_45ts_decode_on/
| false | false |
default
| 1 | null |
||
Qwen3-235B Q6_K ktransformers at 56t/s prefill 4.5t/s decode on Xeon 3175X (384GB DDR4-3400) and RTX 4090
| 83 | 2025-05-07T10:37:38 |
Arli_AI
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgu4qg
| false | null |
t3_1kgu4qg
|
/r/LocalLLaMA/comments/1kgu4qg/qwen3235b_q6_k_ktransformers_at_56ts_prefill_45ts/
| false | false | 83 |
{'enabled': True, 'images': [{'id': 'Yxf-PELR0VkYRwSG9rKvETEBq4sDInhtTEm1iGMOhio', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=108&crop=smart&auto=webp&s=7a15e7a9305205bb29ca3a942ab1d69641fd6547', 'width': 108}, {'height': 238, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=216&crop=smart&auto=webp&s=ffda3504032a213a105d5a50c643289539201da9', 'width': 216}, {'height': 353, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=320&crop=smart&auto=webp&s=2d5e24e1611efa51a3107f0cd657cfee7a7638ed', 'width': 320}, {'height': 706, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=640&crop=smart&auto=webp&s=42da044e5a349add8ff7c3733ec1c4e706676161', 'width': 640}, {'height': 1059, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=960&crop=smart&auto=webp&s=8e2dd22fcc8ac9c17e342c650e4e1fd7314825e3', 'width': 960}, {'height': 1191, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?width=1080&crop=smart&auto=webp&s=7f17fcfe9b4e86333e3a79f1bf4ed897ca9b62fa', 'width': 1080}], 'source': {'height': 1791, 'url': 'https://preview.redd.it/1ijx9ffv8cze1.png?auto=webp&s=0c2aa29dd73dd78dcd8e6525a1596aae94201ae6', 'width': 1623}, 'variants': {}}]}
|
|||
How can I ask model to use the data I gave?
| 1 |
[removed]
| 2025-05-07T10:37:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgu4qy/how_can_i_ask_model_to_use_the_data_i_gave/
|
Particular_Buy5429
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgu4qy
| false | null |
t3_1kgu4qy
|
/r/LocalLLaMA/comments/1kgu4qy/how_can_i_ask_model_to_use_the_data_i_gave/
| false | false |
self
| 1 | null |
Why does Reddit Answers answer this for Christians and Muslims but not for Jews?
| 1 |
[removed]
| 2025-05-07T10:50:13 |
https://www.reddit.com/gallery/1kgubou
|
itszadder
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgubou
| false | null |
t3_1kgubou
|
/r/LocalLLaMA/comments/1kgubou/why_does_reddit_answers_answer_this_for/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=108&crop=smart&auto=webp&s=afbfd0d59bc6f2172cde443902701cd9c23c3586', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=216&crop=smart&auto=webp&s=a12746fb2a3c02890419ba7663d858ebc3d6c30b', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=320&crop=smart&auto=webp&s=e39eba9d9f68a7f496a98b34534a6271c288450b', 'width': 320}, {'height': 605, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=640&crop=smart&auto=webp&s=ed77e40fe90b27762ebd984b86d7eb4af23a98f9', 'width': 640}, {'height': 908, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=960&crop=smart&auto=webp&s=ee49a68d42514aef88b594ba06e6d04d3c238960', 'width': 960}, {'height': 1022, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?width=1080&crop=smart&auto=webp&s=08043254f0dbfc9638d82433ced3b11e15df4434', 'width': 1080}], 'source': {'height': 1244, 'url': 'https://external-preview.redd.it/4V6HORokDyVLJ34kUB_WE4EGDfWlx9Zhl-b0SGS-dSk.png?auto=webp&s=ce116e5c26b2cfa4b3b84b4784284d8d89a14d79', 'width': 1314}, 'variants': {}}]}
|
|
Looking for a software that lets me mask an api key and hosts a open ai compatible api.
| 6 |
Hey, I am a researcher at a university. We have OpenAI and Mistral API keys, but we are of course not allowed to hand them out to students. However, it would be really useful to give them some access. Before I try writing my own OpenAI-compatible API, I wanted to ask: is there a project like this?
One where I can host an API with the backend being my own API key, and create accounts and proxy API keys that students can use?
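There are projects that do this (LiteLLM's proxy server calls them "virtual keys"). For illustration, a minimal stdlib-only sketch of the core idea, mapping per-student proxy keys to a single upstream key; all names, keys, and the `resolve_key`/`forward` helpers are hypothetical:

```python
import json
from urllib import request

# Hypothetical per-student keys; the real upstream key never leaves the server.
STUDENT_KEYS = {"sk-student-alice": True, "sk-student-bob": True}
UPSTREAM_KEY = "sk-real-upstream-key"  # kept server-side only
UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"

def resolve_key(auth_header):
    """Map an incoming 'Authorization: Bearer <student-key>' header to the
    real upstream key, or None if the student key is unknown or revoked."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return None
    student_key = auth_header[len("Bearer "):]
    return UPSTREAM_KEY if STUDENT_KEYS.get(student_key) else None

def forward(body: bytes, auth_header: str):
    """Swap the auth header for the real key and forward the request upstream."""
    real_key = resolve_key(auth_header)
    if real_key is None:
        return 401, b'{"error": "invalid proxy key"}'
    req = request.Request(UPSTREAM_URL, data=body, headers={
        "Authorization": f"Bearer {real_key}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return resp.status, resp.read()
```

A real deployment also needs per-key usage accounting and rate limits, which is exactly what the off-the-shelf proxies provide.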
| 2025-05-07T11:01:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kguiy2/looking_for_a_software_that_lets_me_mask_an_api/
|
Noxusequal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kguiy2
| false | null |
t3_1kguiy2
|
/r/LocalLLaMA/comments/1kguiy2/looking_for_a_software_that_lets_me_mask_an_api/
| false | false |
self
| 6 | null |
I passed a Japanese corporate certification using a local LLM I built myself
| 1 |
[removed]
| 2025-05-07T11:02:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kguj4v/i_passed_a_japanese_corporate_certification_using/
|
IntelligentHope9866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kguj4v
| false | null |
t3_1kguj4v
|
/r/LocalLLaMA/comments/1kguj4v/i_passed_a_japanese_corporate_certification_using/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '0YHynagu4xibwFLFXVvM0PtEZ6tvsWJUSiZr8gJo3IQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=108&crop=smart&auto=webp&s=70fc7adfbae217f94d29d6553e89d9b263841fd4', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=216&crop=smart&auto=webp&s=62e1d5d4ec882244c6a60ff84cd9b8b80637463d', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=320&crop=smart&auto=webp&s=99ab8c0407aabf1680bf11240e021a75be22b9a9', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=640&crop=smart&auto=webp&s=113b4f6d97a071018d426059188b24d4d0078a31', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=960&crop=smart&auto=webp&s=a130243289290e0e118de0df2575f64fd4ee7c79', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?width=1080&crop=smart&auto=webp&s=f3856329d1752ea17b5aee7475dcae8a3c9b4f7b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/9bU2sG6ZV0ZpTPiFjqOPLRU_x6dc9gZU1JDvCz5dpKE.jpg?auto=webp&s=d758c2c12566b32b80f7a23fed925056ae0609eb', 'width': 1536}, 'variants': {}}]}
|
Gemini 2.5 pro I/O edition
| 1 |
https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance/
| 2025-05-07T11:06:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgulxn/gemini_25_pro_io_edition/
|
EasternBeyond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgulxn
| false | null |
t3_1kgulxn
|
/r/LocalLLaMA/comments/1kgulxn/gemini_25_pro_io_edition/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=108&crop=smart&auto=webp&s=e7d67703fe097e7adf16d0681e17a1f346830c0c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=216&crop=smart&auto=webp&s=1840b1c4b27717e5bb8ea6be419303d347848596', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=320&crop=smart&auto=webp&s=9144ef12f2f2eccabb6fdc12497b1aa05436f99d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=640&crop=smart&auto=webp&s=91c0dc6476bef909bcba0ae03f0598173d0ae555', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=960&crop=smart&auto=webp&s=fe8a3eb96589345ea2b3e5b508d164ee32d8256e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?width=1080&crop=smart&auto=webp&s=1291e5e31d4b00a07d14b2ee9bff91ddb19bf4c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/92zHYOgIvoMYgwUZ8_4V8QdjqCDMgwmaSHilgAE0aMI.jpeg?auto=webp&s=1cfd77e4c9045ac377a2507f096ed236f6068ede', 'width': 1200}, 'variants': {}}]}
|
Which version of LLaMa 4 does www.meta.ai use?
| 1 |
[removed]
| 2025-05-07T11:08:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgun6p/which_version_of_llama_4_does_wwwmetaai_use/
|
beachpandaa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgun6p
| false | null |
t3_1kgun6p
|
/r/LocalLLaMA/comments/1kgun6p/which_version_of_llama_4_does_wwwmetaai_use/
| false | false |
self
| 1 | null |
Gemini 2.5 Pro 05-06 (IO Edition)
| 1 | 2025-05-07T11:10:10 |
EasternBeyond
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kguo4z
| false | null |
t3_1kguo4z
|
/r/LocalLLaMA/comments/1kguo4z/gemini_25_pro_0506_io_edition/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'LaVjMHHHWv_mYiKGa37UY9JZzwg3H2gC00F49FbL5qQ', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?width=108&crop=smart&auto=webp&s=ee3bb020a61476c67e270e3d06a497fb12f466aa', 'width': 108}, {'height': 368, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?width=216&crop=smart&auto=webp&s=05959226382c473fa5b349501ffb6af9f75f7e15', 'width': 216}, {'height': 546, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?width=320&crop=smart&auto=webp&s=b9643fe84c95acd93c7ef05d24732b55ca780cc4', 'width': 320}, {'height': 1093, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?width=640&crop=smart&auto=webp&s=d0c75d20802cf21d425f8aec542c10c30d991d15', 'width': 640}, {'height': 1639, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?width=960&crop=smart&auto=webp&s=0e975be29c2ee21f83eac50526507a0633bd77ee', 'width': 960}], 'source': {'height': 1826, 'url': 'https://preview.redd.it/qc3iidfqecze1.jpeg?auto=webp&s=c4f21083eec9e5044a81e9d0afa4244e73ff8dbf', 'width': 1069}, 'variants': {}}]}
|
|||
Gemini 2.5 Pro 05-06 (IO Edition)
| 19 | 2025-05-07T11:12:09 |
https://www.reddit.com/gallery/1kgupcy
|
EasternBeyond
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgupcy
| false | null |
t3_1kgupcy
|
/r/LocalLLaMA/comments/1kgupcy/gemini_25_pro_0506_io_edition/
| false | false | 19 |
{'enabled': True, 'images': [{'id': 'AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?width=108&crop=smart&auto=webp&s=f38c93aeddb3306b3afae0609c246788d55f28f9', 'width': 108}, {'height': 214, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?width=216&crop=smart&auto=webp&s=a30b779309f2ada30826cdfa98c1f2312118848f', 'width': 216}, {'height': 317, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?width=320&crop=smart&auto=webp&s=60c7a404937404c56a0cf1724eae8181a38395b2', 'width': 320}, {'height': 635, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?width=640&crop=smart&auto=webp&s=b24b4187cb1c776988e7af64e34ada19e1c9936b', 'width': 640}, {'height': 953, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?width=960&crop=smart&auto=webp&s=3281f4cd4f944eabec8267607aca302d48ec3d11', 'width': 960}], 'source': {'height': 973, 'url': 'https://external-preview.redd.it/AcvY2iQydAkuLrR9B5tut99az-M6OzZYxQpuRtv_6NM.jpeg?auto=webp&s=ee84178bb0ad8292c0c3a4c0f1a8895d067aa28e', 'width': 980}, 'variants': {}}]}
|
||
Apriel-Nemotron-15b-Thinker - o1mini level with MIT licence (Nvidia & Servicenow)
| 209 |
ServiceNow and Nvidia bring a new 15B thinking model with performance comparable to 32B models.
Model: [https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker](https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker) (MIT licence)
It looks very promising (summarized by Gemini):
* **Efficiency:** Claimed to be half the size of some SOTA models (like QWQ-32b, EXAONE-32b) and consumes significantly fewer tokens (\~40% less than QWQ-32b) for comparable tasks, directly impacting VRAM requirements and inference costs for local or self-hosted setups.
* **Reasoning/Enterprise:** Reports strong performance on benchmarks like MBPP, BFCL, Enterprise RAG, IFEval, and Multi-Challenge. The focus on Enterprise RAG is notable for business-specific applications.
* **Coding:** Competitive results on coding tasks like MBPP and HumanEval, important for development workflows.
* **Academic:** Holds competitive scores on academic reasoning benchmarks (AIME, AMC, MATH, GPQA) relative to its parameter count.
* **Multilingual:** We need to test it
| 2025-05-07T11:14:13 |
https://www.reddit.com/gallery/1kguqmd
|
Temporary-Size7310
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kguqmd
| false | null |
t3_1kguqmd
|
/r/LocalLLaMA/comments/1kguqmd/aprielnemotron15bthinker_o1mini_level_with_mit/
| false | false | 209 |
{'enabled': True, 'images': [{'id': 'EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=108&crop=smart&auto=webp&s=46a122e9b176da01f4d5505d7f5b74d4752b9628', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=216&crop=smart&auto=webp&s=75c3b3151c8ded6552973e212b3ecdb59cf4d7bb', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=320&crop=smart&auto=webp&s=3dc49f3304e32dbc797c0cf98994cbd4b77c12bb', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=640&crop=smart&auto=webp&s=0d4cc697d8897122cd61888ad7f02b892afd49c4', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=960&crop=smart&auto=webp&s=a95e3296f4641cf34e35fc003e54013fbb039568', 'width': 960}, {'height': 573, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?width=1080&crop=smart&auto=webp&s=f3fdcaf8f2b6d874bf2755c34b3e122442fc5883', 'width': 1080}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/EuRAXuyDLNOAu2-1ktV_-X31N5aAiqTZTPHiWEhPj-E.png?auto=webp&s=984595f4fda3e3c6d84872f172afe04fe970f7f4', 'width': 1280}, 'variants': {}}]}
|
|
Doing my thesis work on AI security and Trust. Help out if you can
| 1 |
[removed]
| 2025-05-07T11:33:35 |
https://docs.google.com/forms/d/e/1FAIpQLSdNKSnEFwSpteBePwokejm6zpYJ1IwZhL2vzQDhUaffT091yw/viewform?usp=header
|
Big_Teaching4054
|
docs.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgv2to
| false | null |
t3_1kgv2to
|
/r/LocalLLaMA/comments/1kgv2to/doing_my_thesis_work_on_ai_security_and_trust/
| false | false |
default
| 1 | null |
Minimum system requirements
| 1 |
I've been reading a lot about running a local LLM, but I haven't installed anything yet to mess with it. There is a lot of info available on the topic, but very little of it is geared toward noobs. I have the ultimate goal of building an AI box that I can integrate into my Home Assistant setup and replace Google and Alexa for my smart home and AI needs (which are basic search questions and some minor generative requests). How much VRAM would I need for such a system to run decently and make a passable substitute for basic voice recognition and a good interactive experience? Is the speed of the CPU and system RAM important, or are most of the demanding query parameters passed onto the GPUs?
Basically, what CPU generation would be the minimum requirement for such a system? How much system RAM is needed? How much VRAM? I'm looking at Intel Arc GPUs; will I have limitations on that architecture? Is mixing GPU brands problematic, or is it pretty straightforward? I don't want to start buying parts to mess around with, only to find them unusable in my final build later on. I want to get parts that I can start with now and just add more GPUs to later.
TIA
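Not from the post, but a common rule of thumb for sizing: model weights take roughly params × (quant bits ÷ 8) bytes, plus headroom for the KV cache and activations. A hypothetical back-of-the-envelope helper:

```python
def vram_gb_estimate(params_billions: float, quant_bits: int, overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weight bytes plus ~20% headroom for
    KV cache and activations. Real usage grows with context length."""
    weights_gb = params_billions * quant_bits / 8  # 1B params at 8 bits ≈ 1 GB
    return weights_gb * overhead

# e.g. an 8B model at Q4 needs on the order of 5 GB:
print(round(vram_gb_estimate(8, 4), 1))  # → 4.8
```

This is only a starting point; long contexts and larger batch sizes can push the KV cache well past the 20% allowance assumed here.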
| 2025-05-07T12:55:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgwnbk/minimum_system_requirements/
|
Universal_Cognition
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgwnbk
| false | null |
t3_1kgwnbk
|
/r/LocalLLaMA/comments/1kgwnbk/minimum_system_requirements/
| false | false |
self
| 1 | null |
Qwen3 thinking toggle could probably have other use cases.
| 13 |
Hey all, just wanted to share a quick experiment I ran with Qwen3 that led to an interesting discovery. So, I fine-tuned the two different modes of Qwen3 on completely separate sets of data. I know it sounds simple, but it worked. The models acted differently depending on which mode was active.
At first, I thought it was a dumb idea, since LLMs use one set of weights, but the results were pretty surprising. Given that Qwen3 has this toggle-mode feature, it looks like there's potential for some cool new use cases. Could it be useful for tasks where two contrasting types of reasoning are needed, without having to switch models entirely? It's like having two experts within one model.
Anyway, it's not groundbreaking, but it was fun experimenting with it. Curious if anyone has tried something like this or seen any similar results. Would love to hear your thoughts!
noumenon-labs/Eqwenox-0.6B
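One lightweight way to exploit this, assuming Qwen3's documented soft-switch syntax (appending `/think` or `/no_think` to a user turn): route each prompt to whichever "expert" fits, without reloading anything. The `qwen3_mode` and `route` helpers below are hypothetical sketches:

```python
def qwen3_mode(user_msg: str, thinking: bool) -> str:
    """Append Qwen3's soft switch to a user turn to toggle reasoning
    for that turn only (hypothetical helper)."""
    return user_msg + (" /think" if thinking else " /no_think")

def route(user_msg: str) -> str:
    # Crude heuristic: send reasoning-heavy prompts to the thinking "expert"
    needs_reasoning = any(w in user_msg.lower() for w in ("prove", "solve", "derive"))
    return qwen3_mode(user_msg, needs_reasoning)
```

With a fine-tune like the one described, the same routing decides which of the two trained behaviors fires.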
| 2025-05-07T13:07:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgwwdr/qwen3_thinking_toggle_could_probably_have_other/
|
AccomplishedAir769
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgwwdr
| false | null |
t3_1kgwwdr
|
/r/LocalLLaMA/comments/1kgwwdr/qwen3_thinking_toggle_could_probably_have_other/
| false | false |
self
| 13 | null |
Faster open webui title generation for Qwen3 models
| 16 |
If you use Qwen3 in Open WebUI, by default, WebUI will use Qwen3 for title generation with reasoning turned on, which is really unnecessary for this simple task.
Simply adding "/no_think" to the end of the title generation prompt can fix the problem.
Even though they ***"hide"*** the title generation prompt for some reason, you can search their GitHub to find all of their default prompts. Here is the title generation one with "/no_think" added to the end of it:
### Task:
Generate a concise, 3-5 word title with an emoji summarizing the chat history.
### Guidelines:
- The title should clearly represent the main theme or subject of the conversation.
- Use emojis that enhance understanding of the topic, but avoid quotation marks or special formatting.
- Write the title in the chat's primary language; default to English if multilingual.
- Prioritize accuracy over excessive creativity; keep it clear and simple.
### Output:
JSON format: { "title": "your concise title here" }
### Examples:
- { "title": "📉 Stock Market Trends" },
- { "title": "🍪 Perfect Chocolate Chip Recipe" },
- { "title": "Evolution of Music Streaming" },
- { "title": "Remote Work Productivity Tips" },
- { "title": "Artificial Intelligence in Healthcare" },
- { "title": "🎮 Video Game Development Insights" }
### Chat History:
<chat_history>
{{MESSAGES:END:2}}
</chat_history>
/no_think
And here is a faster one with chat history limited to 2k tokens to improve title generation speed:
### Task:
Generate a concise, 3-5 word title with an emoji summarizing the chat history.
### Guidelines:
- The title should clearly represent the main theme or subject of the conversation.
- Use emojis that enhance understanding of the topic, but avoid quotation marks or special formatting.
- Write the title in the chat's primary language; default to English if multilingual.
- Prioritize accuracy over excessive creativity; keep it clear and simple.
### Output:
JSON format: { "title": "your concise title here" }
### Examples:
- { "title": "📉 Stock Market Trends" },
- { "title": "🍪 Perfect Chocolate Chip Recipe" },
- { "title": "Evolution of Music Streaming" },
- { "title": "Remote Work Productivity Tips" },
- { "title": "Artificial Intelligence in Healthcare" },
- { "title": "🎮 Video Game Development Insights" }
### Chat History:
<chat_history>
{{prompt:start:1000}}
{{prompt:end:1000}}
</chat_history>
/no_think
| 2025-05-07T13:08:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgwxeo/faster_open_webui_title_generation_for_qwen3/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgwxeo
| false | null |
t3_1kgwxeo
|
/r/LocalLLaMA/comments/1kgwxeo/faster_open_webui_title_generation_for_qwen3/
| false | false |
self
| 16 | null |
New guardrail benchmark
| 0 |
- Tests guard models on 17 categories of harmful content
- Includes actual jailbreaks, not toy examples
- Uses 3 top LLMs (Claude 3.5, Gemini 2, o3) to verify whether outputs are actually harmful
- Penalizes slow models, because safety shouldn't mean waiting 12 seconds for "I'm sorry, but I can't help with that"
Check here [https://huggingface.co/blog/whitecircle-ai/circleguardbench](https://huggingface.co/blog/whitecircle-ai/circleguardbench)
| 2025-05-07T13:10:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgwyum/new_guardrail_benchmark/
|
Mysterious_Hearing14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgwyum
| false | null |
t3_1kgwyum
|
/r/LocalLLaMA/comments/1kgwyum/new_guardrail_benchmark/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'oC_FVRINpGSiEvT5xVuK6bXrIXTYyLK1e-YYeWsfn7s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=108&crop=smart&auto=webp&s=c969c93155165f76ff4e7193aab254e164e27c29', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=216&crop=smart&auto=webp&s=630b0a75f766d6e2089651c9bcdc600aab655d88', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=320&crop=smart&auto=webp&s=02c5a978d8d800001dbb3516981c861a6e5c9cbd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=640&crop=smart&auto=webp&s=7fc7ec689df95e86f07c45ed76f14fa5a3643fa4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=960&crop=smart&auto=webp&s=3c518ce3cd2395ea2ed0198621e323f1184a00a4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?width=1080&crop=smart&auto=webp&s=4c1ba45a3163782e061e25562ab0b7b3d15edc34', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Urad23-5e6NlFqn0MQAbgmO1RZ6-fT8nzWM47KQNO-Q.jpg?auto=webp&s=1dd3cf9cd463cb9ea630ef08396aae49750f7cbb', 'width': 1200}, 'variants': {}}]}
|
What’s the minimal text chunk size for natural-sounding TTS, and how can I minimize TTFB in a streaming pipeline?
| 1 |
[removed]
| 2025-05-07T13:25:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxaoc/whats_the_minimal_text_chunk_size_for/
|
jetsonjetearth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxaoc
| false | null |
t3_1kgxaoc
|
/r/LocalLLaMA/comments/1kgxaoc/whats_the_minimal_text_chunk_size_for/
| false | false |
self
| 1 | null |
Ollama vs Llama.cpp on 2x3090 and M3Max using qwen3-30b
| 44 |
Hi Everyone.
This is a comparison test between Ollama and Llama.cpp on 2 x RTX 3090 GPUs and an M3 Max with 64GB, using qwen3:30b-a3b-q8_0.
### Metrics
To ensure consistency, I used a custom Python script that sends requests to the server via the OpenAI-compatible API. Metrics were calculated as follows:
* Time to First Token (TTFT): Measured from the start of the streaming request to the first streaming event received.
* Prompt Processing Speed (PP): Number of prompt tokens divided by TTFT.
* Token Generation Speed (TG): Number of generated tokens divided by (total duration - TTFT).
The displayed results were truncated to two decimal places, but the calculations used full precision. The script prepends 40% new material to the beginning of each successively longer prompt to avoid caching effects.
Here's my script for anyone interested: https://github.com/chigkim/prompt-test
It uses the OpenAI API, so it should work in a variety of setups. Also, this tests one request at a time, so multiple parallel requests could achieve higher throughput in different tests.
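The three definitions above reduce to a few lines. A sketch of the calculation (names hypothetical; the author's full script is linked above):

```python
def stream_metrics(t_start, t_first_event, t_end, prompt_tokens, generated_tokens):
    """TTFT, PP, and TG exactly as defined above (times in seconds)."""
    ttft = t_first_event - t_start                       # Time to First Token
    pp = prompt_tokens / ttft                            # Prompt Processing speed
    tg = generated_tokens / ((t_end - t_start) - ttft)   # Token Generation speed
    return ttft, pp, tg

# e.g. a 702-token prompt whose first streaming event arrives at 0.42 s,
# finishing 1419 generated tokens at 17.69 s
ttft, pp, tg = stream_metrics(0.0, 0.42, 17.69, 702, 1419)
```

Note that PP here folds any fixed server latency into prompt processing, which slightly understates the true prefill speed on short prompts.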
### Setup
Both use the same q8_0 model from the Ollama library with flash attention. I'm sure you can optimize more, but I copied the flags from the Ollama log, so both use exactly the same flags to load the model, in order to keep things consistent.
`./build/bin/llama-server --model ~/.ollama/models/blobs/sha256... --ctx-size 36000 --batch-size 512 --n-gpu-layers 49 --verbose --threads 24 --flash-attn --parallel 1 --tensor-split 25,24 --port 11434`
* Llama.cpp: Commit 2f54e34
* Ollama: 0.6.8
Each row in the results represents a test (a specific combination of machine, engine, and prompt length). There are 4 tests per prompt length.
* Setup 1: 2xRTX3090, Llama.cpp
* Setup 2: 2xRTX3090, Ollama
* Setup 3: M3Max, Llama.cpp
* Setup 4: M3Max, Ollama
| Machine | Engine | Prompt Tokens | PP/s | TTFT | Generated Tokens | TG/s | Duration |
| ------- | ------ | ------------- | ---- | ---- | ---------------- | ---- | -------- |
| RTX3090 | LCPP | 702 | 1663.57 | 0.42 | 1419 | 82.19 | 17.69 |
| RTX3090 | Ollama | 702 | 1595.04 | 0.44 | 1430 | 77.41 | 18.91 |
| M3Max | LCPP | 702 | 289.53 | 2.42 | 1485 | 55.60 | 29.13 |
| M3Max | Ollama | 702 | 288.32 | 2.43 | 1440 | 55.78 | 28.25 |
| RTX3090 | LCPP | 959 | 1768.00 | 0.54 | 1210 | 81.47 | 15.39 |
| RTX3090 | Ollama | 959 | 1723.07 | 0.56 | 1279 | 74.82 | 17.65 |
| M3Max | LCPP | 959 | 458.40 | 2.09 | 1337 | 55.28 | 26.28 |
| M3Max | Ollama | 959 | 459.38 | 2.09 | 1302 | 55.44 | 25.57 |
| RTX3090 | LCPP | 1306 | 1752.04 | 0.75 | 1108 | 80.95 | 14.43 |
| RTX3090 | Ollama | 1306 | 1725.06 | 0.76 | 1209 | 73.83 | 17.13 |
| M3Max | LCPP | 1306 | 455.39 | 2.87 | 1213 | 54.84 | 24.99 |
| M3Max | Ollama | 1306 | 458.06 | 2.85 | 1213 | 54.96 | 24.92 |
| RTX3090 | LCPP | 1774 | 1763.32 | 1.01 | 1330 | 80.44 | 17.54 |
| RTX3090 | Ollama | 1774 | 1823.88 | 0.97 | 1370 | 78.26 | 18.48 |
| M3Max | LCPP | 1774 | 320.44 | 5.54 | 1281 | 54.10 | 29.21 |
| M3Max | Ollama | 1774 | 321.45 | 5.52 | 1281 | 54.26 | 29.13 |
| RTX3090 | LCPP | 2584 | 1776.17 | 1.45 | 1522 | 79.39 | 20.63 |
| RTX3090 | Ollama | 2584 | 1851.35 | 1.40 | 1118 | 75.08 | 16.29 |
| M3Max | LCPP | 2584 | 445.47 | 5.80 | 1321 | 52.86 | 30.79 |
| M3Max | Ollama | 2584 | 447.47 | 5.77 | 1359 | 53.00 | 31.42 |
| RTX3090 | LCPP | 3557 | 1832.97 | 1.94 | 1500 | 77.61 | 21.27 |
| RTX3090 | Ollama | 3557 | 1928.76 | 1.84 | 1653 | 70.17 | 25.40 |
| M3Max | LCPP | 3557 | 444.32 | 8.01 | 1481 | 51.34 | 36.85 |
| M3Max | Ollama | 3557 | 442.89 | 8.03 | 1430 | 51.52 | 35.79 |
| RTX3090 | LCPP | 4739 | 1773.28 | 2.67 | 1279 | 76.60 | 19.37 |
| RTX3090 | Ollama | 4739 | 1910.52 | 2.48 | 1877 | 71.85 | 28.60 |
| M3Max | LCPP | 4739 | 421.06 | 11.26 | 1472 | 49.97 | 40.71 |
| M3Max | Ollama | 4739 | 420.51 | 11.27 | 1316 | 50.16 | 37.50 |
| RTX3090 | LCPP | 6520 | 1760.68 | 3.70 | 1435 | 73.77 | 23.15 |
| RTX3090 | Ollama | 6520 | 1897.12 | 3.44 | 1781 | 68.85 | 29.30 |
| M3Max | LCPP | 6520 | 418.03 | 15.60 | 1998 | 47.56 | 57.61 |
| M3Max | Ollama | 6520 | 417.70 | 15.61 | 2000 | 47.81 | 57.44 |
| RTX3090 | LCPP | 9101 | 1714.65 | 5.31 | 1528 | 70.17 | 27.08 |
| RTX3090 | Ollama | 9101 | 1881.13 | 4.84 | 1801 | 68.09 | 31.29 |
| M3Max | LCPP | 9101 | 250.25 | 36.37 | 1941 | 36.29 | 89.86 |
| M3Max | Ollama | 9101 | 244.02 | 37.30 | 1941 | 35.55 | 91.89 |
| RTX3090 | LCPP | 12430 | 1591.33 | 7.81 | 1001 | 66.74 | 22.81 |
| RTX3090 | Ollama | 12430 | 1805.88 | 6.88 | 1284 | 64.01 | 26.94 |
| M3Max | LCPP | 12430 | 280.46 | 44.32 | 1291 | 39.89 | 76.69 |
| M3Max | Ollama | 12430 | 278.79 | 44.58 | 1502 | 39.82 | 82.30 |
| RTX3090 | LCPP | 17078 | 1546.35 | 11.04 | 1028 | 63.55 | 27.22 |
| RTX3090 | Ollama | 17078 | 1722.15 | 9.92 | 1100 | 59.36 | 28.45 |
| M3Max | LCPP | 17078 | 270.38 | 63.16 | 1461 | 34.89 | 105.03 |
| M3Max | Ollama | 17078 | 270.49 | 63.14 | 1673 | 34.28 | 111.94 |
| RTX3090 | LCPP | 23658 | 1429.31 | 16.55 | 1039 | 58.46 | 34.32 |
| RTX3090 | Ollama | 23658 | 1586.04 | 14.92 | 1041 | 53.90 | 34.23 |
| M3Max | LCPP | 23658 | 241.20 | 98.09 | 1681 | 28.04 | 158.03 |
| M3Max | Ollama | 23658 | 240.64 | 98.31 | 2000 | 27.70 | 170.51 |
| RTX3090 | LCPP | 33525 | 1293.65 | 25.91 | 1311 | 52.92 | 50.69 |
| RTX3090 | Ollama | 33525 | 1441.12 | 23.26 | 1418 | 49.76 | 51.76 |
| M3Max | LCPP | 33525 | 217.15 | 154.38 | 1453 | 23.91 | 215.14 |
| M3Max | Ollama | 33525 | 219.68 | 152.61 | 1522 | 23.84 | 216.44 |
| 2025-05-07T13:33:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxhdt/ollama_vs_llamacpp_on_2x3090_and_m3max_using/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxhdt
| false | null |
t3_1kgxhdt
|
/r/LocalLLaMA/comments/1kgxhdt/ollama_vs_llamacpp_on_2x3090_and_m3max_using/
| false | false |
self
| 44 |
{'enabled': False, 'images': [{'id': 'Pg1SMki0VM1wl_M-tp7-lxTSraP6Ft7SF_TpxKyKAbI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=108&crop=smart&auto=webp&s=67bf981019eceb3c368ab7e99da0bd5cf9c3cfad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=216&crop=smart&auto=webp&s=7afb3fd23688378130c52ec4fed610ec67178996', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=320&crop=smart&auto=webp&s=6ac37e8e2a4a3046bfdbe11d3b5c28c133fa2ee2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=640&crop=smart&auto=webp&s=9d0eaf0680e6ba8366c95c578e1c907dcb6e0b13', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=960&crop=smart&auto=webp&s=7b39f1555f13bfe02ebe5080a8a57b7a39f8740c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?width=1080&crop=smart&auto=webp&s=a0e5b52383c576acc57cd6a0b01f06d8468691d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jEpnH-QC9dEsvA5zb-H_fvzkYVqQSq1gag95zzVvzPA.jpg?auto=webp&s=9c82b934f99be784891254b785ecb553bb9355a1', 'width': 1200}, 'variants': {}}]}
|
LLM for hacking
| 1 |
[removed]
| 2025-05-07T13:39:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxm9o/llm_for_hacking/
|
AdMajestic9148
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxm9o
| false | null |
t3_1kgxm9o
|
/r/LocalLLaMA/comments/1kgxm9o/llm_for_hacking/
| false | false |
self
| 1 | null |
Open source models playing WikiRace
| 1 |
[removed]
| 2025-05-07T13:41:09 |
https://v.redd.it/7atnmljc5dze1
|
PuppeteerWizard
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxnbb
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7atnmljc5dze1/DASHPlaylist.mpd?a=1749217285%2COGMxMzE5ODc3MWUzZjYyN2Q2N2FiMGI2NGM5OGY2YTBhNjI4MjhkOTFjOTUzZWMxMGYzNWQ1Y2QwMmE1MDVjMw%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/7atnmljc5dze1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/7atnmljc5dze1/HLSPlaylist.m3u8?a=1749217285%2CNzZmMzQyZWRkMjEzMTY5YmUzY2ZlMTA5ZGZhZTVlZTRmZDhiYTQyMDVhZDJiZmFiNDA5NzU5MjRmNTYzNDUwYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7atnmljc5dze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1278}}
|
t3_1kgxnbb
|
/r/LocalLLaMA/comments/1kgxnbb/open_source_models_playing_wikirace/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=108&crop=smart&format=pjpg&auto=webp&s=95b1c5f1c39e21d3e75a500aa50dde1fd2a4fbbf', 'width': 108}, {'height': 182, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=216&crop=smart&format=pjpg&auto=webp&s=c5293e2905df888f01a3b879caca6ce97fe11176', 'width': 216}, {'height': 270, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=320&crop=smart&format=pjpg&auto=webp&s=a589112439f29bd351172a8bf3aef95fa83ebde6', 'width': 320}, {'height': 540, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=640&crop=smart&format=pjpg&auto=webp&s=15ed5b737208d3a45ffdfe3ae47c552ffb0b80d2', 'width': 640}, {'height': 811, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=960&crop=smart&format=pjpg&auto=webp&s=faba45a6a4def98aaab6c10c8fbc0e6a3ff928ac', 'width': 960}, {'height': 912, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?width=1080&crop=smart&format=pjpg&auto=webp&s=66d2ef9e6fd9f62942f6bcba47527e612ebbb59e', 'width': 1080}], 'source': {'height': 2304, 'url': 'https://external-preview.redd.it/cjZwdnVoamM1ZHplMRXeg6MsawVG1AW18tmu7Cf0RykZ4bJTXsi28vu_aU1d.png?format=pjpg&auto=webp&s=8cd848f6e6b4a6900a838ac152646b995f56759a', 'width': 2726}, 'variants': {}}]}
|
|
What's the best model for image captioning right now?
| 2 |
InternVL3 is pretty good on average but still hallucinates way too much on my use case. Finetuning could be an option in theory, but I have millions of images, so finding the ones it performs worst on, building a manual caption dataset, and then finetuning while hoping the model actually improves without overfitting or catastrophically forgetting is going to be a *major* pain. Have there been any other models since?
| 2025-05-07T13:44:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxpww/whats_the_best_model_for_image_captioning_right/
|
BITE_AU_CHOCOLAT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxpww
| false | null |
t3_1kgxpww
|
/r/LocalLLaMA/comments/1kgxpww/whats_the_best_model_for_image_captioning_right/
| false | false |
self
| 2 | null |
Bought RTX 3090, need emotional support
| 1 |
[removed]
| 2025-05-07T13:51:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxvip/bought_rtx_3090_need_emotional_support/
|
HandsOnDyk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxvip
| false | null |
t3_1kgxvip
|
/r/LocalLLaMA/comments/1kgxvip/bought_rtx_3090_need_emotional_support/
| false | false |
self
| 1 | null |
2x RTX 3060 vs 1x RTX 5060 Ti — Need Advice!
| 5 |
I’m planning a GPU upgrade and could really use some advice. I’m considering either:
* **2x RTX 3060 (12GB VRAM each)** or
* **1x RTX 5060 Ti** **(16 VRAM)**
My current motherboard is a **Micro-ATX MSI B550M PRO-VDH**, and I’m wondering a few things:
1. **How hard is it to run a 2x GPU setup** in general? For AI workloads.
2. Will my motherboard even support both GPUs functionally (**Micro-ATX MSI B550M PRO-VDH**)?
3. From a performance and compatibility perspective, **which setup would you recommend**?
I’m mainly using the system for AI/deep learning experiments and light gaming.
Any insights or personal experiences would be really appreciated. Thanks in advance!
| 2025-05-07T13:56:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgxzsz/2x_rtx_3060_vs_1x_rtx_5060_ti_need_advice/
|
mr_house7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgxzsz
| false | null |
t3_1kgxzsz
|
/r/LocalLLaMA/comments/1kgxzsz/2x_rtx_3060_vs_1x_rtx_5060_ti_need_advice/
| false | false |
self
| 5 | null |
Mistral Medium 3 released on Le Chat
| 2 | 2025-05-07T14:13:19 |
https://mistral.ai/news/mistral-medium-3
|
Thomas-Lore
|
mistral.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgydtf
| false | null |
t3_1kgydtf
|
/r/LocalLLaMA/comments/1kgydtf/mistral_medium_3_released_on_le_chat/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]}
|
||
Mistral Medium 3 released
| 0 | 2025-05-07T14:14:11 |
https://mistral.ai/news/mistral-medium-3
|
Thomas-Lore
|
mistral.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgyejy
| false | null |
t3_1kgyejy
|
/r/LocalLLaMA/comments/1kgyejy/mistral_medium_3_released/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]}
|
||
Introducing Mistral Medium 3
| 0 |
[Medium is the new large. | Mistral AI](https://mistral.ai/news/mistral-medium-3)
| 2025-05-07T14:15:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgyfif/introducing_mistral_medium_3/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgyfif
| false | null |
t3_1kgyfif
|
/r/LocalLLaMA/comments/1kgyfif/introducing_mistral_medium_3/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
|
🌿🛤️ \[Release] MechanismPointsLLM & MechanismFlowLLM — Experiments in Leveraging the Flow of Language
| 0 |
Greetings, fellow travelers,
I come bearing two experimental architectures:
**MechanismPointsLLM** and **MechanismFlowLLM** —
two language models shaped by the spirit of **Mechanism Points** and the **Five Elements**.
They are not polished tools, but seeds scattered on the wind.
I have not yet tested them fully — they are **raw**, **untamed**, and **seeking their own form**.
Still, perhaps some among you will find value in walking alongside their path.
---
## 🧭 What They Are
* **MechanismPointsLLM**
A model that tries to *sense* critical leverage points inside sequences, and modulates its flow using learned elemental forces: **wood**, **fire**, **earth**, **metal**, and **water**.
* **MechanismFlowLLM**
A more *Daoist* architecture that gently detects mechanism points during attention, adapting its hidden dynamics through element gates without forcing outcomes.
Both models are an attempt to step away from the purely mechanical, and instead **dance with the hidden structure of change**.
---
## 🍃 Key Ideas
* **Mechanism Awareness**:
Some words matter more than others. Detect and honor them.
* **Five Elements Transformations**:
At every step, blend expansion, acceleration, stabilization, refinement, and adaptation.
* **Custom Tokenizer**:
Built to *notice* semantic boundaries, not just slice words statistically.
* **Mechanism-Aware Training**:
The optimizer itself responds to detected leverage points, like a river responding to the shape of stones.
* **Full Local Model**:
PyTorch-based. Runs on a single GPU. No HF dependency. Everything happens in your own little grove.
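The element-gate idea above can be sketched very roughly in plain Python — note that the names, the softmax gating, and the five toy transformations below are my own assumptions for illustration, not the repo's actual implementation:

```python
import math

# Illustrative only: blend five simple transformations of a hidden value
# with softmax weights, one weight per "element". The real model would do
# this per-dimension on tensors; here a single float keeps the idea visible.
ELEMENTS = {
    "wood":  lambda h: h * 1.1,          # expansion
    "fire":  lambda h: h + 0.5,          # acceleration
    "earth": lambda h: h * 0.9,          # stabilization
    "metal": lambda h: round(h, 2),      # refinement
    "water": lambda h: math.tanh(h),     # adaptation
}

def element_gate(h, logits):
    # Softmax over one logit per element, then blend the five outputs.
    exps = {k: math.exp(v) for k, v in logits.items()}
    z = sum(exps.values())
    return sum((exps[k] / z) * fn(h) for k, fn in ELEMENTS.items())

# With equal logits each element contributes 1/5 of the output.
out = element_gate(1.0, {k: 0.0 for k in ELEMENTS})
print(out)
```

With equal logits the gate is just the mean of the five transformed values; learned logits would let the model lean toward one element per mechanism point.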
---
## 📜 Disclaimers
* **I have not tested the full training yet.**
These architectures are visions woven from careful thought, but they have not yet been hardened in the fire of long training.
* **Expect rough edges.**
Like an uncarved block (pu, 樸), the models are simple, but within them lies potential.
* **You may find strange results.**
Or hidden treasures.
---
## 🌌 Why Mechanism Points?
Because in every system, there are moments where small shifts create vast transformations.
Finding them is wisdom.
Acting with them is art.
In language, these are the tokens, the gestures, the subtle pivots that turn streams of meaning.
---
## 📖 Philosophy
The Dao moves not through force, but through alignment with what is.
In that spirit, these models are not meant to "control" text, but to **flow with it** —
to **transform** with awareness, not domination.
---
## 🛠️ Code
https://github.com/Maximilian-Winter/DaoDeCode
Licensed under **Apache 2.0**.
Free for all good purposes. 🌿
---
## 🧙♂️ If You Walk This Path...
* You may need to adjust, prune, or graft.
* You may find new architectures hidden inside.
* You may plant new seeds.
If you do, I'd love to hear of your journey.
---
*(The mechanism points are sharpest when the mind is quiet.)* 🌾🛡️🌊🔥🪨🌳
# #LocalLLaMA #MechanismPoints #Flow #OpenSource #ExperimentalLLM
| 2025-05-07T14:27:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgyp7u/release_mechanismpointsllm_mechanismflowllm/
|
FlowerPotTeaTime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgyp7u
| false | null |
t3_1kgyp7u
|
/r/LocalLLaMA/comments/1kgyp7u/release_mechanismpointsllm_mechanismflowllm/
| false | false |
self
| 0 | null |
After a year of hard work, the first models that run in a decentralized environment are usable (but still somewhat slow) on the Arbius playground. Qwen QwQ 32B and WAI SDXL (NSFW). Come help test the environment so we find all the bugs quicker
| 1 | 2025-05-07T14:37:30 |
https://arbiusplayground.com/chat
|
youdidnotcheckmyname
|
arbiusplayground.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgyyao
| false | null |
t3_1kgyyao
|
/r/LocalLLaMA/comments/1kgyyao/after_a_year_of_hard_work_the_first_models_that/
| false | false |
nsfw
| 1 | null |
|
Is there anyone doing mad scientist style social engineering experiments with LLMs?
| 1 |
[removed]
| 2025-05-07T14:43:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgz3dk/is_there_anyone_doing_mad_scientist_style_social/
|
Cannavor
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgz3dk
| false | null |
t3_1kgz3dk
|
/r/LocalLLaMA/comments/1kgz3dk/is_there_anyone_doing_mad_scientist_style_social/
| false | false |
self
| 1 | null |
Llmlingua in js?
| 1 |
[removed]
| 2025-05-07T14:45:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgz4sa/llmlingua_in_js/
|
letaem
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgz4sa
| false | null |
t3_1kgz4sa
|
/r/LocalLLaMA/comments/1kgz4sa/llmlingua_in_js/
| false | false |
self
| 1 | null |
What hardware to use for home llm server?
| 0 |
I want to build a home server for Home Assistant and also be able to run local LLMs. I plan to use two RTX 3060 12 GB cards. What do you think?
| 2025-05-07T14:45:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgz4xi/what_hardware_to_use_for_home_llm_server/
|
Organic_Farm_2093
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgz4xi
| false | null |
t3_1kgz4xi
|
/r/LocalLLaMA/comments/1kgz4xi/what_hardware_to_use_for_home_llm_server/
| false | false |
self
| 0 | null |
What’s Your Current Daily Driver Model and Setup?
| 14 |
Hey Local gang,
What's your daily driver model these days? Would love to hear about your go to setups, preferred models + quants, and use cases. Just curious to know what's working well for everyone and find some new inspiration!
**My current setup:**
* **Interface:** Ollama + OWUI
* **Models:** Gemma3:27b-fp16 and Qwen3:32b-fp16
* **Hardware:** 4x RTX 3090s + Threadripper 3975WX + 256GB DDR4
* **Use Case:** Enriching scraped data with LLMs for insight extraction and opportunity detection
Thanks for sharing!
| 2025-05-07T14:52:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgzb0c/whats_your_current_daily_driver_model_and_setup/
|
jedsk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzb0c
| false | null |
t3_1kgzb0c
|
/r/LocalLLaMA/comments/1kgzb0c/whats_your_current_daily_driver_model_and_setup/
| false | false |
self
| 14 | null |
Run FLUX.1 losslessly on a GPU with 20GB VRAM
| 137 |
We've released **losslessly compressed versions** of the **12B FLUX.1-dev** and **FLUX.1-schnell** models using **DFloat11**, a compression method that applies entropy coding to BFloat16 weights. This reduces model size by **\~30%** *without changing outputs*.
This brings the models down from **24GB to \~16.3GB**, enabling them to run on a **single GPU with 20GB or more of VRAM**, with only a **few seconds of extra overhead per image**.
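A toy sketch of why entropy coding buys that ~30%: trained weights cluster near zero, so the BF16 exponent byte takes only a narrow band of values and compresses well. The snippet below is a rough analogy (using `math.frexp` exponents of random Gaussian "weights", not real BF16 bit fields):

```python
import math
import random
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy in bits per symbol for a sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)
# Toy stand-in for model weights: roughly normal values near zero.
weights = [random.gauss(0.0, 0.02) for _ in range(10000)]
# frexp's binary exponent plays the role of the BF16 exponent byte here.
exponents = [math.frexp(w)[1] for w in weights if w != 0.0]

h = entropy_bits(exponents)
print(f"exponent entropy ≈ {h:.2f} bits (vs 8 bits stored)")
```

Far fewer than 8 bits of actual information per exponent is exactly the slack a lossless entropy coder can reclaim without touching the decoded values.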
# 🔗 Downloads & Resources
* **Compressed FLUX.1-dev**: [huggingface.co/DFloat11/FLUX.1-dev-DF11](https://huggingface.co/DFloat11/FLUX.1-dev-DF11)
* **Compressed FLUX.1-schnell**: [huggingface.co/DFloat11/FLUX.1-schnell-DF11](https://huggingface.co/DFloat11/FLUX.1-schnell-DF11)
* **Example Code**: [github.com/LeanModels/DFloat11/tree/master/examples/flux.1](https://github.com/LeanModels/DFloat11/tree/master/examples/flux.1)
* **Compressed LLMs (Qwen 3, Gemma 3, etc.)**: [huggingface.co/DFloat11](https://huggingface.co/DFloat11)
* **Research Paper**: [arxiv.org/abs/2504.11651](https://arxiv.org/abs/2504.11651)
**Feedback welcome**! Let me know if you try them out or run into any issues!
| 2025-05-07T14:57:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgzey8/run_flux1_losslessly_on_a_gpu_with_20gb_vram/
|
arty_photography
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzey8
| false | null |
t3_1kgzey8
|
/r/LocalLLaMA/comments/1kgzey8/run_flux1_losslessly_on_a_gpu_with_20gb_vram/
| false | false |
self
| 137 |
{'enabled': False, 'images': [{'id': 'Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=108&crop=smart&auto=webp&s=e049070f7902e23788ed45d55019bb026cdda882', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=216&crop=smart&auto=webp&s=d7b1c07741666a54030deae764a375cb0aea95fb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=320&crop=smart&auto=webp&s=7c1964be5daa8bfeb3818d7e594a6ce78906483b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=640&crop=smart&auto=webp&s=14b7c1fa87a38ad59907e907ccd669cae70b04f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=960&crop=smart&auto=webp&s=12bf746b46d7aa6584cd8adc9d253bcfcdfb4a15', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?width=1080&crop=smart&auto=webp&s=5f78c6570cac86d60ea758e8ea04b27d51b2c581', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Wox_S9sbYW0Xmx9U1fJdMhjinT4PLJ7U6VfKKeagu80.png?auto=webp&s=80624371a9a2d1dddc4dad16884c408aed7c93c0', 'width': 1200}, 'variants': {}}]}
|
Are most of the benchmarks here useless in real life?
| 0 |
I see a lot of benchmarks here regarding tokens per second. But for me it's totally unimportant whether a hardware setup runs at 20, 30, 50, or 180 t/s, because the limiting factor is me: I read slower than 20 t/s. So what's the deal with all these benchmarks? Just for fun, to see whether a 3090 can beat an M4 Max?
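The rough arithmetic behind that claim, assuming a typical silent reading speed of ~250 words per minute and the common heuristic of ~0.75 English words per token (both numbers are assumptions, not measurements):

```python
# Convert human reading speed into tokens per second to compare
# against decode benchmarks.
words_per_minute = 250          # assumption: typical silent reader
words_per_token = 0.75          # assumption: common English heuristic
reading_tokens_per_second = words_per_minute / words_per_token / 60
print(f"~{reading_tokens_per_second:.1f} tokens/s reading speed")
```

So anything much above ~6 t/s already outruns a reader following along live — though throughput still matters for batch jobs, agents, and long reasoning traces nobody reads in full.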
| 2025-05-07T15:02:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgzjzh/are_most_of_the_benchmarks_here_useless_in/
|
ekultrok
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzjzh
| false | null |
t3_1kgzjzh
|
/r/LocalLLaMA/comments/1kgzjzh/are_most_of_the_benchmarks_here_useless_in/
| false | false |
self
| 0 | null |
Mistral-Medium 3 (unfortunately no local support so far)
| 93 | 2025-05-07T15:12:17 |
https://mistral.ai/news/mistral-medium-3
|
pier4r
|
mistral.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzskq
| false | null |
t3_1kgzskq
|
/r/LocalLLaMA/comments/1kgzskq/mistralmedium_3_unfortunately_no_local_support_so/
| false | false | 93 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]}
|
||
New mistral model benchmarks
| 490 | 2025-05-07T15:16:25 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzwe9
| false | null |
t3_1kgzwe9
|
/r/LocalLLaMA/comments/1kgzwe9/new_mistral_model_benchmarks/
| false | false | 490 |
{'enabled': True, 'images': [{'id': '3mtfxG4tSCpxS1_u28qFfjLl1X8KtvbgnjFFNCELPB4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=108&crop=smart&auto=webp&s=92b96bbe9f201ee026fb4944b6d3a0a5a7c3830f', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=216&crop=smart&auto=webp&s=859d84ae92253142dec9cc23b091183c2eabd5de', 'width': 216}, {'height': 322, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=320&crop=smart&auto=webp&s=156c0f6454e2a1387dc9c88edd6f23754ad31822', 'width': 320}, {'height': 644, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=640&crop=smart&auto=webp&s=7a47a4a215c33b3670819e5b09e20d25a73074d7', 'width': 640}, {'height': 966, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=960&crop=smart&auto=webp&s=92b5f62aa0b1a76b1846bf62654820f9111bbc19', 'width': 960}, {'height': 1086, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?width=1080&crop=smart&auto=webp&s=37e92f134a54ea17ce0cce1515ef199a38223b0c', 'width': 1080}], 'source': {'height': 1566, 'url': 'https://preview.redd.it/hrtrvrvnmdze1.jpeg?auto=webp&s=1542bcfc92295fd91618d1d1f901239f60751d01', 'width': 1556}, 'variants': {}}]}
|
|||
Uncensored erotica with a 4080 and 64gb system RAM?
| 1 |
[removed]
| 2025-05-07T15:18:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kgzyks/uncensored_erotica_with_a_4080_and_64gb_system_ram/
|
wtfislandfill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kgzyks
| false | null |
t3_1kgzyks
|
/r/LocalLLaMA/comments/1kgzyks/uncensored_erotica_with_a_4080_and_64gb_system_ram/
| false | false |
nsfw
| 1 | null |
Hardware Advice for Running a Local 30B Model
| 1 |
[removed]
| 2025-05-07T15:35:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh0dg8/hardware_advice_for_running_a_local_30b_model/
|
Quirky_Mess3651
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh0dg8
| false | null |
t3_1kh0dg8
|
/r/LocalLLaMA/comments/1kh0dg8/hardware_advice_for_running_a_local_30b_model/
| false | false |
self
| 1 | null |
Cracking 40% on SWE-bench verified with open source models & agents & open-source synth data
| 303 |
We all know that finetuning & RL work great for getting great LMs for agents -- the problem is where to get the training data!
We've generated 50k+ task instances for 128 popular GitHub repositories, then trained our own LM for SWE-agent. The result? We achieve 40% pass@1 on SWE-bench Verified -- a new SoTA among open source models.
We've open-sourced *everything*, and we're excited to see what you build with it! This includes the agent (SWE-agent), the framework used to generate synthetic task instances (SWE-smith), and our fine-tuned LM (SWE-agent-LM-32B)
| 2025-05-07T15:39:46 |
klieret
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh0hcd
| false | null |
t3_1kh0hcd
|
/r/LocalLLaMA/comments/1kh0hcd/cracking_40_on_swebench_verified_with_open_source/
| false | false | 303 |
{'enabled': True, 'images': [{'id': 'P6yopi0D7iay4zSdbmcG3fi2N_czGzhKOaOp5t-tN0Y', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/4lwtc2sgpdze1.png?width=108&crop=smart&auto=webp&s=fc5f8139e125cf21f3228ac2fd47b5dda0bca7ed', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/4lwtc2sgpdze1.png?width=216&crop=smart&auto=webp&s=3325540610abfbd8e600f85db28a7fb809b607e8', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/4lwtc2sgpdze1.png?width=320&crop=smart&auto=webp&s=d283e1764a37c09aa72504924041f690d9cb21ed', 'width': 320}, {'height': 422, 'url': 'https://preview.redd.it/4lwtc2sgpdze1.png?width=640&crop=smart&auto=webp&s=4f581dfebc0968cbf87949bad4b08918a6afa989', 'width': 640}], 'source': {'height': 584, 'url': 'https://preview.redd.it/4lwtc2sgpdze1.png?auto=webp&s=a73b9e574a3f5d2c6b086114e8046bb3ab1beeb2', 'width': 884}, 'variants': {}}]}
|
||
Seeking Co-Founder (RAG + AI Expert) for Funded SaaS Startup – Neuraltalk AI (neuraltalk.ai)
| 1 |
[removed]
| 2025-05-07T15:47:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh0o5u/seeking_cofounder_rag_ai_expert_for_funded_saas/
|
Overall_Search_3163
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh0o5u
| false | null |
t3_1kh0o5u
|
/r/LocalLLaMA/comments/1kh0o5u/seeking_cofounder_rag_ai_expert_for_funded_saas/
| false | false |
self
| 1 | null |
How can I run AI models with intel graphics xe?
| 1 |
[removed]
| 2025-05-07T16:01:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh10ww/how_can_i_run_ai_models_with_intel_graphics_xe/
|
No_Farmer_495
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh10ww
| false | null |
t3_1kh10ww
|
/r/LocalLLaMA/comments/1kh10ww/how_can_i_run_ai_models_with_intel_graphics_xe/
| false | false |
self
| 1 | null |
GMKtec EVO-X2 First Benchmarks are out
| 1 |
[removed]
| 2025-05-07T16:03:27 |
https://youtu.be/UXjg6Iew9lg?si=spkA1cFgKnyQ6lHB
|
cougz7
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh12ou
| false |
{'oembed': {'author_name': 'jack stone', 'author_url': 'https://www.youtube.com/@jackstone', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/UXjg6Iew9lg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="满血Qwen3 235B本地部署15t/s!Stable Diffusion 3.5 Large文生图本地部署!128G内存8060S最强核显!极摩客EVO-X2 AI Max+ 395迷你主机评测!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/UXjg6Iew9lg/hqdefault.jpg', 'thumbnail_width': 480, 'title': '满血Qwen3 235B本地部署15t/s!Stable Diffusion 3.5 Large文生图本地部署!128G内存8060S最强核显!极摩客EVO-X2 AI Max+ 395迷你主机评测!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kh12ou
|
/r/LocalLLaMA/comments/1kh12ou/gmktec_evox2_first_benchmarks_are_out/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=108&crop=smart&auto=webp&s=22b958fe376f7e1c3411beeeb6d117ae190a2ee0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=216&crop=smart&auto=webp&s=4a09b8f2a4209bdd6fd3daf8cbe00bb30e20291a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?width=320&crop=smart&auto=webp&s=fe66170dbcf8dbf0a94a369c7b2b8dc1edf4c0f6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qH485SJx4calush0vagqjiZmzvZENloL6dDm3U4FBkU.jpg?auto=webp&s=3d90d6a76784db8069aadc76c3685ebb14858a1c', 'width': 480}, 'variants': {}}]}
|
|
What happened with aider leaderboard? Not updated for so long
| 3 |
Is Paul OK?
| 2025-05-07T16:04:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh13ip/what_happened_with_aider_leaderboard_not_updated/
|
robertpiosik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh13ip
| false | null |
t3_1kh13ip
|
/r/LocalLLaMA/comments/1kh13ip/what_happened_with_aider_leaderboard_not_updated/
| false | false |
self
| 3 | null |
Question re: enterprise use of LLM
| 0 |
Hello,
I'm interested in running an LLM, something like Qwen 3 235B at 8 bits, on a server and giving employees access to it. I'm not sure it makes sense to pay monthly for a dedicated VM; a serverless model might fit better.
On my local machine I run LM Studio but what I want is something that does the following:
- Receives and batches requests from users. I imagine at first we'll just have sufficient VRAM to run a forward pass at a time, so we would have to process each request individually as they come in.
- Searches for relevant information. I understand this is the harder part. I doubt we can RAG all our data. Is there a way to run semantic search automatically and inject the results into the context window? I assume there must be a way to set up a data connector to our data; it will all be through the same cloud provider.
- web search. I'm not particularly aware of a way to do this. If it's not possible that's ok, we also have an enterprise license to OpenAI so this is separate in many ways.
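The first bullet — processing requests one at a time while VRAM only allows a single forward pass — can be sketched with a simple serialized work queue. `generate()` here is a hypothetical stand-in for whatever single-request inference call the backend exposes:

```python
import queue
import threading

# Hypothetical stand-in for one forward pass on the GPU.
def generate(prompt: str) -> str:
    return f"response to: {prompt}"

requests = queue.Queue()

def worker():
    # Serialize GPU use: handle exactly one request at a time, in arrival order.
    while True:
        prompt, reply_box = requests.get()
        reply_box.put(generate(prompt))
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

def ask(prompt: str) -> str:
    # Each caller blocks on its own reply box until the worker gets to it.
    box = queue.Queue()
    requests.put((prompt, box))
    return box.get()

print(ask("hello"))  # → response to: hello
```

Once you have more VRAM, the same worker loop is the natural place to pull several queued prompts at once and batch them; until then this at least keeps concurrent users from colliding.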
| 2025-05-07T16:09:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh18h9/question_re_enterprise_use_of_llm/
|
chespirito2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh18h9
| false | null |
t3_1kh18h9
|
/r/LocalLLaMA/comments/1kh18h9/question_re_enterprise_use_of_llm/
| false | false |
self
| 0 | null |
AI outputs how it would write the story, not the actual story I want it to write. What am I doing wrong?
| 1 |
[removed]
| 2025-05-07T17:19:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh2yyq/ai_outputs_how_it_would_write_the_story_not_the/
|
wtfislandfill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh2yyq
| false | null |
t3_1kh2yyq
|
/r/LocalLLaMA/comments/1kh2yyq/ai_outputs_how_it_would_write_the_story_not_the/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=108&crop=smart&auto=webp&s=3fece9e110583032582fc46a503ab711922bbd71', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=216&crop=smart&auto=webp&s=7195e52df6c5d3d449de5ea3259a8968367f318d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=320&crop=smart&auto=webp&s=f8eb80e941e99cad050504e48bab4c0ac8d0d9d0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=640&crop=smart&auto=webp&s=5de14d003eb1133a65f66cb7f1ebc1e7ccb8cc6c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=960&crop=smart&auto=webp&s=a638685287afefde0961f570b58622be84a86792', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?width=1080&crop=smart&auto=webp&s=0766f528fa1abbfdab17076a95adb4d823765166', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yjZWdf39UiEnRJPe33QGccn4kwjknWjyeepPJB310F0.png?auto=webp&s=a95275bf78fd9169410709857fdeaceb2de7ab6c', 'width': 1200}, 'variants': {}}]}
|
Language diffusion document unredaction?
| 0 |
This seems like the perfect use case for language diffusion models- am I crazy?
| 2025-05-07T17:30:23 |
mnt_brain
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh38s3
| false | null |
t3_1kh38s3
|
/r/LocalLLaMA/comments/1kh38s3/language_diffusion_document_unredaction/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'RCQFx0ENgTP36Xgceojnci4GFsZuEvWAKqdhhUkv5xo', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=108&crop=smart&auto=webp&s=2dd7e081e222567f58664f8b3d20399b9e3c5251', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=216&crop=smart&auto=webp&s=2b24a912d11768749e36e9ab4e9dce0e959bcc95', 'width': 216}, {'height': 331, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=320&crop=smart&auto=webp&s=b2ce239e8dd7c432aa3e00115ff3564505b629ef', 'width': 320}, {'height': 663, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=640&crop=smart&auto=webp&s=81a4ca933e6391db8e1d7c9262fe13ed5e1c21f8', 'width': 640}, {'height': 995, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=960&crop=smart&auto=webp&s=9aec7a29fbdc7f1374c5095f7b6865a969df6606', 'width': 960}, {'height': 1120, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?width=1080&crop=smart&auto=webp&s=22669e751b7d1f80963371ba771bf0769352cf68', 'width': 1080}], 'source': {'height': 1369, 'url': 'https://preview.redd.it/bau4ckikaeze1.jpeg?auto=webp&s=e64ab83050709bdad0cd1c8d695ce71fbd4401a3', 'width': 1320}, 'variants': {}}]}
|
||
Did anyone try out Mistral Medium 3?
| 114 |
I briefly tried Mistral Medium 3 on OpenRouter, and I feel its performance might not be as good as Mistral's blog claims. (The video shows the best result out of the 5 shots I ran.)
Additionally, I tested having it recognize and convert the benchmark image from the blog into JSON. However, it felt like it was just randomly converting things, and not a single field matched up. Could it be that its input resolution is very low, causing compression and therefore making it unable to recognize the text in the image?
Also, I don't quite understand why it uses 5-shot in the GPQA Diamond and MMLU Pro benchmarks. Is that the default number of shots for these tests?
| 2025-05-07T17:38:36 |
https://v.redd.it/6w9w0rl2beze1
|
Dr_Karminski
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh3g7f
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6w9w0rl2beze1/DASHPlaylist.mpd?a=1749231530%2CZTY2MjRmM2EwYzExOTllMjBiZGRjZjg3N2U5NjY0YzdkOGZjZmI0ZTBlMzRlNjAyNWQ5YjEwZGQyMWJiZmYzNQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/6w9w0rl2beze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6w9w0rl2beze1/HLSPlaylist.m3u8?a=1749231530%2CNTVkOWRhZWJkZDlhMWE5NmQ1ODg3ZGRkODM2YzZmZDlkYmYyNzczYzI2YTMwOGQyZjUzZDlmMDQ2NGJhYzE2Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6w9w0rl2beze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kh3g7f
|
/r/LocalLLaMA/comments/1kh3g7f/did_anyone_try_out_mistral_medium_3/
| false | false | 114 |
{'enabled': False, 'images': [{'id': 'Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=108&crop=smart&format=pjpg&auto=webp&s=0deb3eefd9af1360653eab51820217505e5df37e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=216&crop=smart&format=pjpg&auto=webp&s=b7f784167fd02bd9208fe0b34db8f99923205287', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=320&crop=smart&format=pjpg&auto=webp&s=cf7da030a6041ff696ec1ff5ba3cbdd4060a9a1d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=640&crop=smart&format=pjpg&auto=webp&s=013b4d40d57c3f673911293573fd59625e1537a1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=960&crop=smart&format=pjpg&auto=webp&s=00760ded3ba6a30158aa700e78489c5257a8bb23', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4945387d0de79a72d992523d87d07aa9f1cede9c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Z3k2eW51bjJiZXplMcUDN_ixsC3ErmhxAmaSJ8XxFt_ddYdhD_A2seyNDJhw.png?format=pjpg&auto=webp&s=4c4bcdfb607cab7d1cc4ffe80e87ad21392ce2ba', 'width': 1920}, 'variants': {}}]}
|
|
LLMs play Wikipedia Race
| 1 |
[https://huggingface.co/spaces/HuggingFaceTB/wikiracing-llms](https://huggingface.co/spaces/HuggingFaceTB/wikiracing-llms)
| 2025-05-07T18:21:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4jg6/llms_play_wikipedia_race/
|
loubnabnl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4jg6
| false | null |
t3_1kh4jg6
|
/r/LocalLLaMA/comments/1kh4jg6/llms_play_wikipedia_race/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=108&crop=smart&auto=webp&s=90b29b0e88d13821dda8aa52a73954d4a953b073', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=216&crop=smart&auto=webp&s=9802741c81f2dc46d7614d780d18598a5eb3fa50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=320&crop=smart&auto=webp&s=f9349119f3ad7a9aa4d46d15a9032921c95b01c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=640&crop=smart&auto=webp&s=6fd8493f4cc69226985b23e4e6a1b44fc2547926', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=960&crop=smart&auto=webp&s=d29c95f18f3aff44e0e062546f989bd2b21f520f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=1080&crop=smart&auto=webp&s=185b9d7dd3dfca333e9dfa6935e55cf47a369633', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?auto=webp&s=aed5d83bf5716437d39d3cbc27d88763dad97842', 'width': 1200}, 'variants': {}}]}
|
Where are you hosting your fine tuned model?
| 0 |
Say I have a fine tuned model, which I want to host for inference. Which provider would you recommend?
As an indie developer (making https://saral.club if anyone is interested), I can't go for self hosting gpu, as it's a huge upfront investment (even the T4 series).
| 2025-05-07T18:21:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4joc/where_are_you_hosting_your_fine_tuned_model/
|
m_o_n_t_e
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4joc
| false | null |
t3_1kh4joc
|
/r/LocalLLaMA/comments/1kh4joc/where_are_you_hosting_your_fine_tuned_model/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'nGOwsSk6OargUnA7UuqrQQwDrFzZh8iYf_J8BRiZ4HI', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=108&crop=smart&auto=webp&s=9e61655f473f086340a90499c679e43ccff061ec', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=216&crop=smart&auto=webp&s=ddddd70e1e0d4f01fbdec04877a67085f3c7bf90', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=320&crop=smart&auto=webp&s=344bbf40f46a002aba6c9262f2c45e6c50cbfff8', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=640&crop=smart&auto=webp&s=444d2b776ae7b5e831514a2bd5353099f8f65ce1', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=960&crop=smart&auto=webp&s=815d5f21f2c0360dfb42e8317e22e7700011de56', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?width=1080&crop=smart&auto=webp&s=8dfb7ede9ce8ea4ba96677d2e6402984072bf6dd', 'width': 1080}], 'source': {'height': 974, 'url': 'https://external-preview.redd.it/PcAnrbOs9ic5AqbcdJX1U1y7qMlCBkdZib4V7ZeIxD8.jpg?auto=webp&s=dfbb7e78577d7d8115122a7141b754fd70375dca', 'width': 1580}, 'variants': {}}]}
|
LLMs play Wikipedia race
| 19 | 2025-05-07T18:23:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4lbl/llms_play_wikipedia_race/
|
loubnabnl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4lbl
| false | null |
t3_1kh4lbl
|
/r/LocalLLaMA/comments/1kh4lbl/llms_play_wikipedia_race/
| false | false | 19 |
{'enabled': False, 'images': [{'id': 'afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=108&crop=smart&auto=webp&s=90b29b0e88d13821dda8aa52a73954d4a953b073', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=216&crop=smart&auto=webp&s=9802741c81f2dc46d7614d780d18598a5eb3fa50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=320&crop=smart&auto=webp&s=f9349119f3ad7a9aa4d46d15a9032921c95b01c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=640&crop=smart&auto=webp&s=6fd8493f4cc69226985b23e4e6a1b44fc2547926', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=960&crop=smart&auto=webp&s=d29c95f18f3aff44e0e062546f989bd2b21f520f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?width=1080&crop=smart&auto=webp&s=185b9d7dd3dfca333e9dfa6935e55cf47a369633', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/afJBw1uQ95TJ6CIORq1av9mj6DnDp__NkJYkqs5ol3g.png?auto=webp&s=aed5d83bf5716437d39d3cbc27d88763dad97842', 'width': 1200}, 'variants': {}}]}
|
||
Building a machine for local llm use for researchers have some questions
| 1 |
[removed]
| 2025-05-07T18:24:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4mcf/building_a_machine_for_local_llm_use_for/
|
much_prof_eduit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4mcf
| false | null |
t3_1kh4mcf
|
/r/LocalLLaMA/comments/1kh4mcf/building_a_machine_for_local_llm_use_for/
| false | false |
self
| 1 | null |
No Single Open-source Model Can Compete with even gpt-4o-2024-08-06!!!
| 0 |
This is insane: I am testing a RAG system, trying different series of LLM models for rephrasing, reranking, answering, etc., from closed ChatGPT and open-source Ollama, including gemma3 series, qwen3 series (I tried 235b), deepseek-r1 series, llama3.3, phi4, etc.
Unfortunately, I found a crucial reality: NO SINGLE OPEN-SOURCE MODEL CAN COMPETE WITH EVEN gpt-4o-2024-08-06!!! The open-source models either respond crazily slow or hallucinate or just do not follow what I asked. They are all garbage. Feeling helpless. Can't we just have access to an open-source model that can compete with the GPT-4o released more than 9 months ago? In terms of performance and response speed?
PLEASE PLEASE PLEASE prove me wrong. I wish I was wrong. Today is May 7th, 2025. OMG.
| 2025-05-07T18:35:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4w9p/no_single_opensource_model_can_compete_with_even/
|
Great-Reception447
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4w9p
| false | null |
t3_1kh4w9p
|
/r/LocalLLaMA/comments/1kh4w9p/no_single_opensource_model_can_compete_with_even/
| false | false |
self
| 0 | null |
Simplest way to ask llm to build query on 200 fields table
| 1 |
[removed]
| 2025-05-07T18:36:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh4wip/simplest_way_to_ask_llm_to_build_query_on_200/
|
Unlikely_Anybody786
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh4wip
| false | null |
t3_1kh4wip
|
/r/LocalLLaMA/comments/1kh4wip/simplest_way_to_ask_llm_to_build_query_on_200/
| false | false |
self
| 1 | null |
Qwen 3 evaluations
| 271 |
Finally finished my extensive Qwen 3 evaluations across a range of formats and quantisations, focusing on MMLU-Pro (Computer Science).
A few take-aways stood out - especially for those interested in local deployment and performance trade-offs:
1️⃣ Qwen3-235B-A22B (via Fireworks API) tops the table at 83.66% with ~55 tok/s.
2️⃣ But the 30B-A3B Unsloth quant delivered 82.20% while running locally at ~45 tok/s and with zero API spend.
3️⃣ The same Unsloth build is ~5x faster than Qwen's Qwen3-32B, which scores 82.20% as well yet crawls at <10 tok/s.
4️⃣ On Apple silicon, the 30B MLX port hits 79.51% while sustaining ~64 tok/s - arguably today's best speed/quality trade-off for Mac setups.
5️⃣ The 0.6B micro-model races above 180 tok/s but tops out at 37.56% - that's why it's not even on the graph (50 % performance cut-off).
All local runs were done with @lmstudio on an M4 MacBook Pro, using Qwen's official recommended settings.
Conclusion: Quantised 30B models now get you ~98 % of frontier-class accuracy - at a fraction of the latency, cost, and energy. For most local RAG or agent workloads, they're not just good enough - they're the new default.
Well done, @Alibaba_Qwen - you really whipped the llama's ass! And to @OpenAI: for your upcoming open model, please make it MoE, with toggleable reasoning, and release it in many sizes. This is the future!
Source: https://x.com/wolframrvnwlf/status/1920186645384478955?s=46
| 2025-05-07T18:48:14 |
ResearchCrafty1804
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh579e
| false | null |
t3_1kh579e
|
/r/LocalLLaMA/comments/1kh579e/qwen_3_evaluations/
| false | false |
default
| 271 |
{'enabled': True, 'images': [{'id': '8f8g366goeze1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=108&crop=smart&auto=webp&s=b8e8a247e770308fb66311ff7809f66cbe76f59a', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=216&crop=smart&auto=webp&s=8178e9e88cbc6a5b711341c052130dc2f9916897', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=320&crop=smart&auto=webp&s=ce49a95335cac8ff0a8be123eafd3201592fab09', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=640&crop=smart&auto=webp&s=dd68bf0ab81adb00446d201fbee1d90070c68389', 'width': 640}, {'height': 508, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=960&crop=smart&auto=webp&s=25d79190fbe692c718d63f6ad010381b7c759777', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?width=1080&crop=smart&auto=webp&s=b448c0926ac2d8730e41de761d1e0bdaefc6d673', 'width': 1080}], 'source': {'height': 2170, 'url': 'https://preview.redd.it/8f8g366goeze1.jpeg?auto=webp&s=487ed0e431937d529fa33c77aec021a57c525c25', 'width': 4096}, 'variants': {}}]}
|
|
Speeds of LLMs running on an AMD AI Max+ 395 128GB.
| 31 |
Here's a YouTube video where the creator runs a variety of LLM models on an HP G1A, which has a power-limited version of the AMD AI Max+ 395. From the video you can see the GPU uses 70 watts. ETA Prime has shown that the yet-to-be-revealed mini-PC he's using can go up to 120-130 watts. The numbers seen in this video are not memory-bandwidth limited, so they must be compute limited. Thus the extra TDP of the mini-PC version of the Max+ should allow it to have more compute, and the LLMs should have a higher token count.
The tests this person does are less than ideal. He's using ollama and really short prompts and thus short context. But it is what it is. Also, he's seeing that the system RAM use matches the GPU RAM use when he loads a model, and thus that's limiting him to 64GB of "VRAM". I wonder how old the version of llama.cpp that ollama is using is, since that was a problem with llama.cpp. I've complained about it in the past, but that was months ago and it has since been fixed.
Anyways. Enjoy.
https://www.youtube.com/watch?v=-HJ-VipsuSk
| 2025-05-07T18:54:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh5cyt/speeds_of_llms_running_on_an_amd_ai_max_395_128gb/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5cyt
| false | null |
t3_1kh5cyt
|
/r/LocalLLaMA/comments/1kh5cyt/speeds_of_llms_running_on_an_amd_ai_max_395_128gb/
| false | false |
self
| 31 | null |
Beelink Launches GTR9 Pro And GTR9 AI Mini PCs, Featuring AMD Ryzen AI Max+ 395 And Up To 128 GB RAM
| 41 | 2025-05-07T18:57:06 |
https://wccftech.com/beelink-launches-gtr9-pro-and-gtr9-mini-pcs/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5f4q
| false | null |
t3_1kh5f4q
|
/r/LocalLLaMA/comments/1kh5f4q/beelink_launches_gtr9_pro_and_gtr9_ai_mini_pcs/
| false | false |
default
| 41 |
{'enabled': False, 'images': [{'id': '5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=108&crop=smart&auto=webp&s=2206d5f9d8f64a54a3b2695b3b642b8cb6e6ccbb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=216&crop=smart&auto=webp&s=ce3260ad48bc04466ebe63cdaf162d55e3fc9c99', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=320&crop=smart&auto=webp&s=0e3a75474e15979b304b764a77406fc12cd0d035', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=640&crop=smart&auto=webp&s=0822aa3d4e06f9272257151f7579577e8ecd98ba', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=960&crop=smart&auto=webp&s=cc29b4608724447a238b1a744e4f369aa8094f29', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?width=1080&crop=smart&auto=webp&s=6ea7f7489dcfa0b686434185b0363f5fa5162624', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5PkORNAHF4mCCL5YAzubG_UIVEwPjYPSSg6fvkNboCI.png?auto=webp&s=5743bb560c4bbeb5226c360f119c924adbf70147', 'width': 1920}, 'variants': {}}]}
|
|
Opinion of best model for common conversations
| 1 |
[removed]
| 2025-05-07T19:01:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh5j1z/opinion_of_best_model_for_common_conversations/
|
AltruisticRoom1093
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5j1z
| false | null |
t3_1kh5j1z
|
/r/LocalLLaMA/comments/1kh5j1z/opinion_of_best_model_for_common_conversations/
| false | false |
self
| 1 | null |
Trying out the Ace-Step Song Generation Model
| 38 |
So, I got Gemini to whip up some lyrics for an alphabet song, and then I used ACE-Step-v1-3.5B to generate a rock-style track at 105bpm.
Give it a listen – how does it sound to you?
My feeling is that some of the transitions are still a bit off, and there are issues with the pronunciation of individual lyrics. But on the whole, it's not bad! I reckon it'd be pretty smooth for making those catchy, repetitive tunes (like that "Shawarma Legend" kind of vibe).
This was generated on HuggingFace, took about 50 seconds.
What are your thoughts?
| 2025-05-07T19:15:23 |
https://v.redd.it/dfm1hq67teze1
|
Dr_Karminski
|
/r/LocalLLaMA/comments/1kh5vrx/trying_out_the_acestep_song_generation_model/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5vrx
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dfm1hq67teze1/DASHPlaylist.mpd?a=1749366927%2CZDg0YTM0NjcyNzI1OGQwNDg4ZTViOTE4ZjdhNDMxY2I5ZTM1N2Y4YzJlNmVkYzZjYWNkNmZhOTA4NDI2YzFiYQ%3D%3D&v=1&f=sd', 'duration': 120, 'fallback_url': 'https://v.redd.it/dfm1hq67teze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/dfm1hq67teze1/HLSPlaylist.m3u8?a=1749366927%2COTg4M2IyNjNhYTVlYWZmMWQwNzQxMGM0ZDgyYzkzODY0NTI1ODYwNGY5YjQ5ODQ2NjdjNGVlOWIzNzlhMzQ0NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dfm1hq67teze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kh5vrx
|
/r/LocalLLaMA/comments/1kh5vrx/trying_out_the_acestep_song_generation_model/
| false | false | 38 |
{'enabled': False, 'images': [{'id': 'bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b11684088b8e29ffe83b8e7fc5c4fe83a128752', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=216&crop=smart&format=pjpg&auto=webp&s=2c957e86a46ecf5358f143ae2b501ba05b5edf37', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=320&crop=smart&format=pjpg&auto=webp&s=3b7ff9b1340ad4f1ab4e68ca7ecb1c730e6cfb01', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=640&crop=smart&format=pjpg&auto=webp&s=67122777aea00bfc8efca9b25342dd9d865befff', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=960&crop=smart&format=pjpg&auto=webp&s=3389abbc47fa55843cfd483460761b76f269c924', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0c5389c2c5cc5ac260f3a0c30eee262be95941de', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bDY3Zm5xNjd0ZXplMfhj0slUHjrXE-UilZ0dRUIJzmh3kn39RuiWSQcdWvp9.png?format=pjpg&auto=webp&s=9e6bdbfee0414afbf5362618bdac71e4346e1c57', 'width': 1920}, 'variants': {}}]}
|
|
ChatGPT is sharing data with Grok huh?
| 1 |
[removed]
| 2025-05-07T19:15:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh5vwo/chatgpt_is_sharing_data_with_grok_huh/
|
xtended2l
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5vwo
| false | null |
t3_1kh5vwo
|
/r/LocalLLaMA/comments/1kh5vwo/chatgpt_is_sharing_data_with_grok_huh/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4.png?width=108&crop=smart&auto=webp&s=0ab9177035d70b697d1b08a6689b8484f5817153', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4.png?width=216&crop=smart&auto=webp&s=7c2d86f98215283cf9ec46a583eb428d97d9a227', 'width': 216}, {'height': 210, 'url': 'https://external-preview.redd.it/ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4.png?width=320&crop=smart&auto=webp&s=6078309e8e44d8d8e8cb5be19229a674f57828bc', 'width': 320}, {'height': 421, 'url': 'https://external-preview.redd.it/ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4.png?width=640&crop=smart&auto=webp&s=202f2b4bb01a8fef6dca2652f8b31e0abd6964e4', 'width': 640}], 'source': {'height': 536, 'url': 'https://external-preview.redd.it/ONFpNaSg01wr9FsfkFfQs6RgBufYJT0hrGlhV_m-rM4.png?auto=webp&s=4125cad8a7cba6576646f4d2491a9db24ccf0af3', 'width': 814}, 'variants': {}}]}
|
|
Tiny Models, Local Throttles: Exploring My Local AI Dev Setup
| 0 |
Hi folks, I've been tinkering with local models for a few months now, and wrote a starter/setup guide to encourage more folks to do the same. Feedback and suggestions welcome.
What has your experience working with local SLMs been like?
| 2025-05-07T19:15:56 |
https://blog.nilenso.com/blog/2025/05/06/local-llm-setup/
|
kirang89
|
blog.nilenso.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5w9a
| false | null |
t3_1kh5w9a
|
/r/LocalLLaMA/comments/1kh5w9a/tiny_models_local_throttles_exploring_my_local_ai/
| false | false |
default
| 0 | null |
Are they sharing data? :)
| 1 |
[removed]
| 2025-05-07T19:17:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh5xmv/are_they_sharing_data/
|
xtended2l
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh5xmv
| false | null |
t3_1kh5xmv
|
/r/LocalLLaMA/comments/1kh5xmv/are_they_sharing_data/
| false | false |
self
| 1 | null |
Qwen3 MMLU-Pro Computer Science LLM Benchmark Results
| 92 |
Finally finished my extensive **Qwen 3 evaluations** across a range of formats and quantisations, focusing on **MMLU-Pro** (Computer Science).
A few take-aways stood out - especially for those interested in local deployment and performance trade-offs:
1. **Qwen3-235B-A22B** (via Fireworks API) tops the table at **83.66%** with \~55 tok/s.
2. But the **30B-A3B Unsloth** quant delivered **82.20%** while running locally at \~45 tok/s and with zero API spend.
3. The same Unsloth build is \~5x faster than Qwen's **Qwen3-32B**, which scores **82.20%** as well yet crawls at <10 tok/s.
4. On Apple silicon, the **30B MLX** port hits **79.51%** while sustaining \~64 tok/s - arguably today's best speed/quality trade-off for Mac setups.
5. The **0.6B** micro-model races above 180 tok/s but tops out at **37.56%** \- that's why it's not even on the graph (50 % performance cut-off).
All local runs were done with LM Studio on an M4 MacBook Pro, using Qwen's official recommended settings.
**Conclusion:** Quantised 30B models now get you \~98 % of frontier-class accuracy - at a fraction of the latency, cost, and energy. For most local RAG or agent workloads, they're not just good enough - they're the new default.
Well done, Alibaba/Qwen - you really whipped the llama's ass! And to OpenAI: for your upcoming open model, please make it MoE, with toggleable reasoning, and release it in many sizes. *This* is the future!
| 2025-05-07T19:43:27 |
WolframRavenwolf
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh6kh3
| false | null |
t3_1kh6kh3
|
/r/LocalLLaMA/comments/1kh6kh3/qwen3_mmlupro_computer_science_llm_benchmark/
| false | false |
default
| 92 |
{'enabled': True, 'images': [{'id': '3yuv5m5qxeze1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=108&crop=smart&auto=webp&s=34341791bb18538942f579d167e54c45d5bccf6b', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=216&crop=smart&auto=webp&s=eceda1ad7d8d2ec3abf3ba20bc29485261281f06', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=320&crop=smart&auto=webp&s=eb37b8e2328fcc99e90eebeb66dfdaba5d2e5237', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=640&crop=smart&auto=webp&s=3ed0dfa07bd3b2e9b138176b2104e26c7a51e6e4', 'width': 640}, {'height': 508, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=960&crop=smart&auto=webp&s=58754db5895a38a729f51648e769db0755dbcdf5', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?width=1080&crop=smart&auto=webp&s=c96629c05ec10e461ae63278c2a8c05dfd2bf020', 'width': 1080}], 'source': {'height': 2369, 'url': 'https://preview.redd.it/3yuv5m5qxeze1.png?auto=webp&s=dd43522d965d246207cd9bc1f2cb0498221b0613', 'width': 4471}, 'variants': {}}]}
|
|
Wait, can I abuse huggingface for storage for free??
| 0 |
I fine tune stupid models, like self aware or psychopath Gemma, or angry caps lock Gemma... They take a lot of space so I have to delete them. But why not just publish to huggingface? Can I flood them with terabytes of AI models for free??????
| 2025-05-07T19:51:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh6rie/wait_can_i_abuse_huggingface_for_storage_for_free/
|
Osama_Saba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh6rie
| false | null |
t3_1kh6rie
|
/r/LocalLLaMA/comments/1kh6rie/wait_can_i_abuse_huggingface_for_storage_for_free/
| false | false |
self
| 0 | null |
Best way to reconstruct .py file from several screenshots
| 0 |
I have several screenshots of some code files I would like to reconstruct.
I'm running open-webui as my frontend for Ollama.
I understand that I will need some form of OCR and a model to interpret that output and reconstruct the original file.
Has anyone got experience of similar and if so, what models did you use?
| 2025-05-07T20:08:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh773p/best_way_to_reconstruct_py_file_from_several/
|
thetobesgeorge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh773p
| false | null |
t3_1kh773p
|
/r/LocalLLaMA/comments/1kh773p/best_way_to_reconstruct_py_file_from_several/
| false | false |
self
| 0 | null |
Kurdish Sorani TTS
| 0 |
Hi, I found this great Kurdish Sorani TTS model for free!
Let me know what you think?
| 2025-05-07T20:09:55 |
https://www.kurdishtts.com
|
The_Heaven_Dragon
|
kurdishtts.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh782h
| false | null |
t3_1kh782h
|
/r/LocalLLaMA/comments/1kh782h/kurdish_sorani_tts/
| false | false |
default
| 0 | null |
I pretrained a 0.9b llm and open sourced it
| 1 |
[removed]
| 2025-05-07T20:29:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh7pn4/i_pretrained_a_09b_llm_and_open_sourced_it/
|
Level-Poem7037
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh7pn4
| false | null |
t3_1kh7pn4
|
/r/LocalLLaMA/comments/1kh7pn4/i_pretrained_a_09b_llm_and_open_sourced_it/
| false | false |
self
| 1 | null |
I pretrained a 0.9b llm and open sourced it
| 1 |
[removed]
| 2025-05-07T20:31:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh7rck/i_pretrained_a_09b_llm_and_open_sourced_it/
|
Level-Poem7037
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh7rck
| false | null |
t3_1kh7rck
|
/r/LocalLLaMA/comments/1kh7rck/i_pretrained_a_09b_llm_and_open_sourced_it/
| false | false |
self
| 1 | null |
I trained a llm end to end from scratch.
| 1 |
[removed]
| 2025-05-07T20:37:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh7wrm/i_trained_a_llm_end_to_end_from_scratch/
|
Level-Poem7037
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh7wrm
| false | null |
t3_1kh7wrm
|
/r/LocalLLaMA/comments/1kh7wrm/i_trained_a_llm_end_to_end_from_scratch/
| false | false |
self
| 1 | null |
New benchmark for guard models?
| 1 |
[removed]
| 2025-05-07T20:38:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh7x9e/new_benchmark_for_guard_models/
|
ARCHLucifer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh7x9e
| false | null |
t3_1kh7x9e
|
/r/LocalLLaMA/comments/1kh7x9e/new_benchmark_for_guard_models/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '7fEzEYCsqPITn5Q8tLBIH4SJDuWqPUyr1ft7qzXnhL0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=108&crop=smart&auto=webp&s=ed4296fa0a53aed10d664ea7f0c26cc6a10547f8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=216&crop=smart&auto=webp&s=349273d83a3da143758ac2e4a782ee187680e210', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=320&crop=smart&auto=webp&s=5c62458b1f6507a8478fb34a980214794ef1366e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=640&crop=smart&auto=webp&s=e745d75cd015b904b3553c474a05465b227273b5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=960&crop=smart&auto=webp&s=39d79c579dcbb3114b096dd16430a401afa6aae0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?width=1080&crop=smart&auto=webp&s=07b64a1c716fc245d19afd64c08d452f314c2f3e', 'width': 1080}], 'source': {'height': 1152, 'url': 'https://external-preview.redd.it/dhaC748k1Uj0UPyOIdz7NqNmjlNGbQZ8zReZPK6EXtE.jpg?auto=webp&s=c480ff834cc53775f119c0dae4973a0ae417bbd5', 'width': 2048}, 'variants': {}}]}
|
I trained a llm from scratch.
| 1 |
[removed]
| 2025-05-07T20:40:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh7zmd/i_trained_a_llm_from_scratch/
|
Level-Poem7037
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh7zmd
| false | null |
t3_1kh7zmd
|
/r/LocalLLaMA/comments/1kh7zmd/i_trained_a_llm_from_scratch/
| false | false |
self
| 1 | null |
New benchmark for guard models?
| 1 |
[removed]
| 2025-05-07T20:41:25 |
https://x.com/whitecircle_ai/status/1920094991960997998
|
ARCHLucifer
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh802x
| false | null |
t3_1kh802x
|
/r/LocalLLaMA/comments/1kh802x/new_benchmark_for_guard_models/
| false | false |
default
| 1 | null |
dyad: free, local open-source lovable/bolt/v0 alternative - now with LM Studio support!
| 1 |
[removed]
| 2025-05-07T20:45:58 |
https://v.redd.it/tcd4plvm8fze1
|
wwwillchen
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh843c
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tcd4plvm8fze1/DASHPlaylist.mpd?a=1749242771%2CMTFjMjdmNWUxNGI1MjY4MTVlYWQ2MTcxMGE2NWVkYzNlOTJkYWM4MDExOTQ1M2Q3ZDVkZjBlYTU2MGI4Zjg0NQ%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/tcd4plvm8fze1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/tcd4plvm8fze1/HLSPlaylist.m3u8?a=1749242771%2COWVmNTUyODVkMTAxYThmNTI4MTllNzMyNDg1NTZkOGMyMDFiMzk5Y2FmNzhlNTU2MWUyZjExZmRlN2JhZWE0Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tcd4plvm8fze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1664}}
|
t3_1kh843c
|
/r/LocalLLaMA/comments/1kh843c/dyad_free_local_opensource_lovableboltv0/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=108&crop=smart&format=pjpg&auto=webp&s=111fa8f1b8255c31625af89fff222d82bab43bd3', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=216&crop=smart&format=pjpg&auto=webp&s=de6d5c9c35a5f031226bdfbd1136ac4ff8c2e804', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=320&crop=smart&format=pjpg&auto=webp&s=3197f8d9dac54faa9bb506da2cbbca341b775cdf', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=640&crop=smart&format=pjpg&auto=webp&s=88bb7e85d26a7f3d76375b36eefd2c6d1778da14', 'width': 640}, {'height': 623, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=960&crop=smart&format=pjpg&auto=webp&s=78b97adc232ae02736a0fbf16b231f959bcacc7c', 'width': 960}, {'height': 700, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7e52194c0896eb365bd3d680fc6bf47a44fea8ee', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YTE2ZHRtdm04ZnplMalSRdSjZDk5Owp7iJ9s16GSC2RQbLSz7BDzGRuwxwcR.png?format=pjpg&auto=webp&s=593c2bdc9ab34958a051bf0ea25929f6af7fbf48', 'width': 1664}, 'variants': {}}]}
|
|
What would be the level of a SOTA "Indie" LLM?
| 1 |
[removed]
| 2025-05-07T21:11:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh8qvj/what_would_be_the_level_of_a_sota_indie_llm/
|
sebastianmicu24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh8qvj
| false | null |
t3_1kh8qvj
|
/r/LocalLLaMA/comments/1kh8qvj/what_would_be_the_level_of_a_sota_indie_llm/
| false | false |
self
| 1 | null |
What is the current SOTA LLM trained from scratch only on consumer hardware, and what size does it have?
| 1 |
[removed]
| 2025-05-07T21:13:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh8ss4/what_is_the_current_sota_llm_trained_from_scratch/
|
sebastianmicu24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh8ss4
| false | null |
t3_1kh8ss4
|
/r/LocalLLaMA/comments/1kh8ss4/what_is_the_current_sota_llm_trained_from_scratch/
| false | false |
self
| 1 | null |
What is the current SOTA LLM trained from scratch only on consumer hardware, and what size does it have?
| 1 |
[removed]
| 2025-05-07T21:15:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh8u9k/what_is_the_current_sota_llm_trained_from_scratch/
|
sebastianmicu24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh8u9k
| false | null |
t3_1kh8u9k
|
/r/LocalLLaMA/comments/1kh8u9k/what_is_the_current_sota_llm_trained_from_scratch/
| false | false |
self
| 1 | null |
What is the current SOTA LLM trained from scratch only on consumer hardware, and what size does it have?
| 1 |
[removed]
| 2025-05-07T21:17:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh8wal/what_is_the_current_sota_llm_trained_from_scratch/
|
sebastianmicu24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh8wal
| false | null |
t3_1kh8wal
|
/r/LocalLLaMA/comments/1kh8wal/what_is_the_current_sota_llm_trained_from_scratch/
| false | false |
self
| 1 | null |
Claude instructions
| 1 |
[removed]
| 2025-05-07T21:21:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh8z6u/claude_instructions/
|
mukhayy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh8z6u
| false | null |
t3_1kh8z6u
|
/r/LocalLLaMA/comments/1kh8z6u/claude_instructions/
| false | false |
self
| 1 | null |
OpenCodeReasoning - new Nemotrons by NVIDIA
| 113 |
[https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-7B)
[https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-14B](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-14B)
[https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B)
[https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B-IOI](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B-IOI)
| 2025-05-07T21:22:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kh9018/opencodereasoning_new_nemotrons_by_nvidia/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh9018
| false | null |
t3_1kh9018
|
/r/LocalLLaMA/comments/1kh9018/opencodereasoning_new_nemotrons_by_nvidia/
| false | false |
self
| 113 |
{'enabled': False, 'images': [{'id': 't3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=108&crop=smart&auto=webp&s=9bb4fcc2dd9bc805edb892ed8aa35a26e875dd41', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=216&crop=smart&auto=webp&s=364eae5ba236df68dcdcd1f1490823108604c571', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=320&crop=smart&auto=webp&s=43259e63a7faa67254ff4409fba8f71e722d5027', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=640&crop=smart&auto=webp&s=8313a29fa8c9d8265c20b20a8a74f00c091a3df4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=960&crop=smart&auto=webp&s=adcc01ab1247dc137a5dc7cbf9665be433fb643e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?width=1080&crop=smart&auto=webp&s=3fa187bf1b029459ce1c7ef72f29146b176f3385', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/t3wLah19cN2xM2clJ19aY4ht6O_Ub2nxBJqXaR__Yko.png?auto=webp&s=9a894fec1e25c785fc2bf321c87e25c36b72bcc6', 'width': 1200}, 'variants': {}}]}
|
Collection of LLM System Prompts
| 27 | 2025-05-07T21:34:34 |
https://github.com/guy915/LLM-System-Prompts
|
Haunting-Stretch8069
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh9ape
| false | null |
t3_1kh9ape
|
/r/LocalLLaMA/comments/1kh9ape/collection_of_llm_system_prompts/
| false | false |
default
| 27 |
{'enabled': False, 'images': [{'id': 'mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=108&crop=smart&auto=webp&s=8f82d3dcf2c2c8ffd2aceab50aec636e2b23b033', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=216&crop=smart&auto=webp&s=a694f3ed1324d930fdea86236c5356d9d1de958e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=320&crop=smart&auto=webp&s=535058a7b49b2f11009d40d7a93bca8b74950e05', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=640&crop=smart&auto=webp&s=06835e2d945227966f4734fc5037151786b32e51', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=960&crop=smart&auto=webp&s=444505ce6721c3e2970023e4c3752e6fb53f0745', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?width=1080&crop=smart&auto=webp&s=f181a21ba2821bb78827c8c5f546cc653ee53f10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mXIp6Z--Pl_AS_wcxpDFLh4ENIm93uSILhbV_jZ63CM.png?auto=webp&s=8babab5e259e3309149b263af3233f5920c733ba', 'width': 1200}, 'variants': {}}]}
|
|
No local, no care.
| 527 | 2025-05-07T21:53:52 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kh9qlx
| false | null |
t3_1kh9qlx
|
/r/LocalLLaMA/comments/1kh9qlx/no_local_no_care/
| false | false | 527 |
{'enabled': True, 'images': [{'id': 'H06-PMBOJ8CbSXbg5L2BFSgQDfnn84px0AekxHE6QZg', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?width=108&crop=smart&auto=webp&s=4ae5e5796b9c4b8a194bf3f64f9dbeb83c8884b3', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?width=216&crop=smart&auto=webp&s=6a90564009c2baca7cdcbe5b322050062e22d78f', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?width=320&crop=smart&auto=webp&s=44d7c19f4b8f49f355a8cdaaabefa405a8eae8b9', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?width=640&crop=smart&auto=webp&s=6ceb732f3829a0007aaaa683f507cd9116cadc51', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?width=960&crop=smart&auto=webp&s=9988826c22e1c7f51a900b2ef4e2cb0a4cce8b18', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/f0l4hjmklfze1.jpeg?auto=webp&s=2f1a68f88bb3394e8330489e0d1aa07abd082ec9', 'width': 1024}, 'variants': {}}]}
|
|||
Any suggestion on LLM servers for very high load? (+200 every 5 seconds)
| 1 |
[removed]
| 2025-05-07T22:05:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kha09c/any_suggestion_on_llm_servers_for_very_high_load/
|
Ok_Material_1700
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kha09c
| false | null |
t3_1kha09c
|
/r/LocalLLaMA/comments/1kha09c/any_suggestion_on_llm_servers_for_very_high_load/
| false | false |
self
| 1 | null |
LLMs Suck at Long Context (Maybe except Gemini)! OpenAI-MRCR Benchmark. Results for 8 needles.
| 1 |
[removed]
| 2025-05-07T22:55:05 |
lordpermaximum
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khb45g
| false | null |
t3_1khb45g
|
/r/LocalLLaMA/comments/1khb45g/llms_suck_at_long_context_maybe_except_gemini/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'aj9b20pfwfze1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=108&crop=smart&auto=webp&s=b1feb8530e88b3c6439dce907c097b7116481a72', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=216&crop=smart&auto=webp&s=f6e99b1a9f61b1f40fdf546760aedfeea60a345a', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=320&crop=smart&auto=webp&s=9fe8a7dda2924d685fdd6a404a556919f106785d', 'width': 320}, {'height': 201, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=640&crop=smart&auto=webp&s=f93c9e46d87a5f9b28536d7e8033b68ab55c5875', 'width': 640}, {'height': 302, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=960&crop=smart&auto=webp&s=a690060affad273f094b268b149aa7a2184b1b0c', 'width': 960}, {'height': 340, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?width=1080&crop=smart&auto=webp&s=814eb254dfe92beb18759781a97e630db25714da', 'width': 1080}], 'source': {'height': 495, 'url': 'https://preview.redd.it/aj9b20pfwfze1.png?auto=webp&s=fb15b1070797c26f82cda7afea9329717a11b162', 'width': 1570}, 'variants': {}}]}
|
|
LLMs suck at long context (maybe except Gemini). OpenAI-MRCR Benchmark Results for 8 needles!
| 1 | 2025-05-07T22:56:21 |
lordpermaximum
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khb57j
| false | null |
t3_1khb57j
|
/r/LocalLLaMA/comments/1khb57j/llms_suck_at_long_context_maybe_except_gemini/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': '6popmkglwfze1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=108&crop=smart&auto=webp&s=e0b9fdf1933f629ebf00026bad7430e4e2fbd0c2', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=216&crop=smart&auto=webp&s=1fa9e3ceb07810c063dc7a0ddf3a10fdddcd2c4c', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=320&crop=smart&auto=webp&s=b7042dea9684031bdd5f88eca0cd25295da75cf0', 'width': 320}, {'height': 201, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=640&crop=smart&auto=webp&s=ffe8e0c2c4c286a30720454e91c46d91f6283323', 'width': 640}, {'height': 302, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=960&crop=smart&auto=webp&s=a2171653a6226d6dc144ea646effb14bf802d667', 'width': 960}, {'height': 340, 'url': 'https://preview.redd.it/6popmkglwfze1.png?width=1080&crop=smart&auto=webp&s=1601caf64fd27612128ca96282bd145ed79dd4e6', 'width': 1080}], 'source': {'height': 495, 'url': 'https://preview.redd.it/6popmkglwfze1.png?auto=webp&s=f2026b9aea4000cdeafe8ef53641d15a15773271', 'width': 1570}, 'variants': {}}]}
|
||
The new MLX DWQ quant is underrated, it feels like 8bit in a 4bit quant.
| 66 |
I noticed it was added to MLX a few days ago and started using it since then. It's very impressive, like running an 8bit model in a 4bit quantization size without much performance loss, and I suspect it might even finally make the 3bit quantization usable.
[https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ](https://huggingface.co/mlx-community/Qwen3-30B-A3B-4bit-DWQ)
| 2025-05-07T22:59:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1khb7rs/the_new_mlx_dwq_quant_is_underrated_it_feels_like/
|
mzbacd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khb7rs
| false | null |
t3_1khb7rs
|
/r/LocalLLaMA/comments/1khb7rs/the_new_mlx_dwq_quant_is_underrated_it_feels_like/
| false | false |
self
| 66 |
{'enabled': False, 'images': [{'id': 'T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=108&crop=smart&auto=webp&s=4c3a27177b1e75d13501b7a2eea08cb90c5877e7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=216&crop=smart&auto=webp&s=6ba038e54fdbfb92b34f129f26b37873599f4661', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=320&crop=smart&auto=webp&s=e39b7f61e7f01d67d1b3190c1d418e5e6934968b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=640&crop=smart&auto=webp&s=a0ff1b0bddbe0e81dee900d79d435459c5897596', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=960&crop=smart&auto=webp&s=ec8496ae296926043e2e5d98af74d1ff446d7a22', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?width=1080&crop=smart&auto=webp&s=c7c22dc73cf9453084f8698b0cba11d040958864', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/T-mAML-sjn8qyDSC6vCztVVkQH52ox4_tv7ke4zP9ic.png?auto=webp&s=764c3a31ffdacff9cb940380d5e0e5d080240ade', 'width': 1200}, 'variants': {}}]}
|
Making people look older + video
| 1 |
[removed]
| 2025-05-07T23:17:18 |
https://www.youtube.com/shorts/lN6kRc6WkCU
|
AndyAnalyzes
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbl7p
| false |
{'oembed': {'author_name': 'Soy Andrew Ramirez', 'author_url': 'https://www.youtube.com/@SoyAndrewRamirez', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/lN6kRc6WkCU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="¿Como lucirían si estuvieran con nosotros? #viral #elvispresley #freddiemercury #whitneyhouston"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/lN6kRc6WkCU/hq2.jpg', 'thumbnail_width': 480, 'title': '¿Como lucirían si estuvieran con nosotros? #viral #elvispresley #freddiemercury #whitneyhouston', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'}
|
t3_1khbl7p
|
/r/LocalLLaMA/comments/1khbl7p/making_people_look_older_video/
| false | false |
default
| 1 | null |
Will a 3x RTX 3090 Setup a Good Bet for AI Workloads and Training Beyond 2028?
| 1 |
[removed]
| 2025-05-07T23:18:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1khbls9/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
|
Spare_Flounder_6865
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbls9
| false | null |
t3_1khbls9
|
/r/LocalLLaMA/comments/1khbls9/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
| false | false |
self
| 1 | null |
Dual AMD Mi50 Inference and Performance
| 1 |
[removed]
| 2025-05-07T23:18:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1khbm3e/dual_amd_mi50_inference_and_performance/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbm3e
| false | null |
t3_1khbm3e
|
/r/LocalLLaMA/comments/1khbm3e/dual_amd_mi50_inference_and_performance/
| false | false |
self
| 1 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-07T23:33:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1khbx4m/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbx4m
| false | null |
t3_1khbx4m
|
/r/LocalLLaMA/comments/1khbx4m/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=108&crop=smart&auto=webp&s=1a4bef0788cf677e51e7e9eaf4bbcdcc09552954', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=216&crop=smart&auto=webp&s=eafe25f3a84b306665ed0d86e4d26d80a37464e6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=320&crop=smart&auto=webp&s=502c33bcbfbe88f7a906a4c8fb6fb7fbf8a6cc12', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?auto=webp&s=afec26e03c5bab5bd6f74c40b5446f4f337d4ff4', 'width': 512}, 'variants': {}}]}
|
QwQ Appreciation Thread
| 63 | 2025-05-07T23:33:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1khbxg4/qwq_appreciation_thread/
|
OmarBessa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbxg4
| false | null |
t3_1khbxg4
|
/r/LocalLLaMA/comments/1khbxg4/qwq_appreciation_thread/
| false | false | 63 |
{'enabled': False, 'images': [{'id': 'iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=108&crop=smart&auto=webp&s=7a3cd9808bdc8f66a394bc9e6c6abaedaa3724bc', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=216&crop=smart&auto=webp&s=a0bc4dcb7de136e1c8a8bfb5109a567eed49bce0', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=320&crop=smart&auto=webp&s=4d55cf7f9471c4af6a5ee533e41395944a7def78', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=640&crop=smart&auto=webp&s=c1c8377a20d6ad8107b227ddbef333fbae642705', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=960&crop=smart&auto=webp&s=d40b71f6cfd707d3c888f3400f3491b92e72dac0', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?width=1080&crop=smart&auto=webp&s=e8a3f8172cbbb5b3216ccac5f54d134a1fa3eadb', 'width': 1080}], 'source': {'height': 868, 'url': 'https://external-preview.redd.it/iUbtHN7RzxrcJ1LnOytJyYZIsd6RNnT0J4eou-hgYFg.png?auto=webp&s=e34ca0eb259292c22e41dda464676724db5f3c19', 'width': 1257}, 'variants': {}}]}
|
||
Intel to announce new Intel Arc Pro GPUs at Computex 2025 (May 20-23)
| 184 |
Maybe the 24 GB Arc B580 model that got leaked will be announced?
| 2025-05-07T23:35:56 |
https://x.com/intel/status/1920241029804064796
|
eding42
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khbz70
| false | null |
t3_1khbz70
|
/r/LocalLLaMA/comments/1khbz70/intel_to_announce_new_intel_arc_pro_gpus_at/
| false | false |
default
| 184 |
{'enabled': False, 'images': [{'id': 'Ov3HyRYL22ThGfqR_RssVUyI-24spNEEXqWX3Ayhmg8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=108&crop=smart&auto=webp&s=bdafeb12e42f7afb79e3365624d1f955c8e44181', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=216&crop=smart&auto=webp&s=513f02c9d07ac4be229db46bb42d69c466a6431e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=320&crop=smart&auto=webp&s=9d3387e2bc9198ef05082beea0b7f5c1c7ff5f90', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=640&crop=smart&auto=webp&s=4675a9fdccab6a8f9da5be381fa2c1bd1fe534bf', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=960&crop=smart&auto=webp&s=0b8982229e2f213b8b5a604a3851a56461c794bc', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?width=1080&crop=smart&auto=webp&s=cca5b9099c0ac2113cf1a1b00f2d2263056c096a', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/qU1K5b6t8nYIjVW3n7kpmgcB3rS2YYANfyJcs8RztyA.jpg?auto=webp&s=8dfc5cb8b108e1cc7ca83eb9592bce0606989ce4', 'width': 2048}, 'variants': {}}]}
|
Will a 3x RTX 3090 Setup a Good Bet for AI Workloads and Training Beyond 2028?
| 1 |
[removed]
| 2025-05-07T23:49:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1khc95y/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
|
Spare_Flounder_6865
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khc95y
| false | null |
t3_1khc95y
|
/r/LocalLLaMA/comments/1khc95y/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
| false | false |
self
| 1 | null |
help with wich model i choose
| 1 |
[removed]
| 2025-05-08T00:25:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1khczjb/help_with_wich_model_i_choose/
|
almarssad
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khczjb
| false | null |
t3_1khczjb
|
/r/LocalLLaMA/comments/1khczjb/help_with_wich_model_i_choose/
| false | false |
self
| 1 | null |
Will a 3x RTX 3090 Setup a Good Bet for AI Workloads and Training Beyond 2028?
| 1 |
[removed]
| 2025-05-08T01:11:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1khdwpg/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
|
Spare_Flounder_6865
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khdwpg
| false | null |
t3_1khdwpg
|
/r/LocalLLaMA/comments/1khdwpg/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
| false | false |
self
| 1 | null |
I'm building an AI Factory program - should I release it?
| 1 |
[removed]
| 2025-05-08T01:47:51 |
https://www.reddit.com/gallery/1khemaf
|
Historical-Singer771
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khemaf
| false | null |
t3_1khemaf
|
/r/LocalLLaMA/comments/1khemaf/im_building_an_ai_factory_program_should_i/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=108&crop=smart&auto=webp&s=426e72e221fb2d170133a264cb60236f15a3dfa6', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=216&crop=smart&auto=webp&s=5fa019bcdfd8713c5b587edd83c154ba036278a2', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=320&crop=smart&auto=webp&s=f06519d6f8ce2042a7710256189705cfd254964a', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=640&crop=smart&auto=webp&s=6b922b318432e7a1a16d297c22d4e4e052ad1dcc', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=960&crop=smart&auto=webp&s=7473bf9b90bb3539efb3acaefbeecfe9473d3b64', 'width': 960}, {'height': 579, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?width=1080&crop=smart&auto=webp&s=ccea7085f8c6372a98fc4ab0d42094c985f536b0', 'width': 1080}], 'source': {'height': 1030, 'url': 'https://external-preview.redd.it/eV_SWjQ49JhqLwWQZzX9IinEXgM_aklezQFs_UZYvyA.png?auto=webp&s=69dd9e1b994673eb045c488f8c0b9403b20b9269', 'width': 1920}, 'variants': {}}]}
|
|
Easiest way to test computer use?
| 4 |
I wanted to quickly test if AI could do a small computer use task but there's no real way to do this quickly?
* Claude Computer Use is specifically designed to be used in Docker in virtualised envs. I just want to test something on my local mac
* OpenAI's Operator is expensive so it's not viable
* I tried setting up an endpoint for UI-TARS in HuggingFace and using it inside the UI-TARS app but kept getting an "Error: 404 status code (no body)"
Is there no app or repo that will easily let you try computer use?
| 2025-05-08T02:07:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1khf0at/easiest_way_to_test_computer_use/
|
lostlifon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khf0at
| false | null |
t3_1khf0at
|
/r/LocalLLaMA/comments/1khf0at/easiest_way_to_test_computer_use/
| false | false |
self
| 4 | null |
Final verdict on LLM generated confidence scores?
| 15 |
I remember earlier hearing the confidence scores associated with a prediction from an LLM (e.g. classify XYZ text into A,B,C categories and provide a confidence score from 0-1) are gibberish and not really useful.
I see them used widely though and have since seen some mixed opinions on the idea.
While the scores are not useful in the same way a propensity is (after all it’s just tokens), they are still indicative of some sort of confidence.
I’ve also seen that using qualitative confidence e.g. Level of confidence: low, medium, high, is better than using numbers.
Just wondering what’s the latest school of thought on this and whether in practice you are using confidence scores in this way, and your observations about them?
| 2025-05-08T02:32:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1khfhoh/final_verdict_on_llm_generated_confidence_scores/
|
sg6128
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khfhoh
| false | null |
t3_1khfhoh
|
/r/LocalLLaMA/comments/1khfhoh/final_verdict_on_llm_generated_confidence_scores/
| false | false |
self
| 15 | null |
HF Model Feedback
| 8 |
Hi everyone,
I've recently upgraded to **HF Enterprise** to access more detailed analytics for my models. While this gave me some valuable insights, it also highlighted a significant **gap** in the way model feedback works on the platform.
Particularly, the lack of **direct communication** between model providers and users.
After uploading models to the HuggingFace hub, providers are **disintermediated from the users**. You lose visibility into how your models are being used and whether they’re performing as expected in real-world environments. We can see download counts, but these numbers don’t tell us if the model is facing any issues we can try to fix in the next update.
I just discovered this firsthand after noticing **spikes in downloads** for one of my older models. After digging into the data, I learned that these spikes correlated with some recent posts in r/LocalLlama, but there was no way for me to know in real-time that these conversations were driving traffic to my model. The system also doesn’t alert me when models start gaining traction or receiving high engagement.
**So how can creators get more visibility and actionable feedback? How can we understand the real-world performance of our models if we don’t have direct user insights?**
The Missing Piece: User-Contributed Feedback
What if we could address this issue by encouraging **users** to directly contribute **feedback** on models? I believe there’s a significant opportunity to improve the **open-source AI ecosystem** by creating a **feedback loop** where:
* **Users could share feedback** on how the model is performing for their specific use case.
* **Bug reports, performance issues, or improvement suggestions** could be logged directly on the model’s page, visible to both the creator and other users.
* **Ratings, comments, and usage examples** could be integrated to help future users understand the model's strengths and limitations.
These kinds of contributions would create a **feedback-driven ecosystem**, ensuring that model creators can get a better understanding of what’s working, what’s not, and where the model can be improved.
| 2025-05-08T02:35:52 |
remyxai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khfjrs
| false | null |
t3_1khfjrs
|
/r/LocalLLaMA/comments/1khfjrs/hf_model_feedback/
| false | false | 8 |
{'enabled': True, 'images': [{'id': 'BzHah-aRCeGkLwTa4ZOcbWyvzaJOxKpbeXktVOxRDxo', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/0qws1exawgze1.png?width=108&crop=smart&auto=webp&s=a866dd7815e58fc5196fddceec92be24cc23cc8d', 'width': 108}, {'height': 35, 'url': 'https://preview.redd.it/0qws1exawgze1.png?width=216&crop=smart&auto=webp&s=4a01b2cf98d43b7556673aff3c6266f4b4d752fd', 'width': 216}, {'height': 53, 'url': 'https://preview.redd.it/0qws1exawgze1.png?width=320&crop=smart&auto=webp&s=1d24b96763ba2af07c8246ca246590fc669d9589', 'width': 320}], 'source': {'height': 101, 'url': 'https://preview.redd.it/0qws1exawgze1.png?auto=webp&s=c27b4a0cc9a84f5ef19829f2691de33d1bcfae7a', 'width': 608}, 'variants': {}}]}
|
||
What is wrong with you people?
| 1 |
[removed]
| 2025-05-08T02:58:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1khfzcf/what_is_wrong_with_you_people/
|
hopepatrol
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khfzcf
| false | null |
t3_1khfzcf
|
/r/LocalLLaMA/comments/1khfzcf/what_is_wrong_with_you_people/
| false | false |
self
| 1 | null |