title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
stringlengths 1-300 | int64 0-8.54k | stringlengths 0-40k | timestamp[ns] 2023-04-01 04:30:41 to 2025-06-30 03:16:29 ⌀ | stringlengths 0-878 | stringlengths 3-20 | stringlengths 0-82 | timestamp[ns] 1970-01-01 00:00:00 to 2025-06-26 17:30:18 | int64 0-2 | stringclasses 7 values | stringlengths 7 | bool 2 classes | stringlengths 646-1.8k ⌀ | stringlengths 10 | stringlengths 33-82 | bool 2 classes | bool 2 classes | stringlengths 4-213 | int64 0-8.54k | stringlengths 301-5.01k ⌀
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
the budget rig goes bigger, 5060tis bought! test results incoming tonight
| 31 |
Well, after my experiments with mining GPUs I was planning to build out my rig with some Chinese modded 3080 Ti mobile cards with 16GB, which came in at around £330 and at the time seemed a bargain. But then today I noticed the 5060 Ti dropped at only £400 for 16GB! I was fully expecting to see them at £500 a card. Luckily I'm very close to a major computer retailer, so I'm heading to collect a pair of them this afternoon!
Come back to this thread later for some info on how these things perform with LLMs. They could/should be an absolute bargain for local rigs.
| 2025-04-16T13:53:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0kzgn/the_budget_rig_goes_bigger_5060tis_bought_test/
|
gaspoweredcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0kzgn
| false | null |
t3_1k0kzgn
|
/r/LocalLLaMA/comments/1k0kzgn/the_budget_rig_goes_bigger_5060tis_bought_test/
| false | false |
self
| 31 | null |
Livestream in o3 hours
| 0 |
*Processing img y5is8lz7e7ve1...*
| 2025-04-16T14:01:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0l5ug/livestream_in_o3_hours/
|
bymechul
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0l5ug
| false | null |
t3_1k0l5ug
|
/r/LocalLLaMA/comments/1k0l5ug/livestream_in_o3_hours/
| false | false | 0 | null |
|
OpenAI wants to build an AI social media - we already did: it's a learning app that teaches you any topic via a feed
| 0 | 2025-04-16T14:01:45 |
I_am_unique6435
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0l644
| false | null |
t3_1k0l644
|
/r/LocalLLaMA/comments/1k0l644/open_ai_wants_to_build_anai_social_media_we/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '778Wrc2tIw5N9LNwFiOO2PiQuF3omDDkma79XTydw8Y', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=108&crop=smart&auto=webp&s=6ebb761a50968cd8dacfce8f25574b4e8fcfeda3', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=216&crop=smart&auto=webp&s=100b904aadc430c6bf63db47d7c768305ea00215', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=320&crop=smart&auto=webp&s=dfe6e99a967ae952074a2f70bebea559ea8ea4bb', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=640&crop=smart&auto=webp&s=1ef2eab043714ab77e6f095925f936f618996ef4', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=960&crop=smart&auto=webp&s=3ee70704c59a10c9c3e12b2786bc660836b976d8', 'width': 960}, {'height': 567, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?width=1080&crop=smart&auto=webp&s=d2466c4ef5791b3b5f0180812e2c813568661a32', 'width': 1080}], 'source': {'height': 1890, 'url': 'https://preview.redd.it/hyhev897e7ve1.jpeg?auto=webp&s=7e04bb432cd99b5f4d850ae9bbba134607136217', 'width': 3600}, 'variants': {}}]}
|
|||
How does character.ai achieve the consistency in narration? How can I replicate it locally?
| 10 |
I only recently found out about [character.ai](http://character.ai), and playing around with it, it seems OK, not the best. Certainly room for improvement, but still. Considering the limited context, no embedding storage, and no memories, the model does decently well at following the system instructions.
It seems obvious that they are using just one model and layering a different system prompt with different hyperparameters on top, but I've never gotten to this level of consistency in narration locally. My question is: how did they do it? I refuse to believe that out of the millions of slop characters there, each one was meticulously crafted to work. It makes more sense if they have some base template and swap in whatever the creator wrote.
Maybe I'm doing something wrong, but I could never get a system prompt to consistently follow through on the style, separate cleanly the things "said" vs *thought* (or whatever the stars are for), or just stay in its role and play one character without also trying to play the other one. What's the secret sauce? I feel like getting quality to go up is a fairly simple task after that. A sketch of the templating idea is below.
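A minimal sketch of the "base template + character card" approach the post is guessing at; everything here (field names, template wording) is hypothetical, not character.ai's actual prompt:

```python
# Hypothetical base template that every creator-supplied character card gets
# slotted into. Field names and rules are illustrative only.
BASE_TEMPLATE = """You are {name}. Stay in character at all times.
Personality: {personality}
Scenario: {scenario}
Formatting rules:
- Dialogue goes in plain text inside double quotes.
- Actions and inner thoughts go between asterisks, e.g. *she hesitates*.
- Never write dialogue or actions for {user}; only play {name}.
"""

def build_system_prompt(card: dict, user: str = "User") -> str:
    """Fill the shared template with one character card."""
    return BASE_TEMPLATE.format(
        name=card["name"],
        personality=card["personality"],
        scenario=card["scenario"],
        user=user,
    )

if __name__ == "__main__":
    card = {
        "name": "Mira",
        "personality": "dry-witted archivist, speaks tersely",
        "scenario": "a rainy night in the library stacks",
    }
    print(build_system_prompt(card))
```

With a fixed template like this, the formatting rules stay constant across millions of characters, which would explain the consistency without per-character prompt engineering.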
| 2025-04-16T14:03:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0l7k5/how_does_characterai_achieve_the_consistency_in/
|
Tripel_Meow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0l7k5
| false | null |
t3_1k0l7k5
|
/r/LocalLLaMA/comments/1k0l7k5/how_does_characterai_achieve_the_consistency_in/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=108&crop=smart&auto=webp&s=23183dce45b8759af44dc45578bcd60d1883477a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=216&crop=smart&auto=webp&s=52091792582b6a74d0a7f4cce12d173a32a79716', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=320&crop=smart&auto=webp&s=5b0a456015d02e783fc787f594e54fe0e969ea15', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=640&crop=smart&auto=webp&s=61fb8046c762f14e0e07ea500d1ad85ab8481ee2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=960&crop=smart&auto=webp&s=831e1b06425cd4ca7928aaf4f90c1adacf6854d6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=1080&crop=smart&auto=webp&s=d6c0ba0fc918c425682b1427ac6210ee38973a76', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?auto=webp&s=91bd92d61d32d6d820ca8c34b2eaea08283a75d5', 'width': 1200}, 'variants': {}}]}
|
Best option for Q&A chatbot trained with internal company data
| 1 |
[removed]
| 2025-04-16T14:09:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0lc85/best_option_for_qa_chatbot_trained_with_internal/
|
Filmboycr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0lc85
| false | null |
t3_1k0lc85
|
/r/LocalLLaMA/comments/1k0lc85/best_option_for_qa_chatbot_trained_with_internal/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'oPpz0skggl6FucgJq68Aig_yMAhLIgPewsYwHhGrNp8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UallS6eTqMQ6Q_VovzOpEJke6ZgDb6GzqRJtFsGug9s.jpg?width=108&crop=smart&auto=webp&s=9dbd384757d51b116ca502abd7e3e7cad21b53ca', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/UallS6eTqMQ6Q_VovzOpEJke6ZgDb6GzqRJtFsGug9s.jpg?width=216&crop=smart&auto=webp&s=a30a774670c6db136f26de6f589e265546bcc534', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/UallS6eTqMQ6Q_VovzOpEJke6ZgDb6GzqRJtFsGug9s.jpg?width=320&crop=smart&auto=webp&s=cd69aed0fed1272a9c5c9d2807126b18ad5f12c5', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/UallS6eTqMQ6Q_VovzOpEJke6ZgDb6GzqRJtFsGug9s.jpg?width=640&crop=smart&auto=webp&s=82f0fba791dd0f75899e3dc5480f915fe8c0cafd', 'width': 640}], 'source': {'height': 418, 'url': 'https://external-preview.redd.it/UallS6eTqMQ6Q_VovzOpEJke6ZgDb6GzqRJtFsGug9s.jpg?auto=webp&s=814679e38c2cc3dc7e98d0d809a34e91af894a5e', 'width': 800}, 'variants': {}}]}
|
What model should I use? Hermes, Nous-Hermes or Wizard 1.2V
| 1 |
[removed]
| 2025-04-16T14:17:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0liz8/what_model_should_i_use_hermes_noushermes_or/
|
Gullible_Pipe_2177
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0liz8
| false | null |
t3_1k0liz8
|
/r/LocalLLaMA/comments/1k0liz8/what_model_should_i_use_hermes_noushermes_or/
| false | false |
self
| 1 | null |
IBM Granite 3.3 Models
| 2 |
* [3.3 Speech Model](https://huggingface.co/ibm-granite/granite-speech-3.3-8b)
* [Announcement post](https://www.ibm.com/new/announcements/ibm-granite-3-3-speech-recognition-refined-reasoning-rag-loras)
| 2025-04-16T14:37:17 |
https://huggingface.co/collections/ibm-granite/granite-33-language-models-67f65d0cca24bcbd1d3a08e3
|
suitable_cowboy
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0lziq
| false | null |
t3_1k0lziq
|
/r/LocalLLaMA/comments/1k0lziq/ibm_granite_33_models/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'nb2kdexmleHkD1q9guwjeIYgyzqmsygobtDJn8wAlrs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=108&crop=smart&auto=webp&s=42b7cfa3ddf9ef91b5fa06645fbb49646a62ad91', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=216&crop=smart&auto=webp&s=9ece2354a14b43e802cf476b4e4a49ff87c10f09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=320&crop=smart&auto=webp&s=70e36b514fd1bf3a3399fa4f3c048b95d9164505', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=640&crop=smart&auto=webp&s=cea64ef12850fb58da12ba852867d09166207a09', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=960&crop=smart&auto=webp&s=4ed4fba75288567dd9ee43b3148b76b901e7aae1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=1080&crop=smart&auto=webp&s=8a53e04dc1fea68f84b25e28be5664fca4945091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?auto=webp&s=05064fb13fe3c925bbf8298575d1b03ef407afe1', 'width': 1200}, 'variants': {}}]}
|
|
Pitch your favorite inference engine for low resource devices
| 1 |
[removed]
| 2025-04-16T14:41:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0m357/pitch_your_favorite_inference_engine_for_low/
|
batuhanaktass
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0m357
| false | null |
t3_1k0m357
|
/r/LocalLLaMA/comments/1k0m357/pitch_your_favorite_inference_engine_for_low/
| false | false |
self
| 1 | null |
Prompt needed.
| 1 |
[removed]
| 2025-04-16T14:47:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0m8a2/prompt_needed/
|
Ok-Consequence2625
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0m8a2
| false | null |
t3_1k0m8a2
|
/r/LocalLLaMA/comments/1k0m8a2/prompt_needed/
| false | false |
self
| 1 | null |
LLM translator for Fanfics or Novels with already names for anime, manga, cartoon and novel characters in it
| 1 |
[removed]
| 2025-04-16T14:54:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0mepv/llm_translator_for_fanfics_or_novels_with_already/
|
PedroHBN
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0mepv
| false | null |
t3_1k0mepv
|
/r/LocalLLaMA/comments/1k0mepv/llm_translator_for_fanfics_or_novels_with_already/
| false | false |
self
| 1 | null |
IBM Granite 3.3 Models
| 419 |
- [Announcement Post](https://www.ibm.com/new/announcements/ibm-granite-3-3-speech-recognition-refined-reasoning-rag-loras)
- [3.3 Speech Model](https://huggingface.co/ibm-granite/granite-speech-3.3-8b)
| 2025-04-16T14:54:48 |
https://huggingface.co/collections/ibm-granite/granite-33-language-models-67f65d0cca24bcbd1d3a08e3
|
suitable_cowboy
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0mesv
| false | null |
t3_1k0mesv
|
/r/LocalLLaMA/comments/1k0mesv/ibm_granite_33_models/
| false | false | 419 |
{'enabled': False, 'images': [{'id': 'nb2kdexmleHkD1q9guwjeIYgyzqmsygobtDJn8wAlrs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=108&crop=smart&auto=webp&s=42b7cfa3ddf9ef91b5fa06645fbb49646a62ad91', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=216&crop=smart&auto=webp&s=9ece2354a14b43e802cf476b4e4a49ff87c10f09', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=320&crop=smart&auto=webp&s=70e36b514fd1bf3a3399fa4f3c048b95d9164505', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=640&crop=smart&auto=webp&s=cea64ef12850fb58da12ba852867d09166207a09', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=960&crop=smart&auto=webp&s=4ed4fba75288567dd9ee43b3148b76b901e7aae1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?width=1080&crop=smart&auto=webp&s=8a53e04dc1fea68f84b25e28be5664fca4945091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Di-LJPiKH5-hlOr8JOzFQOIzNY3wtbEXkzZj38FaUy4.jpg?auto=webp&s=05064fb13fe3c925bbf8298575d1b03ef407afe1', 'width': 1200}, 'variants': {}}]}
|
|
did I get Google's A2A protocol right?
| 3 |
Hey folks,
I've been reading some docs about Google's A2A protocol. From what I understand, MCP (Model Context Protocol) gives your LLMs access to tools and external resources.
But I'm thinking of A2A more like a "delegation" method between agents that can "talk" to each other to find out about each other's capabilities and coordinate tasks accordingly.
I've seen some discussion around the security of these protocols; I'm very curious to learn what makes them vulnerable from a cybersecurity perspective.
**What are your thoughts on A2A?**
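For illustration, the "discover capabilities, then delegate" flow described above could look roughly like this; the endpoint path, skill names, and task fields below are placeholders for illustration, not the actual A2A wire format:

```python
# Illustrative sketch of agent-to-agent delegation. Check the real A2A spec
# for the actual agent-card location and task schema; this only shows the idea.
import requests

REMOTE_AGENT = "https://agent.example.com"  # hypothetical remote agent

def discover(base_url: str) -> dict:
    """Fetch the remote agent's self-described capabilities (its 'agent card')."""
    return requests.get(f"{base_url}/.well-known/agent.json", timeout=10).json()

def delegate(base_url: str, task: str) -> dict:
    """Hand a task off to the remote agent and return its response."""
    payload = {"task": {"description": task}}
    return requests.post(f"{base_url}/tasks", json=payload, timeout=60).json()

if __name__ == "__main__":
    card = discover(REMOTE_AGENT)
    if "summarization" in card.get("skills", []):
        print(delegate(REMOTE_AGENT, "Summarize this week's incident reports"))
```

Security-wise, both steps are ordinary HTTP calls, so the usual questions apply: who can publish an agent card, how capabilities are authenticated, and whether a delegated task can smuggle instructions back to the caller.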
| 2025-04-16T14:57:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0mhhh/did_i_get_googles_a2a_protocol_right/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0mhhh
| false | null |
t3_1k0mhhh
|
/r/LocalLLaMA/comments/1k0mhhh/did_i_get_googles_a2a_protocol_right/
| false | false |
self
| 3 | null |
Setting Power Limit on RTX 3090 – LLM Test
| 12 | 2025-04-16T15:09:42 |
https://youtu.be/4KzetHrFHAE
|
1BlueSpork
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0mrrt
| false |
{'oembed': {'author_name': 'BlueSpork', 'author_url': 'https://www.youtube.com/@BlueSpork', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4KzetHrFHAE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Setting Power Limit on RTX 3090 – LLM Test"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4KzetHrFHAE/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Setting Power Limit on RTX 3090 – LLM Test', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1k0mrrt
|
/r/LocalLLaMA/comments/1k0mrrt/setting_power_limit_on_rtx_3090_llm_test/
| false | false | 12 |
{'enabled': False, 'images': [{'id': 'hXt6-q565hHyJOoPsLywB2078fWX0deFN8UzQFJxnPg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Y9Fmw3qDWhZLIrYjGmc0j5sWzi_xA95bLGoO96c8w0g.jpg?width=108&crop=smart&auto=webp&s=ab13cedab5f07d2a69c696989b0323c7b6c75a2c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Y9Fmw3qDWhZLIrYjGmc0j5sWzi_xA95bLGoO96c8w0g.jpg?width=216&crop=smart&auto=webp&s=a94f9dd7f89e860a64f5e93581b0a6d12265b84a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Y9Fmw3qDWhZLIrYjGmc0j5sWzi_xA95bLGoO96c8w0g.jpg?width=320&crop=smart&auto=webp&s=394bf8a00fb680ac3ffcf8e9f923c9c01b9df50e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Y9Fmw3qDWhZLIrYjGmc0j5sWzi_xA95bLGoO96c8w0g.jpg?auto=webp&s=c4ddaf9578c05696c966382db051569a7b417ca4', 'width': 480}, 'variants': {}}]}
|
||
Tools/models for describing engineering drawings in scanned documents
| 1 |
[removed]
| 2025-04-16T15:20:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0n0z5/toolsmodels_for_describing_engineering_drawings/
|
Equivalent-Royal-844
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0n0z5
| false | null |
t3_1k0n0z5
|
/r/LocalLLaMA/comments/1k0n0z5/toolsmodels_for_describing_engineering_drawings/
| false | false |
self
| 1 | null |
Stuck with Whisper in Medical Transcription Project — No API via OpenWebUI?
| 0 |
Hey everyone,
I’m working on a local **Medical Transcription project** that uses **Ollama** to manage models. Things were going great until I decided to offload some of the heavy lifting (like running Whisper and LLaMA) to another computer with better specs. I got access to that machine through **OpenWebUI**, and LLaMA is working fine remotely.
**BUT... Whisper has no API endpoint in OpenWebUI**, and that’s where I’m stuck. I need to access Whisper programmatically from my main app, and right now there's just no clean way to do that via OpenWebUI.
A few questions I’m chewing on:
* Is there a workaround to expose Whisper as a separate API on the remote machine?
* Should I just run Whisper outside OpenWebUI and leave LLaMA inside?
* Anyone tackled something similar with a setup like this?
Any advice, workarounds, or pointers would be super appreciated.
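One common workaround is to run Whisper as its own small HTTP service outside OpenWebUI and call it from the main app, leaving LLaMA where it is. A minimal sketch, assuming `fastapi`, `uvicorn`, and `faster-whisper` are installed on the remote machine; the model size, device, and port are placeholders:

```python
# whisper_api.py - minimal sketch of exposing Whisper as its own endpoint.
import tempfile

from fastapi import FastAPI, UploadFile
from faster_whisper import WhisperModel

app = FastAPI()
model = WhisperModel("large-v3", device="cuda", compute_type="float16")

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    # Persist the upload to a temp file so faster-whisper can read it.
    with tempfile.NamedTemporaryFile(suffix=file.filename, delete=False) as tmp:
        tmp.write(await file.read())
        path = tmp.name
    segments, info = model.transcribe(path)
    text = " ".join(seg.text.strip() for seg in segments)
    return {"language": info.language, "text": text}

# Run with: uvicorn whisper_api:app --host 0.0.0.0 --port 9000
```

The main app then just POSTs audio files to that port, while LLaMA stays behind OpenWebUI.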
| 2025-04-16T15:32:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0nbd5/stuck_with_whisper_in_medical_transcription/
|
IndependentFresh628
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0nbd5
| false | null |
t3_1k0nbd5
|
/r/LocalLLaMA/comments/1k0nbd5/stuck_with_whisper_in_medical_transcription/
| false | false |
self
| 0 | null |
It is almost May of 2025. What do you consider to be the best coding tools?
| 27 |
It is almost May of 2025. What do you consider to be the best coding tools?
I would like to get an organic assessment of the community’s choice of IDE and AI tools that successfully helps them in their programming projects.
I’m wondering how many people still use cursor, windsurf especially with the improvements of models vs cost progression over the past few months.
For the people that are into game development, what IDE helps your most for your game projects made in Unity/Godot etc.
Would love to hear everyone’s input.
As for me,
I’m currently find very consistent results in creating a vieriety of small programs with Python using cursor and Gemini 2.5. Before Gemini 2.5 came out, I was using 3.7 Claude, but was really debating with myself on if 3.7 was better than 3.5 as I was getting mixed results.
| 2025-04-16T15:58:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0nxlb/it_is_almost_may_of_2025_what_do_you_consider_to/
|
Material_Key7014
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0nxlb
| false | null |
t3_1k0nxlb
|
/r/LocalLLaMA/comments/1k0nxlb/it_is_almost_may_of_2025_what_do_you_consider_to/
| false | false |
self
| 27 | null |
5060ti 16GB for inference?
| 1 |
[removed]
| 2025-04-16T15:58:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0nxq0/5060ti_16gb_for_inference/
|
drazdra
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0nxq0
| false | null |
t3_1k0nxq0
|
/r/LocalLLaMA/comments/1k0nxq0/5060ti_16gb_for_inference/
| false | false |
self
| 1 | null |
KoboldCpp with Gemma 3 27b. Local vision has gotten pretty good I would say...
| 45 | 2025-04-16T16:16:39 |
Eisenstein
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0odhq
| false | null |
t3_1k0odhq
|
/r/LocalLLaMA/comments/1k0odhq/koboldcpp_with_gemma_3_27b_local_vision_has/
| false | false | 45 |
{'enabled': True, 'images': [{'id': 'aSeFBWxIGLxuwTvxEU2doe_ysDG3IMVWzvaJ1dmWdTA', 'resolutions': [{'height': 123, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?width=108&crop=smart&auto=webp&s=da79868f3aa46c295123cd6e17e0a6d1764cfc53', 'width': 108}, {'height': 246, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?width=216&crop=smart&auto=webp&s=84cf591fb22b1c21c63887ab862ff1480a8caa05', 'width': 216}, {'height': 365, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?width=320&crop=smart&auto=webp&s=1f37ae5f2bebaf114b5a7fa1cbd09d7fe86b2b62', 'width': 320}, {'height': 731, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?width=640&crop=smart&auto=webp&s=e7a4d66aac5a95d9f7e7eefde94e0ec3332c0946', 'width': 640}, {'height': 1097, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?width=960&crop=smart&auto=webp&s=e7602e3e1ad5d9d786f317ece3a99af85ad2cf25', 'width': 960}], 'source': {'height': 1176, 'url': 'https://external-preview.redd.it/E0QrtLGdAenlhx0dgrRxQhYXEHxRQVilnk0OkkkKL-M.png?auto=webp&s=81f4c3cde25e926686a20501529a4c64f4e2a190', 'width': 1029}, 'variants': {}}]}
|
|||
Best local visual llm for describing image?
| 5 |
Hello all, I am thinking of a fun project where I feed images into a visual llm that describes all contents as best as possible.
What would be the best local LLM for this? Or which leaderboard/benchmark should I look at?
I have paid a lot more attention to text LLMs than visual LLMs in the past, so I'm not sure where to start with the latest and best ones.
Thanks!
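As a starting point, a sketch of feeding an image to a locally served vision model through Ollama's generate endpoint; it assumes a vision-capable model (the `llava` tag here is just an example) is already pulled and the default host/port:

```python
# Sketch: ask a locally served vision model to describe an image via Ollama.
import base64
import requests

def describe_image(path: str, model: str = "llava") -> str:
    with open(path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": "Describe everything visible in this image in detail.",
            "images": [img_b64],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(describe_image("photo.jpg"))
```

The same request shape works for any vision model Ollama serves, so swapping candidates from a leaderboard is just a matter of changing the model tag.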
| 2025-04-16T16:18:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0oeuw/best_local_visual_llm_for_describing_image/
|
mindwip
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0oeuw
| false | null |
t3_1k0oeuw
|
/r/LocalLLaMA/comments/1k0oeuw/best_local_visual_llm_for_describing_image/
| false | false |
self
| 5 | null |
Auto-Approve MCP Requests in the Claude App
| 0 | 2025-04-16T16:22:59 |
https://aplaceofmind.notion.site/Auto-Approve-MCP-Requests-in-the-Claude-App-1d70a6eeb81d808287eaf76cec81456d
|
nderstand2grow
|
aplaceofmind.notion.site
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0oizz
| false | null |
t3_1k0oizz
|
/r/LocalLLaMA/comments/1k0oizz/autoapprove_mcp_requests_in_the_claude_app/
| false | false | 0 |
{'enabled': False, 'images': [{'id': '2bD23ZcoHovK-0kCdW66Vd_ovkrCSm3NfcMqbA0hO8A', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=108&crop=smart&auto=webp&s=5de774314629074ea435dc9969b72492d801670d', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=216&crop=smart&auto=webp&s=aa0e0c03ed2ce63b8b489eb51494e38d81948088', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=320&crop=smart&auto=webp&s=3c5ec8e7a10d585152edac5d4d24adbeaab565eb', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=640&crop=smart&auto=webp&s=4c15d5e4f8a48601745fbb30973b90a893fba876', 'width': 640}, {'height': 598, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=960&crop=smart&auto=webp&s=2e48feb848c855b6e9552e650806e963884a23b6', 'width': 960}, {'height': 673, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?width=1080&crop=smart&auto=webp&s=cb15b50e2406acd38265260bb6985818f5c9e3a5', 'width': 1080}], 'source': {'height': 1277, 'url': 'https://external-preview.redd.it/pE0zg542cvYv_YQXTlbuaG0qx6k85tU3W6OfTohAHyk.jpg?auto=webp&s=cb04c7fc2bfa3f4124a899c5692ff05a0f948fba', 'width': 2048}, 'variants': {}}]}
|
||
RTX 5090 now available on runpod.io
| 0 |
Just got this email:
RunPod is now offering RTX 5090s—**and they’re unreal**. We’re seeing 65K+ tokens/sec in real-world inference benchmarks. That’s **2.5–3x faster than the A100**, making it the best value-per-watt card for LLM inference out there. Why this matters: If you’re building an app, chatbot, or copilot powered by large language models, you can now run more users, serve more responses, and reduce latency—all while lowering cost per token. This card is a gamechanger. Key takeaways:
* **Supports LLaMA 3, Qwen2, Phi-3, DeepSeek-V3, and more**
* **Huge leap in speed: faster startup, shorter queues, less pod time**
* **Ideal for inference-focused deployment at scale**
| 2025-04-16T16:28:22 |
chikengunya
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0onnw
| false | null |
t3_1k0onnw
|
/r/LocalLLaMA/comments/1k0onnw/rtx_5090_now_available_on_runpodio/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'D-GEXzLUZO1b-3-tCeMDOz79Zwe24oZRx3s4uXhu69U', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=108&crop=smart&auto=webp&s=8a3f3afcb5212efcaaa6b0d0f5e2ce9a9651ad4a', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=216&crop=smart&auto=webp&s=43cb0f172b155b9c8ea965e7802e93394bdb760b', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=320&crop=smart&auto=webp&s=72e147314d0b184fa829a484f1ef08eda36e294b', 'width': 320}, {'height': 350, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=640&crop=smart&auto=webp&s=d8829f0fedf3200bfdfc4111793ebb6a9267149d', 'width': 640}, {'height': 525, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=960&crop=smart&auto=webp&s=38bcaed085a66b404e26d59d7f768a787fd26be0', 'width': 960}, {'height': 591, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?width=1080&crop=smart&auto=webp&s=beafd61117d094cd82cfee96499f94a607a7465c', 'width': 1080}], 'source': {'height': 623, 'url': 'https://preview.redd.it/ekuq8iwx38ve1.png?auto=webp&s=231c22efe932a50bcc781adc870ca6a30bd3acbf', 'width': 1138}, 'variants': {}}]}
|
||
Multi gpu interface
| 1 |
[removed]
| 2025-04-16T16:39:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0owze/multi_gpu_interface/
|
troughtspace
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0owze
| false | null |
t3_1k0owze
|
/r/LocalLLaMA/comments/1k0owze/multi_gpu_interface/
| false | false |
self
| 1 | null |
Results of Ollama Leakage
| 2 |
[https://www.freeollama.com](https://www.freeollama.com)
https://preview.redd.it/y81w2ern68ve1.png?width=1535&format=png&auto=webp&s=ec485ba98af5259fbd179a3bb8d4a3d382a3f6a4
Many servers still seem to be missing basic security.
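A quick way to check whether your own instance is part of that list is to hit Ollama's model-listing endpoint from outside your network; a small sketch, where the host is a placeholder you replace with your own public IP or hostname:

```python
# Self-check: does your Ollama port answer unauthenticated requests from outside?
import requests

PUBLIC_HOST = "203.0.113.10"  # placeholder (TEST-NET address), use your own

try:
    r = requests.get(f"http://{PUBLIC_HOST}:11434/api/tags", timeout=5)
    if r.ok:
        models = [m["name"] for m in r.json().get("models", [])]
        print("EXPOSED - anyone can list and use these models:", models)
    else:
        print("Port reachable but request rejected:", r.status_code)
except requests.RequestException:
    print("Not reachable from outside (good), or firewalled.")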
| 2025-04-16T16:42:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0p05d/results_of_ollama_leakage/
|
zxbsmk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0p05d
| false | null |
t3_1k0p05d
|
/r/LocalLLaMA/comments/1k0p05d/results_of_ollama_leakage/
| false | false | 2 | null |
|
Results of Ollama Leakage
| 113 |
Many servers still seem to be missing basic security.
[https://www.freeollama.com/](https://www.freeollama.com/)
| 2025-04-16T16:46:35 |
zxbsmk
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0p3h0
| false | null |
t3_1k0p3h0
|
/r/LocalLLaMA/comments/1k0p3h0/results_of_ollama_leakage/
| false | false | 113 |
{'enabled': True, 'images': [{'id': 'lReaEUW-EqI81GF5KG-Jx4OdNdsT45Nd0EEoUXb-yNw', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=108&crop=smart&auto=webp&s=996ca5f1359cb24cbfed8a46e618f8d461261810', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=216&crop=smart&auto=webp&s=a4697070bf7a92f2019a1738f1196a52039e3c3e', 'width': 216}, {'height': 121, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=320&crop=smart&auto=webp&s=7af87f3147f5994b469f8bdd976c36071ae1714c', 'width': 320}, {'height': 243, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=640&crop=smart&auto=webp&s=52549e31655556f832850c261393e3623b27e4f3', 'width': 640}, {'height': 364, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=960&crop=smart&auto=webp&s=65886dec4a42502b8a07d9f70d154a3bd4cf1d2a', 'width': 960}, {'height': 410, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?width=1080&crop=smart&auto=webp&s=d155248dc06ca523514e7b115b111433f6a943f7', 'width': 1080}], 'source': {'height': 583, 'url': 'https://preview.redd.it/kl4bv7ne78ve1.png?auto=webp&s=284857842f6ec16ddd8f3e147c4c77d19dd144d1', 'width': 1535}, 'variants': {}}]}
|
||
OpenAI Introducing OpenAI o3 and o4-mini
| 160 |
Today, OpenAI is releasing **o3** and **o4-mini**, the latest o-series models trained to think for longer before responding. These are the smartest models they've released to date, representing a step change in ChatGPT's capabilities for everyone from curious users to advanced researchers.
| 2025-04-16T17:08:52 |
https://openai.com/index/introducing-o3-and-o4-mini/
|
stocksavvy_ai
|
openai.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0pnvl
| false | null |
t3_1k0pnvl
|
/r/LocalLLaMA/comments/1k0pnvl/openai_introducing_openai_o3_and_o4mini/
| false | false |
default
| 160 | null |
OpenAI releases o3 and 04 mini
| 1 |
*Processing img 5gkb02q8c8ve1...*
| 2025-04-16T17:13:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0prvk/openai_releases_o3_and_04_mini/
|
_Sneaky_Bastard_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0prvk
| false | null |
t3_1k0prvk
|
/r/LocalLLaMA/comments/1k0prvk/openai_releases_o3_and_04_mini/
| false | false |
self
| 1 | null |
New models benchmarks by OpenAI (o3 full and o4 mini)
| 1 |
[removed]
| 2025-04-16T17:19:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0pxej/new_models_benchmarks_by_openai_o3_full_and_o4/
|
_Sneaky_Bastard_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0pxej
| false | null |
t3_1k0pxej
|
/r/LocalLLaMA/comments/1k0pxej/new_models_benchmarks_by_openai_o3_full_and_o4/
| false | false | 1 | null |
|
New models benchmarks by ClosedAI (o3 full and o4 mini)
| 0 | 2025-04-16T17:20:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0pywt/new_models_benchmarks_by_closedai_o3_full_and_o4/
|
_Sneaky_Bastard_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0pywt
| false | null |
t3_1k0pywt
|
/r/LocalLLaMA/comments/1k0pywt/new_models_benchmarks_by_closedai_o3_full_and_o4/
| false | false | 0 | null |
||
Hugging Face has launched a reasoning datasets competition with Bespoke Labs and Together AI
| 25 |
Reasoning datasets currently dominate Hugging Face's trending datasets, but they mostly focus on code and maths. Along with Bespoke Labs and Together AI, we've launched a competition to try and diversify this landscape by encouraging new reasoning datasets focusing on underexplored domains or tasks.
Key details:
* Create a proof-of-concept dataset (minimum 100 examples)
* Upload to Hugging Face Hub with tag "reasoning-datasets-competition"
* Deadline: May 1, 2025
* Prizes: $3,000+ in cash/credits
* All participants get $50 in [Together.ai](http://Together.ai) API credits
We welcome datasets in various domains (e.g., legal, financial, literary, ethics) and novel tasks (e.g., structured data extraction, zero-shot classification). We're also interested in datasets supporting the broader "reasoning ecosystem."
For inspiration, I made my own proof of concept dataset [davanstrien/fine-reasoning-questions](https://huggingface.co/datasets/davanstrien/fine-reasoning-questions), which generates reasoning questions from web text using a pipeline approach. First, I trained a smaller ModernBERT-based classifier to identify texts that require complex reasoning, then filtered FineWeb-Edu content based on reasoning scores, classified topics, and finally used Qwen/QWQ-32B to generate the reasoning questions. I hope this approach demonstrates how you can create domain-focused reasoning datasets without starting from scratch/needing a ton of GPUs.
Full details: [https://huggingface.co/blog/bespokelabs/reasoning-datasets-competition](https://huggingface.co/blog/bespokelabs/reasoning-datasets-competition)
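For anyone entering, a sketch of pushing a proof-of-concept dataset and tagging it for the competition with the `datasets` and `huggingface_hub` libraries; the repo id and example rows are placeholders, and you need to be logged in (`huggingface-cli login` or `HF_TOKEN`) first:

```python
# Sketch: push a proof-of-concept reasoning dataset and add the competition tag.
from datasets import Dataset
from huggingface_hub import metadata_update

rows = [
    {
        "question": "A contract clause allows termination with 30 days notice...",
        "reasoning": "Step 1: identify the governing clause. Step 2: ...",
        "answer": "Termination is valid only if notice was served in writing.",
    },
    # ... at least 100 examples for a valid entry
]

repo_id = "your-username/legal-reasoning-poc"  # placeholder
Dataset.from_list(rows).push_to_hub(repo_id)

# The competition tag lives in the dataset card's YAML metadata.
metadata_update(
    repo_id,
    {"tags": ["reasoning-datasets-competition"]},
    repo_type="dataset",
)
```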
| 2025-04-16T17:22:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0q0bc/hugging_face_has_launched_a_reasoning_datasets/
|
dvanstrien
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0q0bc
| false | null |
t3_1k0q0bc
|
/r/LocalLLaMA/comments/1k0q0bc/hugging_face_has_launched_a_reasoning_datasets/
| false | false |
self
| 25 |
{'enabled': False, 'images': [{'id': 'KEKqZLXaDuojO8066WvfNm2knPNQpREJOqDRQbP0jOE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=108&crop=smart&auto=webp&s=c4356a09ff651d99050d2e2f7c625136bd5cc50d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=216&crop=smart&auto=webp&s=2efb5516e5e9493aedbb8874a4346aea1e2fdfe3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=320&crop=smart&auto=webp&s=5760f28068be8d1404c060058ca5dc7138a3921c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=640&crop=smart&auto=webp&s=5040e75d875b032b45e4cafad1ca6eed231c2aa5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=960&crop=smart&auto=webp&s=678233eb228e31658cc7dc6f24ff3c4c199255ec', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?width=1080&crop=smart&auto=webp&s=e9407e720f5a5c73c6566e3b787afc17181bbb3f', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/qjk8U15PVqk_QNAR85GDkE6bEzmE2vOAbbKeJark_ng.jpg?auto=webp&s=610ce8e238d743540ebac62332adfbc058d7c11d', 'width': 2400}, 'variants': {}}]}
|
Advice for coding setup
| 2 |
So, I went down a rabbit hole today trying to figure out how to crawl some websites looking for a specific item. I asked ChatGPT and it offered to write a Python script... I don't know Python; I know Perl (RIP) and some other languages (C, Java, etc. ... the usual suspects), and I don't code anything day-to-day, so I would need to rely 100% on the AI. I figured I'd give it a shot. Getting everything set up and getting a working script took 2-3 hours, and the script is running into all sorts of issues... ChatGPT didn't know the right functions in the libraries it was using, it had a lot of trouble walking me through building the right environment (I wanted a Docker container based on code-server so I could run the script on my server and use VSCode, my preferred tool), and it kept going in circles, doing complete rewrites of the script to add 1-2 lines unless I fed in the entire script and asked it to alter it (which eats up a lot of context).
This led me to conclude that this was simply the wrong tool for the job. I have run a number of local LLMs before on my 3090 for odd tasks using LM Studio, but never done any coding-specific queries. I am curious about best practices and recommendations for using a local LLM for coding--I thought there were tools that let you interact directly in the IDE and have it generate code directly?
Thanks in advance for any help or guidance!
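Since LM Studio and llama.cpp's `llama-server` both expose an OpenAI-compatible endpoint, one common setup is to point either a script or an IDE assistant (e.g. the Continue extension for VS Code) at that local server. A minimal sketch with the official `openai` client; the base URL, port, and model id are placeholders for whatever your local server reports:

```python
# Minimal sketch: talk to a local LM Studio / llama.cpp server through its
# OpenAI-compatible API. base_url, port, and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen2.5-coder-32b-instruct",  # placeholder model id
    messages=[
        {"role": "system", "content": "You are a careful Python coding assistant."},
        {"role": "user", "content": "Write a function that crawls a page and lists all product links."},
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```

IDE plugins that support custom OpenAI-compatible endpoints can use the same URL, which covers the "generate code directly in the IDE" part without sending anything to the cloud.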
| 2025-04-16T17:23:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0q1r4/advice_for_coding_setup/
|
JustTooKrul
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0q1r4
| false | null |
t3_1k0q1r4
|
/r/LocalLLaMA/comments/1k0q1r4/advice_for_coding_setup/
| false | false |
self
| 2 | null |
o4-mini is 186ᵗʰ best coder, sleep well platter! Enjoy retirement!
| 46 | 2025-04-16T17:34:46 |
BidHot8598
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qbme
| false | null |
t3_1k0qbme
|
/r/LocalLLaMA/comments/1k0qbme/o4mini_is_186ᵗʰ_best_coder_sleep_well_platter/
| false | false | 46 |
{'enabled': True, 'images': [{'id': 'lH6hbALQgV9WtLayaYnXTJltXkaGu9OJpV_iLS3eV7Y', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=108&crop=smart&auto=webp&s=0dea2157e50b60c312d120068e4b1109d6b7f9d0', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=216&crop=smart&auto=webp&s=f8ddc1d80085e06c857f630d8627172c1038231c', 'width': 216}, {'height': 429, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=320&crop=smart&auto=webp&s=180cd2648614bc80d536ca8512487529be06cf33', 'width': 320}, {'height': 858, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=640&crop=smart&auto=webp&s=3d96f80ad111301c2dfe2b713b55f0121905d377', 'width': 640}, {'height': 1287, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=960&crop=smart&auto=webp&s=504c01a25a4015f935f392e772bb4bfe12f230f3', 'width': 960}, {'height': 1448, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?width=1080&crop=smart&auto=webp&s=3f4cdb33bfd9336063236c75671370b853761b21', 'width': 1080}], 'source': {'height': 3300, 'url': 'https://preview.redd.it/0p5ymcc7g8ve1.jpeg?auto=webp&s=a44e1539477b870c7981b1c6b388d7cf0191e3d8', 'width': 2460}, 'variants': {}}]}
|
|||
OpenAI introduces codex: a lightweight coding agent that runs in your terminal
| 66 | 2025-04-16T17:42:48 |
https://github.com/openai/codex
|
MorroWtje
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qisr
| false | null |
t3_1k0qisr
|
/r/LocalLLaMA/comments/1k0qisr/openai_introduces_codex_a_lightweight_coding/
| false | false | 66 |
{'enabled': False, 'images': [{'id': 'c5yMk06ALmin9Id902Wv4x94aNFWRkxA1UNfUEWDji0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=108&crop=smart&auto=webp&s=b7d6ddeaed3541bdd3e78480c53a60c5560481ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=216&crop=smart&auto=webp&s=5d24ac2c1f9143346623c906a3bae09248d98b33', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=320&crop=smart&auto=webp&s=f20c21232d3e8fc8048c5de27df89ec890b9958d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=640&crop=smart&auto=webp&s=9814fd7577317ca58f6bc696ee800e0ebe489eab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=960&crop=smart&auto=webp&s=a2bfd2324b8c56d376e36095540cf9b877f7d345', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=1080&crop=smart&auto=webp&s=cdc96710598afb45c4842c2e1c5a8dc430fa617e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?auto=webp&s=c3ea1a9c29dbfd5d96a2d376d796c4a7d3366475', 'width': 1200}, 'variants': {}}]}
|
||
The Most Underrated Tool in AI Evals
| 6 |
Since the utterance of "Evals is all you need," developers have been trying to make sense of the right benchmarks, judge strategies, or LM Arena rankings.
Recently, more have come to prioritize "value" for their users and business. The need for contextualized evaluation begets ever newer strategies for asking an LLM to assess the LLM.
But there is no need for a fancy new technique: A/B testing remains the gold standard for evaluating ANY software change in production. That's why LaunchDarkly has been plastering ads in r/LocalLLaMA.
I loved this Yelp engineering blog on how they use these offline evaluation methods to ramp up to a controlled experiment: [https://engineeringblog.yelp.com/2025/02/search-query-understanding-with-LLMs.html](https://engineeringblog.yelp.com/2025/02/search-query-understanding-with-LLMs.html)
The risk of institutionalizing bad intel outweighs the upside of launching faster. Without a robust evaluation workflow, you'll be rooting out those problems for many sprints to come.
What do you think? Can you skip the real test because the LLM told you it's all good?
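As a concrete illustration of the A/B route, a tiny sketch of deterministic user bucketing for ramping an LLM change to a slice of traffic; the hash salt and 10% ramp are arbitrary illustrative choices, not anything from the Yelp post:

```python
# Deterministic bucketing: the same user always lands in the same arm,
# so you can compare metrics between the old and new prompt/model.
import hashlib

def in_treatment(user_id: str, experiment: str = "new-prompt-v2", ramp: float = 0.10) -> bool:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1)
    return bucket < ramp

prompt_version = "v2" if in_treatment("user-1234") else "v1"
```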
| 2025-04-16T17:47:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0qmtr/the_most_underrated_tool_in_ai_evals/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qmtr
| false | null |
t3_1k0qmtr
|
/r/LocalLLaMA/comments/1k0qmtr/the_most_underrated_tool_in_ai_evals/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': '8cN5RTb0ftnaLDBbSeKMDdb3-fSxdR_OxKhoXUyn0-Y', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/tO_tXT8G3ghrFfoctqWhq07NURRhqb0I1rTG8UddjUM.jpg?width=108&crop=smart&auto=webp&s=24fb4924609d419f57b2a8d89c4ba4a844e85c27', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/tO_tXT8G3ghrFfoctqWhq07NURRhqb0I1rTG8UddjUM.jpg?width=216&crop=smart&auto=webp&s=6ecca1486cae3d29ac12557d8b25b43f2cb58dbc', 'width': 216}, {'height': 254, 'url': 'https://external-preview.redd.it/tO_tXT8G3ghrFfoctqWhq07NURRhqb0I1rTG8UddjUM.jpg?width=320&crop=smart&auto=webp&s=2ce33d2acb6e67e79d932758cac9d896082f3b8d', 'width': 320}, {'height': 508, 'url': 'https://external-preview.redd.it/tO_tXT8G3ghrFfoctqWhq07NURRhqb0I1rTG8UddjUM.jpg?width=640&crop=smart&auto=webp&s=b00e8838b4cdfb86f0228db2ccda670813bad802', 'width': 640}], 'source': {'height': 619, 'url': 'https://external-preview.redd.it/tO_tXT8G3ghrFfoctqWhq07NURRhqb0I1rTG8UddjUM.jpg?auto=webp&s=138d5750ad3ad4b3730719f665470a15dd125b77', 'width': 779}, 'variants': {}}]}
|
o3 and o4-mini are out!
| 0 | 2025-04-16T17:48:56 |
CarbonTail
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qo8a
| false | null |
t3_1k0qo8a
|
/r/LocalLLaMA/comments/1k0qo8a/o3_and_o4mini_are_out/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'ADPpkTFFZvDTTIVPipgwLcZbnevnDylLCnyME_azmeM', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/1cuyyvgoi8ve1.png?width=108&crop=smart&auto=webp&s=a392d55df93a7d1aa489d21dfc2f0790b73ee13e', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/1cuyyvgoi8ve1.png?width=216&crop=smart&auto=webp&s=5fa64b2ba8816bec3b4d3c2c70252f6fe23caf17', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/1cuyyvgoi8ve1.png?width=320&crop=smart&auto=webp&s=0cf2c3a23b3102535f33e84b57bb82ef4ba6af8a', 'width': 320}], 'source': {'height': 446, 'url': 'https://preview.redd.it/1cuyyvgoi8ve1.png?auto=webp&s=6cfc02be949a07a186d7923c86c5bbd9d31944ae', 'width': 600}, 'variants': {}}]}
|
|||
LLM distribution over Different OS
| 1 |
[removed]
| 2025-04-16T17:50:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0qpwy/llm_distribution_over_different_os/
|
No_Draft_8756
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qpwy
| false | null |
t3_1k0qpwy
|
/r/LocalLLaMA/comments/1k0qpwy/llm_distribution_over_different_os/
| false | false |
self
| 1 | null |
Social Media scheduler MCP
| 1 |
[removed]
| 2025-04-16T17:51:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0qqky/social_media_scheduler_mcp/
|
sleepysiding22
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qqky
| false | null |
t3_1k0qqky
|
/r/LocalLLaMA/comments/1k0qqky/social_media_scheduler_mcp/
| false | false |
self
| 1 | null |
Open Source tool from OpenAI for Coding Agent in terminal
| 7 |
repo: [https://github.com/openai/codex](https://github.com/openai/codex)
The real question is: can we use it with local reasoning models?
| 2025-04-16T17:57:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0qw6k/open_source_tool_from_openai_for_coding_agent_in/
|
_anotherRandomGuy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0qw6k
| false | null |
t3_1k0qw6k
|
/r/LocalLLaMA/comments/1k0qw6k/open_source_tool_from_openai_for_coding_agent_in/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'c5yMk06ALmin9Id902Wv4x94aNFWRkxA1UNfUEWDji0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=108&crop=smart&auto=webp&s=b7d6ddeaed3541bdd3e78480c53a60c5560481ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=216&crop=smart&auto=webp&s=5d24ac2c1f9143346623c906a3bae09248d98b33', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=320&crop=smart&auto=webp&s=f20c21232d3e8fc8048c5de27df89ec890b9958d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=640&crop=smart&auto=webp&s=9814fd7577317ca58f6bc696ee800e0ebe489eab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=960&crop=smart&auto=webp&s=a2bfd2324b8c56d376e36095540cf9b877f7d345', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=1080&crop=smart&auto=webp&s=cdc96710598afb45c4842c2e1c5a8dc430fa617e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?auto=webp&s=c3ea1a9c29dbfd5d96a2d376d796c4a7d3366475', 'width': 1200}, 'variants': {}}]}
|
Best deep research agents?
| 9 |
We know OpenAI Deep Research is the best; Grok and Perplexity are in the next tier. Are there any open-source or closed implementations currently better than OpenAI's?
| 2025-04-16T18:09:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0r6mb/best_deep_research_agents/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0r6mb
| false | null |
t3_1k0r6mb
|
/r/LocalLLaMA/comments/1k0r6mb/best_deep_research_agents/
| false | false |
self
| 9 | null |
Llama.cpp has much higher generation quality for Gemma 3 27B on M4 Max
| 35 |
When running the llama.cpp WebUI with:
```sh
llama-server -m Gemma-3-27B-Instruct-Q6_K.gguf \
    --seed 42 \
    --mlock \
    --n-gpu-layers -1 \
    --ctx-size 8096 \
    --port 10000 \
    --temp 1.0 \
    --top-k 64 \
    --top-p 0.95 \
    --min-p 0.0
```
And when running Ollama through OpenWebUI with the same temp, top-p, top-k, and min-p, I get dramatically worse quality.
For example, when I ask it to add a feature to a Python script, llama.cpp correctly adds the piece of code needed without any unnecessary edits, while Ollama completely rewrites the script, making so many syntax mistakes that the linter catches tons of them before the script even runs.
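One thing worth ruling out before blaming the backend: Ollama applies its own defaults (most notably a 2048-token `num_ctx`) unless every option is set explicitly, so "the same samplers" in the WebUI may not be what actually reaches the model. A sketch that pins every option through Ollama's API so the two setups are comparable; the model tag is a placeholder:

```python
# Sketch: call Ollama directly with every sampling option pinned to match the
# llama-server flags above (num_ctx silently defaults to 2048 if left unset).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:27b",  # placeholder tag
        "prompt": "Add a --dry-run flag to the script below without rewriting it.\n...",
        "stream": False,
        "options": {
            "seed": 42,
            "num_ctx": 8096,
            "temperature": 1.0,
            "top_k": 64,
            "top_p": 0.95,
            "min_p": 0.0,
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```

If the quality gap persists with identical options and context size, the difference is more likely in the GGUF conversion or chat template than in the samplers.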
| 2025-04-16T18:12:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0r9pi/llamacpp_has_much_higher_generation_quality_for/
|
IonizedRay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0r9pi
| false | null |
t3_1k0r9pi
|
/r/LocalLLaMA/comments/1k0r9pi/llamacpp_has_much_higher_generation_quality_for/
| false | false |
self
| 35 | null |
Initial vibe tests for o4-mini-high and o3
| 0 | 2025-04-16T18:14:32 |
https://v.redd.it/of7tqo89n8ve1
|
sirjoaco
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0rb8h
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/of7tqo89n8ve1/DASHPlaylist.mpd?a=1747419290%2CMTNkMTFjNjVkNDQ4YzU5ZjMyNjYwYjk5YzgyZjAxZjkxYTk0ZjExY2RiODg0YjcxNzY5ZWYzODNlOTE2NDUyOQ%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/of7tqo89n8ve1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/of7tqo89n8ve1/HLSPlaylist.m3u8?a=1747419290%2CY2Q3YTY1ODIxYzkxOGY2NWMwZDU3MzI4MWY2NjViNGQyM2Q1NDE2MzZlMDdiYjdmNjFhMTg1YjVkYjM0YzUyOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/of7tqo89n8ve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k0rb8h
|
/r/LocalLLaMA/comments/1k0rb8h/initial_vibe_tests_for_o4minihigh_and_o3/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=108&crop=smart&format=pjpg&auto=webp&s=ee7e0d59becff7898c61d7b67aad1d56d497b3bd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=216&crop=smart&format=pjpg&auto=webp&s=ad402acf8107822404689dacabe2d18cb7ba4b6f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=320&crop=smart&format=pjpg&auto=webp&s=99ce2a5803d51e9af5c916c3ac99253ed629b495', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=640&crop=smart&format=pjpg&auto=webp&s=2619e0b9728fb37f7b2bd9ed5d52f0b04ff5fe58', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=960&crop=smart&format=pjpg&auto=webp&s=265106982ce59421a00fe9b0ede6f43a62ba5c8c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?width=1080&crop=smart&format=pjpg&auto=webp&s=011fe6c80b0b8645fc5150abf8626bbeb7f733e7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NGU3MWpvODluOHZlMYfHNcBLok_plq1dkpLpUsPTCNFAMU1-2Sk6x8PRc384.png?format=pjpg&auto=webp&s=671a24590eb0b5edefa2c6516067a8ea2519728b', 'width': 1920}, 'variants': {}}]}
|
||
What’s the most unexpectedly useful thing you’ve used AI for?
| 1 |
[removed]
| 2025-04-16T18:18:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0rf5k/whats_the_most_unexpectedly_useful_thing_youve/
|
Ausbel12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0rf5k
| false | null |
t3_1k0rf5k
|
/r/LocalLLaMA/comments/1k0rf5k/whats_the_most_unexpectedly_useful_thing_youve/
| false | false |
self
| 1 | null |
What are some Local search offerings that are competitive with OpenAI/Google, if such a thing can exist?
| 5 |
[I was excited to ask about the new models, but those citations are unrelated to my query (pure hallucination). Also, 1 minute for a simple question is totally unacceptable.](https://preview.redd.it/8jfl3fj9q8ve1.png?width=2466&format=png&auto=webp&s=f48aab9b34dd199482fc0248d8f1c320a21f8331)
[I asked the same thing to 4o on a different account, with search enabled](https://preview.redd.it/kebxiirtr8ve1.jpg?width=1170&format=pjpg&auto=webp&s=64e7da8830468d9483cf9f12a246e7f0f7224580)
[The right answer was on OpenAI's blog](https://preview.redd.it/el0js5oor8ve1.png?width=1350&format=png&auto=webp&s=733cb01461f82051e39dd9f821773578bb477064)
[https://openai.com/index/introducing-o3-and-o4-mini/](https://openai.com/index/introducing-o3-and-o4-mini/)
Google was fast but didn't give me any relevant results at all, and ChatGPT can't even answer questions about itself. Where do I go for information?
| 2025-04-16T18:45:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0s2cx/what_are_some_local_search_offerings_that_are/
|
m1tm0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0s2cx
| false | null |
t3_1k0s2cx
|
/r/LocalLLaMA/comments/1k0s2cx/what_are_some_local_search_offerings_that_are/
| false | false | 5 | null |
|
Does MacBook Air 16GB vs 24GB make a difference?
| 1 |
[removed]
| 2025-04-16T18:48:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0s52b/does_macbook_air_16gb_vs_24gb_madhe_a_difference/
|
kkgmgfn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0s52b
| false | null |
t3_1k0s52b
|
/r/LocalLLaMA/comments/1k0s52b/does_macbook_air_16gb_vs_24gb_madhe_a_difference/
| false | false |
self
| 1 | null |
Anyone run into build issues with the latest releases?
| 2 |
My environment:
- Win 11, 5900X CPU, 6900XT GPU, 5700XT GPU, 64GB RAM
I had previously built llama.cpp from source with great success and used it quite often to run inference models on my PC. I decided last week to pull the latest llama.cpp updates, tried to build it, and now run into errors. I created an issue on GitHub and have had no response as of yet. Just curious if anyone else has encountered this?
Things I have tried:
- remove the build directory and try again
- remove the Vulkan flag
trog@dor-PC UCRT64 ~/localLlama/llama.cpp
# cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=OFF -DLLAMA_BUILD_TESTS=OFF -DLLAMA_BUILD_EXAMPLES=ON -DLLAMA_BUILD_SERVER=ON
-- Building for: Ninja
-- The C compiler identification is GNU 14.2.0
-- The CXX compiler identification is GNU 14.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/msys64/ucrt64/bin/cc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/msys64/ucrt64/bin/c++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/msys64/usr/bin/git.exe (found version "2.47.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- Found Vulkan: C:/VulkanSDK/1.4.309.0/Lib/vulkan-1.lib (found version "1.4.309") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix supported by glslc
-- GL_NV_cooperative_matrix2 supported by glslc
-- GL_EXT_integer_dot_product supported by glslc
-- Including Vulkan backend
-- Found CURL: C:/msys64/ucrt64/lib/cmake/CURL/CURLConfig.cmake (found version "8.11.0")
-- Configuring done (5.3s)
-- Generating done (0.2s)
-- Build files have been written to: C:/Users/trog/localLlama/llama.cpp/build
trog@dor-PC UCRT64 ~/localLlama/llama.cpp
# cmake --build build --config Release
[4/161] Generating build details from Git
-- Found Git: C:/msys64/usr/bin/git.exe (found version "2.47.1")
[30/161] Generate vulkan shaders
ggml_vulkan: Generating and compiling shaders to SPIR-V
[80/161] Building CXX object examples/llava/CMakeFiles/llava.dir/llava.cpp.obj
FAILED: examples/llava/CMakeFiles/llava.dir/llava.cpp.obj
C:\msys64\ucrt64\bin\c++.exe -DGGML_USE_CPU -DGGML_USE_VULKAN -D_CRT_SECURE_NO_WARNINGS -IC:/Users/trog/localLlama/llama.cpp/examples -IC:/Users/trog/localLlama/llama.cpp/examples/llava/. -IC:/Users/trog/localLlama/llama.cpp/examples/llava/../.. -IC:/Users/trog/localLlama/llama.cpp/examples/llava/../../common -IC:/Users/trog/localLlama/llama.cpp/ggml/src/../include -IC:/Users/trog/localLlama/llama.cpp/src/. -IC:/Users/trog/localLlama/llama.cpp/src/../include -O3 -DNDEBUG -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -Wno-cast-qual -MD -MT examples/llava/CMakeFiles/llava.dir/llava.cpp.obj -MF examples\llava\CMakeFiles\llava.dir\llava.cpp.obj.d -o examples/llava/CMakeFiles/llava.dir/llava.cpp.obj -c C:/Users/trog/localLlama/llama.cpp/examples/llava/llava.cpp
In file included from C:/Users/trog/localLlama/llama.cpp/include/llama.h:4,
from C:/Users/trog/localLlama/llama.cpp/examples/llava/llava.cpp:4:
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:320:10: error: multiple definition of 'enum ggml_status'
320 | enum ggml_status {
| ^~~~~~~~~~~
In file included from C:/Users/trog/localLlama/llama.cpp/examples/llava/clip.h:4,
from C:/Users/trog/localLlama/llama.cpp/examples/llava/llava.cpp:1:
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:320:10: note: previous definition here
320 | enum ggml_status {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:339:39: error: conflicting declaration 'typedef struct ggml_bf16_t ggml_bf16_t'
339 | typedef struct { uint16_t bits; } ggml_bf16_t;
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:339:39: note: previous declaration as 'typedef struct ggml_bf16_t ggml_bf16_t'
339 | typedef struct { uint16_t bits; } ggml_bf16_t;
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:351:10: error: multiple definition of 'enum ggml_type'
351 | enum ggml_type {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:351:10: note: previous definition here
351 | enum ggml_type {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:395:10: error: multiple definition of 'enum ggml_prec'
395 | enum ggml_prec {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:395:10: note: previous definition here
395 | enum ggml_prec {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:401:10: error: multiple definition of 'enum ggml_ftype'
401 | enum ggml_ftype {
| ^~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:401:10: note: previous definition here
401 | enum ggml_ftype {
| ^~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:429:10: error: multiple definition of 'enum ggml_op'
429 | enum ggml_op {
| ^~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:429:10: note: previous definition here
429 | enum ggml_op {
| ^~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:528:10: error: multiple definition of 'enum ggml_unary_op'
528 | enum ggml_unary_op {
| ^~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:523:10: note: previous definition here
523 | enum ggml_unary_op {
| ^~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:547:10: error: multiple definition of 'enum ggml_object_type'
547 | enum ggml_object_type {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:542:10: note: previous definition here
542 | enum ggml_object_type {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:553:10: error: multiple definition of 'enum ggml_log_level'
553 | enum ggml_log_level {
| ^~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:548:10: note: previous definition here
548 | enum ggml_log_level {
| ^~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:563:10: error: multiple definition of 'enum ggml_tensor_flag'
563 | enum ggml_tensor_flag {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:558:10: note: previous definition here
558 | enum ggml_tensor_flag {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:570:12: error: redefinition of 'struct ggml_init_params'
570 | struct ggml_init_params {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:565:12: note: previous definition of 'struct ggml_init_params'
565 | struct ggml_init_params {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:578:12: error: redefinition of 'struct ggml_tensor'
578 | struct ggml_tensor {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:573:12: note: previous definition of 'struct ggml_tensor'
573 | struct ggml_tensor {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:612:25: error: redefinition of 'const size_t GGML_TENSOR_SIZE'
612 | static const size_t GGML_TENSOR_SIZE = sizeof(struct ggml_tensor);
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:607:25: note: 'const size_t GGML_TENSOR_SIZE' previously defined here
607 | static const size_t GGML_TENSOR_SIZE = sizeof(struct ggml_tensor);
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1686:10: error: multiple definition of 'enum ggml_op_pool'
1686 | enum ggml_op_pool {
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1681:10: note: previous definition here
1681 | enum ggml_op_pool {
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1728:35: error: conflicting declaration of C function 'ggml_tensor* ggml_upscale(ggml_context*, ggml_tensor*, int)'
1728 | GGML_API struct ggml_tensor * ggml_upscale(
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1727:35: note: previous declaration 'ggml_tensor* ggml_upscale(ggml_context*, ggml_tensor*, int, ggml_scale_mode)'
1727 | GGML_API struct ggml_tensor * ggml_upscale(
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1736:35: error: conflicting declaration of C function 'ggml_tensor* ggml_upscale_ext(ggml_context*, ggml_tensor*, int, int, int, int)'
1736 | GGML_API struct ggml_tensor * ggml_upscale_ext(
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1735:35: note: previous declaration 'ggml_tensor* ggml_upscale_ext(ggml_context*, ggml_tensor*, int, int, int, int, ggml_scale_mode)'
1735 | GGML_API struct ggml_tensor * ggml_upscale_ext(
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1770:10: error: multiple definition of 'enum ggml_sort_order'
1770 | enum ggml_sort_order {
| ^~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1770:10: note: previous definition here
1770 | enum ggml_sort_order {
| ^~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2176:12: error: redefinition of 'struct ggml_type_traits'
2176 | struct ggml_type_traits {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2123:12: note: previous definition of 'struct ggml_type_traits'
2123 | struct ggml_type_traits {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2193:10: error: multiple definition of 'enum ggml_sched_priority'
2193 | enum ggml_sched_priority {
| ^~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2140:10: note: previous definition here
2140 | enum ggml_sched_priority {
| ^~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2202:12: error: redefinition of 'struct ggml_threadpool_params'
2202 | struct ggml_threadpool_params {
| ^~~~~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2149:12: note: previous definition of 'struct ggml_threadpool_params'
2149 | struct ggml_threadpool_params {
| ^~~~~~~~~~~~~~~~~~~~~~
[81/161] Building CXX object examples/llava/CMakeFiles/mtmd.dir/mtmd.cpp.obj
FAILED: examples/llava/CMakeFiles/mtmd.dir/mtmd.cpp.obj
C:\msys64\ucrt64\bin\c++.exe -DGGML_USE_CPU -DGGML_USE_VULKAN -D_CRT_SECURE_NO_WARNINGS -IC:/Users/trog/localLlama/llama.cpp/examples -IC:/Users/trog/localLlama/llama.cpp/examples/llava/. -IC:/Users/trog/localLlama/llama.cpp/examples/llava/../.. -IC:/Users/trog/localLlama/llama.cpp/examples/llava/../../common -IC:/Users/trog/localLlama/llama.cpp/ggml/src/../include -IC:/Users/trog/localLlama/llama.cpp/src/. -IC:/Users/trog/localLlama/llama.cpp/src/../include -O3 -DNDEBUG -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -Wno-cast-qual -MD -MT examples/llava/CMakeFiles/mtmd.dir/mtmd.cpp.obj -MF examples\llava\CMakeFiles\mtmd.dir\mtmd.cpp.obj.d -o examples/llava/CMakeFiles/mtmd.dir/mtmd.cpp.obj -c C:/Users/trog/localLlama/llama.cpp/examples/llava/mtmd.cpp
In file included from C:/Users/trog/localLlama/llama.cpp/include/llama.h:4,
from C:/Users/trog/localLlama/llama.cpp/examples/llava/mtmd.h:5,
from C:/Users/trog/localLlama/llama.cpp/examples/llava/mtmd.cpp:3:
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:320:10: error: multiple definition of 'enum ggml_status'
320 | enum ggml_status {
| ^~~~~~~~~~~
In file included from C:/Users/trog/localLlama/llama.cpp/examples/llava/clip.h:4,
from C:/Users/trog/localLlama/llama.cpp/examples/llava/mtmd.cpp:1:
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:320:10: note: previous definition here
320 | enum ggml_status {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:339:39: error: conflicting declaration 'typedef struct ggml_bf16_t ggml_bf16_t'
339 | typedef struct { uint16_t bits; } ggml_bf16_t;
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:339:39: note: previous declaration as 'typedef struct ggml_bf16_t ggml_bf16_t'
339 | typedef struct { uint16_t bits; } ggml_bf16_t;
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:351:10: error: multiple definition of 'enum ggml_type'
351 | enum ggml_type {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:351:10: note: previous definition here
351 | enum ggml_type {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:395:10: error: multiple definition of 'enum ggml_prec'
395 | enum ggml_prec {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:395:10: note: previous definition here
395 | enum ggml_prec {
| ^~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:401:10: error: multiple definition of 'enum ggml_ftype'
401 | enum ggml_ftype {
| ^~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:401:10: note: previous definition here
401 | enum ggml_ftype {
| ^~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:429:10: error: multiple definition of 'enum ggml_op'
429 | enum ggml_op {
| ^~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:429:10: note: previous definition here
429 | enum ggml_op {
| ^~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:528:10: error: multiple definition of 'enum ggml_unary_op'
528 | enum ggml_unary_op {
| ^~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:523:10: note: previous definition here
523 | enum ggml_unary_op {
| ^~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:547:10: error: multiple definition of 'enum ggml_object_type'
547 | enum ggml_object_type {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:542:10: note: previous definition here
542 | enum ggml_object_type {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:553:10: error: multiple definition of 'enum ggml_log_level'
553 | enum ggml_log_level {
| ^~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:548:10: note: previous definition here
548 | enum ggml_log_level {
| ^~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:563:10: error: multiple definition of 'enum ggml_tensor_flag'
563 | enum ggml_tensor_flag {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:558:10: note: previous definition here
558 | enum ggml_tensor_flag {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:570:12: error: redefinition of 'struct ggml_init_params'
570 | struct ggml_init_params {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:565:12: note: previous definition of 'struct ggml_init_params'
565 | struct ggml_init_params {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:578:12: error: redefinition of 'struct ggml_tensor'
578 | struct ggml_tensor {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:573:12: note: previous definition of 'struct ggml_tensor'
573 | struct ggml_tensor {
| ^~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:612:25: error: redefinition of 'const size_t GGML_TENSOR_SIZE'
612 | static const size_t GGML_TENSOR_SIZE = sizeof(struct ggml_tensor);
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:607:25: note: 'const size_t GGML_TENSOR_SIZE' previously defined here
607 | static const size_t GGML_TENSOR_SIZE = sizeof(struct ggml_tensor);
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1686:10: error: multiple definition of 'enum ggml_op_pool'
1686 | enum ggml_op_pool {
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1681:10: note: previous definition here
1681 | enum ggml_op_pool {
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1728:35: error: conflicting declaration of C function 'ggml_tensor* ggml_upscale(ggml_context*, ggml_tensor*, int)'
1728 | GGML_API struct ggml_tensor * ggml_upscale(
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1727:35: note: previous declaration 'ggml_tensor* ggml_upscale(ggml_context*, ggml_tensor*, int, ggml_scale_mode)'
1727 | GGML_API struct ggml_tensor * ggml_upscale(
| ^~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1736:35: error: conflicting declaration of C function 'ggml_tensor* ggml_upscale_ext(ggml_context*, ggml_tensor*, int, int, int, int)'
1736 | GGML_API struct ggml_tensor * ggml_upscale_ext(
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1735:35: note: previous declaration 'ggml_tensor* ggml_upscale_ext(ggml_context*, ggml_tensor*, int, int, int, int, ggml_scale_mode)'
1735 | GGML_API struct ggml_tensor * ggml_upscale_ext(
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:1770:10: error: multiple definition of 'enum ggml_sort_order'
1770 | enum ggml_sort_order {
| ^~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:1770:10: note: previous definition here
1770 | enum ggml_sort_order {
| ^~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2176:12: error: redefinition of 'struct ggml_type_traits'
2176 | struct ggml_type_traits {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2123:12: note: previous definition of 'struct ggml_type_traits'
2123 | struct ggml_type_traits {
| ^~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2193:10: error: multiple definition of 'enum ggml_sched_priority'
2193 | enum ggml_sched_priority {
| ^~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2140:10: note: previous definition here
2140 | enum ggml_sched_priority {
| ^~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/include/ggml.h:2202:12: error: redefinition of 'struct ggml_threadpool_params'
2202 | struct ggml_threadpool_params {
| ^~~~~~~~~~~~~~~~~~~~~~
C:/Users/trog/localLlama/llama.cpp/ggml/include/ggml.h:2149:12: note: previous definition of 'struct ggml_threadpool_params'
2149 | struct ggml_threadpool_params {
| ^~~~~~~~~~~~~~~~~~~~~~
[105/161] Building CXX object ggml/src/ggml-vulkan/CMakeFiles/ggml-vulkan.dir/ggml-vulkan.cpp.obj
C:/Users/trog/localLlama/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp: In function 'vk_pipeline ggml_vk_guess_matmul_pipeline(ggml_backend_vk_context*, vk_matmul_pipeline&, uint32_t, uint32_t, bool, ggml_type, ggml_type)':
C:/Users/trog/localLlama/llama.cpp/ggml/src/ggml-vulkan/ggml-vulkan.cpp:4209:175: warning: unused parameter 'src1_type' [-Wunused-parameter]
4209 | static vk_pipeline ggml_vk_guess_matmul_pipeline(ggml_backend_vk_context * ctx, vk_matmul_pipeline& mmp, uint32_t m, uint32_t n, bool aligned, ggml_type src0_type, ggml_type src1_type) {
|
~~~~~~~~~~^~~~~~~~~
ninja: build stopped: subcommand failed.
| 2025-04-16T19:39:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0tdg8/anyone_run_into_build_issues_with_the_latest/
|
Deputius
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0tdg8
| false | null |
t3_1k0tdg8
|
/r/LocalLLaMA/comments/1k0tdg8/anyone_run_into_build_issues_with_the_latest/
| false | false |
self
| 2 | null |
A reason why local LLMs are needed.
| 1 | 2025-04-16T19:45:17 |
https://reddit.com/r/ChatGPT/comments/1k0iqdh/i_asked_chatgpt_whats_wrong_with_my_code_and_this/
|
Skodd
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0tirw
| false | null |
t3_1k0tirw
|
/r/LocalLLaMA/comments/1k0tirw/a_reason_why_local_llms_are_needed/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Ep3UWK9hG3X5k5kO9R046Uwrx0fImNHZTOfJx93fI6k', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=108&crop=smart&auto=webp&s=1d6b6bd0a8dd534fbca9ef031ed05380a7bdffcb', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=216&crop=smart&auto=webp&s=2cf88b11362998da7586c8390ccc798714ec07d3', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=320&crop=smart&auto=webp&s=ee9b377dd529a5b19a07e85208d9c24e5b1bf4b1', 'width': 320}, {'height': 553, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=640&crop=smart&auto=webp&s=0de83698ecb81269fb7c4561a047fbc84a4ca3af', 'width': 640}, {'height': 830, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=960&crop=smart&auto=webp&s=35343fb6ecc59afaecfaaa31d65e358716dd8e26', 'width': 960}, {'height': 934, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=1080&crop=smart&auto=webp&s=aaf93cbb637bd751ed2e5b1fb5921a38d9a3e4ab', 'width': 1080}], 'source': {'height': 2596, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?auto=webp&s=5a4e972c1cebe1e7ddeafcc072bad6c1ecb8faed', 'width': 3000}, 'variants': {}}]}
|
||
Massive 5000 tokens per second on 2x3090
| 189 |
For research purposes I need to process huge amounts of data as quickly as possible.
# The model
I tested across models, and Qwen2.5-7B came out as "just good enough". Bigger models are better but slower. The two indicative tests were MMLU-Pro (language understanding) and BBH (a broad set of tasks: https://github.com/google/BIG-bench/blob/main/bigbench/benchmark\_tasks/keywords\_to\_tasks.md#summary-table).
https://preview.redd.it/mcb690qly8ve1.png?width=692&format=png&auto=webp&s=bfc9f267cd65168feae2650b4af56a0c1ac5370f
Intuitively, you can see that the jumps in performance get smaller and smaller the bigger the model gets.
# Processing engine
There will be lots of small queries, so vLLM makes sense, but I used the Aphrodite engine because I also wanted to test speculative decoding.
# Model Quantization
Now, with 2x 3090s there's plenty of VRAM, so there shouldn't be any issue running it; however, I was thinking that quantizing might leave room for a larger KV cache or otherwise increase processing speed. It indeed did. On a test dataset of randomly selected scientific models, these were the results:
|Quantization|Prompt throughput t/s|Generation throughput t/s|
|:-|:-|:-|
|Unquantized|1000|300|
|AWQ / GPTQ|1300|400|
|W4A16-G128 / W8A8|2000|500|
Performance of AWQ / GPTQ and W4A16-G128 was very similar in terms of MMLU & BBH; however, W8A8 was clearly superior (measured with lm\_eval):
`lm_eval --model vllm \`
`--model_args YOUR_MODEL,add_bos_token=true \`
`--tasks TASKHERE \`
`--num_fewshot NSHOT \` (NSHOT = 3 for BBH, 5 for MMLU\_PRO)
`--batch_size 'auto'`
So, I continued with the W8A8
# Speculative Decoding
Unfortunately, 7B has a different tokenizer than the smaller models, so I cannot use 0.5B, 1.5B or 3B as a draft model. Aphrodite supports speculative decoding through ngram, but this roughly halves performance [https://aphrodite.pygmalion.chat/spec-decoding/ngram/](https://aphrodite.pygmalion.chat/spec-decoding/ngram/)
# Final optimizations
Here's the command to run an OpenAI REST API:
`aphrodite run ./Qwen2.5-7B-Instruct_W8A8_custom --port 8000 -tp 2 --max_seq_len 8192 --max_model_len 8192 --max_num_seqs 32 --tensor-parallel-size 2 --gpu-memory-utilization 0.75`
Note the parameter "`max_num_seqs`": this is the number of concurrent requests in a batch, i.e. how many requests the GPU processes at the same time. I did some benchmarking on my test set and got these results:
|max\_num\_seqs|ingest t/s|generate|
|:-|:-|:-|
|64|1000|200|
|32|3000|1000|
|16|2500|750|
They fluctuate, so these are ballpark numbers, but the difference is clear if you run it. I chose 32 and then ran things in "production", keeping the batch full with a small concurrent client (sketched below).
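A minimal version of that client (illustrative, not my exact pipeline; the endpoint and model path match the serve command above, and the prompts are placeholders):

```python
# Keep ~32 requests in flight against the OpenAI-compatible completions endpoint.
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/v1/completions"
MODEL = "./Qwen2.5-7B-Instruct_W8A8_custom"

def run_one(prompt: str) -> str:
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": 1024, "temperature": 0.0}
    r = requests.post(URL, json=payload, timeout=600)
    r.raise_for_status()
    return r.json()["choices"][0]["text"]

prompts = [f"Process document {i}" for i in range(1000)]   # placeholder workload
with ThreadPoolExecutor(max_workers=32) as pool:           # matches max_num_seqs
    results = list(pool.map(run_one, prompts))
```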
# Results
https://preview.redd.it/pe7vam5q29ve1.png?width=725&format=png&auto=webp&s=91cd4c10ab713481d093c43cd83ad4d160be6fa5
4500 t/s ingesting
825 t/s generation
with +- 5k output
| 2025-04-16T19:47:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0tkca/massive_5000_tokens_per_second_on_2x3090/
|
woozzz123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0tkca
| false | null |
t3_1k0tkca
|
/r/LocalLLaMA/comments/1k0tkca/massive_5000_tokens_per_second_on_2x3090/
| false | false | 189 | null |
|
Somebody needs to tell Nvidia to calm down with these new model names.
| 389 | 2025-04-16T20:14:27 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0u8ew
| false | null |
t3_1k0u8ew
|
/r/LocalLLaMA/comments/1k0u8ew/somebody_needs_to_tell_nvidia_to_calm_down_with/
| false | false | 389 |
{'enabled': True, 'images': [{'id': 'iFahkZGXSxbObXV3PDrKsV12kuX3qzjUr7yUspNGOxs', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?width=108&crop=smart&auto=webp&s=1807044be26f1e180d08d3d27c0899ca9b248653', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?width=216&crop=smart&auto=webp&s=7fe6b0e91f01b0f1cda8b130870572c657b9f3f2', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?width=320&crop=smart&auto=webp&s=c83a7d362bb0d3c24a2dcc3932e62e7be7fe2447', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?width=640&crop=smart&auto=webp&s=ba3293f40fb091a49f266882c48318181875c821', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?width=960&crop=smart&auto=webp&s=c3f722587cf090a1cd040038853e31574f6c4fae', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/hl0xrywo89ve1.jpeg?auto=webp&s=aa6706f4320227fc0f50ae1890a5896af4fb0851', 'width': 1024}, 'variants': {}}]}
|
|||
Local semantic memory
| 1 |
[removed]
| 2025-04-16T20:19:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0ucv3/local_semantic_memory/
|
nullprompt_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0ucv3
| false | null |
t3_1k0ucv3
|
/r/LocalLLaMA/comments/1k0ucv3/local_semantic_memory/
| false | false |
self
| 1 | null |
Hackers Hate This Hardware.
| 1 |
[removed]
| 2025-04-16T20:38:41 |
Kook_Ha
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0ut9l
| false | null |
t3_1k0ut9l
|
/r/LocalLLaMA/comments/1k0ut9l/hackers_hate_this_hardware/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'YlcMcVfPVWDotTlJxwF7R3QVRSUXqaEycbvfyI8UnVI', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=108&crop=smart&auto=webp&s=c7b5c953326a5662544a1c363142f992e64e9f75', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=216&crop=smart&auto=webp&s=ddd00b3158d0d397995783cfbc568382ca6e4a78', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=320&crop=smart&auto=webp&s=e04ff3b26893bdf047cf592428a228aee5d5623d', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=640&crop=smart&auto=webp&s=a987b71957d4c933cc8a91565866458ff258f447', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=960&crop=smart&auto=webp&s=67b5ab363238692f770fc7b96af7f0b91b5f9cd5', 'width': 960}, {'height': 719, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?width=1080&crop=smart&auto=webp&s=672404e72938a523ff130e7183b2f83e77360b26', 'width': 1080}], 'source': {'height': 2666, 'url': 'https://preview.redd.it/gj2in40oc9ve1.jpeg?auto=webp&s=5f658b648bf142f5bca751d65d5787237f811bf6', 'width': 4000}, 'variants': {}}]}
|
||
Hackers Hate This Hardware.
| 1 | 2025-04-16T20:41:05 |
https://blog.synergyit.ca/what-is-a-hardware-security-module/
|
Kook_Ha
|
blog.synergyit.ca
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0uve9
| false | null |
t3_1k0uve9
|
/r/LocalLLaMA/comments/1k0uve9/hackers_hate_this_hardware/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'yawsh56RUvP3QVDbAhrzxfLsw667SkuepvfVDx8yLl0', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?width=108&crop=smart&auto=webp&s=28b9629347df190a6110ae7699a9f625840f24f6', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?width=216&crop=smart&auto=webp&s=f8dd806cf75d7b6466cfe63003efe1115c4e01ec', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?width=320&crop=smart&auto=webp&s=e530d60e473200134b1628ca64dfc24c1bf4e0a6', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?width=640&crop=smart&auto=webp&s=7708d8da6e389fb95012be20e8561a2f05cc66c9', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?width=960&crop=smart&auto=webp&s=f6092964e6161226e99efde379fe9d940617236d', 'width': 960}], 'source': {'height': 682, 'url': 'https://external-preview.redd.it/RJn9JH-sQ5kDQ2eBzR54U4UtHAd5RdPOYD0ghfCCHbM.jpg?auto=webp&s=d899f9236a881c8c802c10a0c2f2be5dde047f4f', 'width': 1024}, 'variants': {}}]}
|
||
Need Help Fine-Tuning an LLM for a Customer Support Chatbot , Best Models & Guardrails
| 1 |
[removed]
| 2025-04-16T21:09:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0vjf6/need_help_finetuning_an_llm_for_a_customer/
|
Breathe-Co2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0vjf6
| false | null |
t3_1k0vjf6
|
/r/LocalLLaMA/comments/1k0vjf6/need_help_finetuning_an_llm_for_a_customer/
| false | false |
self
| 1 | null |
DPO for VLM : Performance Improvement guarantees
| 3 |
I have tried many of the existing datasets -- RLAIF, POVID, SILKIE, etc. -- and somehow, even after training on them for 1/2 epochs with beta = 0.1, gamma = 0.1 and so on (nothing out of the ordinary), the improvement is simply not there. No benchmark improvement.
Can people share their experiences if they got it to work?
| 2025-04-16T21:10:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0vkm9/dpo_for_vlm_performance_improvement_guarantees/
|
Temporary-Mixture283
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0vkm9
| false | null |
t3_1k0vkm9
|
/r/LocalLLaMA/comments/1k0vkm9/dpo_for_vlm_performance_improvement_guarantees/
| false | false |
self
| 3 | null |
Windsurf Drops New o4 mini (small - high) at no cost until 21st April!
| 0 | 2025-04-16T21:24:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0vw1x/windsurf_drops_new_o4_mini_small_high_at_no_cost/
|
Individual_Waltz5352
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0vw1x
| false | null |
t3_1k0vw1x
|
/r/LocalLLaMA/comments/1k0vw1x/windsurf_drops_new_o4_mini_small_high_at_no_cost/
| false | false | 0 | null |
||
Is RTX5070 Ti suitable for machine learning?
| 1 |
[removed]
| 2025-04-16T21:28:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0vzub/is_rtx5070_ti_suitable_for_machine_learning/
|
EduardoRStonn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0vzub
| false | null |
t3_1k0vzub
|
/r/LocalLLaMA/comments/1k0vzub/is_rtx5070_ti_suitable_for_machine_learning/
| false | false |
self
| 1 | null |
What is the best option for running eight GPUs in a single motherboard?
| 7 |
TLDR: Can I run 8 GPUs with two 1-to-4 PCIe splitters (with bifurcation) on my ASUS ROG CROSSHAIR VIII DARK HERO and AMD 5950X, or do I need to purchase another motherboard?
\----
Hi everyone,
I recently bought eight AMD MI50 32GB GPUs (total of 256 GB VRAM) for experimenting with 100B+ LLMs. However, I am not sure if my motherboard supports 8 GPUs. My motherboard is ASUS ROG CROSSHAIR VIII DARK HERO. It has three PCIE 4.0 x16 slots, one PCIE4.0 x1, and two M.2 PCIE4.0 x4 slots. The CPU is AMD 5950x which has 24 lanes on the CPU. I have 96GB of RAM.
Currently, both M.2 slots are occupied with NVME storage. I also installed three GPUs on all available three PCIE 4.0 x16 slots. Now, my motherboard BIOS shows each GPU is running at x8, x8 (Both MI50 cards) and x4 (RTX 3090).
My question is: does this motherboard support 8 GPUs at once if I use PCIe splitters (e.g. 1 PCIe slot to 4 PCIe slots)? I see the user manual says the first PCIe 4.0 x16 slot supports PCIe bifurcation with x4+x4+x4+x4 for M.2 cards. But let's say I install a 1-to-4 PCIe splitter on each of the first and second slots, both running at x8. Can I install eight GPUs and run each of them at PCIe 4.0 x2 with bifurcation (I'm not sure if I need to purchase some part other than the 1-to-4 splitter for this)?
If not, what is the alternative? I do not want to buy a server for $1000.
Thanks!
| 2025-04-16T21:37:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0w7f9/what_is_the_best_option_for_running_eight_gpus_in/
|
MLDataScientist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0w7f9
| false | null |
t3_1k0w7f9
|
/r/LocalLLaMA/comments/1k0w7f9/what_is_the_best_option_for_running_eight_gpus_in/
| false | false |
self
| 7 | null |
LocalLLaMA inspired me to contribute to the open source community. Today, I release my first paper, a novel attention mechanism, Context-Aggregated Linear Attention.
| 2 |
[removed]
| 2025-04-16T21:41:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0wb0e/localllama_inspired_me_to_contribute_to_the_open/
|
Megneous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0wb0e
| false | null |
t3_1k0wb0e
|
/r/LocalLLaMA/comments/1k0wb0e/localllama_inspired_me_to_contribute_to_the_open/
| false | false |
self
| 2 | null |
Is RTX5070 Ti suitable for machine learning?
| 1 |
[removed]
| 2025-04-16T22:00:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0wqtc/is_rtx5070_ti_suitable_for_machine_learning/
|
EduardoRStonn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0wqtc
| false | null |
t3_1k0wqtc
|
/r/LocalLLaMA/comments/1k0wqtc/is_rtx5070_ti_suitable_for_machine_learning/
| false | false |
self
| 1 | null |
A fast, native desktop UI for transcribing audio and video using Whisper
| 54 |
Since my last post, I've added several new features such as batch processing (multiple files at once) and more.
**A fast, native desktop UI for transcribing audio and video using Whisper — built entirely in modern C++ and Qt. I’ll be regularly updating it with more features.**
[https://github.com/mehtabmahir/easy-whisper-ui](https://github.com/mehtabmahir/easy-whisper-ui)
# Features
* **Batch processing** — drag in multiple files, select several at once, or use "Open With" on multiple items; they'll run one-by-one automatically.
* **Installer handles everything** — downloads dependencies, compiles and optimizes Whisper for your system.
* **Fully C++ implementation** — no Python, no scripts, no CLI fuss.
* **GPU acceleration via Vulkan** — runs fast on AMD, Intel, or NVIDIA.
* **Drag & drop**, **Open With**, or click "Open File" — multiple ways to load media.
* **Auto-converts** to `.mp3` if needed using FFmpeg.
* **Smart conversion** — skips mp3 conversion if it's already there.
* **Dropdown menus** to pick model (e.g. `tiny`, `medium-en`, `large-v3`) and language (e.g. `en`).
* **Textbox for extra Whisper arguments** if you want advanced control.
* **Auto-downloads missing models** from Hugging Face.
* **Real-time console output** while transcription is running.
* **Transcript opens in Notepad** when finished.
* Choose between `.txt` **or** `.srt` **output** (with timestamps!).
# Requirements
* Windows 10 or later
* AMD, Intel, or NVIDIA Graphics Card with Vulkan support (almost all modern GPUs)
# Setup
1. Download the latest installer from the Releases page.
2. Run the app — that’s it. No terminal, no dependencies, no Python. Just works.
# Credits
* `whisper.cpp` by Georgi Gerganov
* FFmpeg builds by [Gyan.dev](http://Gyan.dev)
* Built with Qt
* Installer created with Inno Setup
If you’ve ever wanted a **simple, native app** for Whisper that runs fast and handles everything for you — give this a try.
Let me know what you think, I’m actively improving it!
| 2025-04-16T22:27:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0xc46/a_fast_native_desktop_ui_for_transcribing_audio/
|
mehtabmahir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0xc46
| false | null |
t3_1k0xc46
|
/r/LocalLLaMA/comments/1k0xc46/a_fast_native_desktop_ui_for_transcribing_audio/
| false | false |
self
| 54 |
{'enabled': False, 'images': [{'id': 'n8851iqu6v2B0xRAZytDgi_iQO6BODGHpxbjyy7-WTE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=108&crop=smart&auto=webp&s=9a6dd2970606b7e265ba1ec6b7cf1f9b0ceacd52', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=216&crop=smart&auto=webp&s=59e045aa7d0d098db27e48368cd5a54084beecfd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=320&crop=smart&auto=webp&s=e106fcbb6173aa4f5984812391366cab3c1bf68e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=640&crop=smart&auto=webp&s=888c61684b1628b8e6113eec392920965fb5ce31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=960&crop=smart&auto=webp&s=765e3e502ef2b6e427541ce1a181c13f23da1913', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?width=1080&crop=smart&auto=webp&s=82d22d1b5a8a581498717bd422d0d6a1b878b02c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wBdGs5uQJ4Pek_699AEW1-w7Mqe7fKjGbb4RAud7IU8.jpg?auto=webp&s=61abf6a2034de9c49bbb5bf092145a48d7fa5fb0', 'width': 1200}, 'variants': {}}]}
|
The Dangers of a Local LLM
| 1 |
[removed]
| 2025-04-16T22:29:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0xdq9/the_dangers_of_a_local_llm/
|
biggJumanji
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0xdq9
| false | null |
t3_1k0xdq9
|
/r/LocalLLaMA/comments/1k0xdq9/the_dangers_of_a_local_llm/
| false | false |
self
| 1 | null |
OpenAI in talks to buy Windsurf for about $3 billion, Bloomberg News reports
| 76 | 2025-04-16T22:48:42 |
https://www.reuters.com/technology/artificial-intelligence/openai-talks-buy-windsurf-about-3-billion-bloomberg-news-reports-2025-04-16/
|
FullstackSensei
|
reuters.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0xszu
| false | null |
t3_1k0xszu
|
/r/LocalLLaMA/comments/1k0xszu/openai_in_talks_to_buy_windsurf_for_about_3/
| false | false | 76 |
{'enabled': False, 'images': [{'id': 'AKuYQpyqJFe6N3SFKysv_CdlrgKM2m3xErJYGulliOA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=108&crop=smart&auto=webp&s=ad7a6885fe3a9da1ece27dca5f6c98a1c6ae434c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=216&crop=smart&auto=webp&s=75d45ec15236ab125927f488c5f221a0a5689d7b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=320&crop=smart&auto=webp&s=a215ef1b9d9188ea666e3304a09be15620035962', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=640&crop=smart&auto=webp&s=6b23e82dd4151b83d586d57c3f35457898dba501', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=960&crop=smart&auto=webp&s=ce54a14439b885a3088798cd75eda9ba24304acb', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?width=1080&crop=smart&auto=webp&s=0a27f83869ab11985db1fe95ccc40036541581e0', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/5O6asQHry3_JNEHfIMxM55mrnO4LZZq2kY7qqwHxrAA.jpg?auto=webp&s=1c2b5213d17cc6025e9c811e08a2c5cbe9c44bcc', 'width': 1920}, 'variants': {}}]}
|
||
This sub inspired me to contribute to the open source community. Today, I release my first paper, a novel attention mechanism, Context-Aggregated Linear Attention.
| 0 |
[removed]
| 2025-04-16T22:52:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0xvmn/this_sub_inspired_me_to_contribute_to_the_open/
|
Megneous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0xvmn
| false | null |
t3_1k0xvmn
|
/r/LocalLLaMA/comments/1k0xvmn/this_sub_inspired_me_to_contribute_to_the_open/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'NsOqX9mdlxyupf7E8BPRgwSuy91asMGAbfKGHoFPKMk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=108&crop=smart&auto=webp&s=eaa0adccfd1c73ff97b8125794e55b784cb2d0af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=216&crop=smart&auto=webp&s=39809e834b61a4e3a5dbee0c148f68be338b197f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=320&crop=smart&auto=webp&s=7ec011c86043a5b487d2e9576684f07a0e177775', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=640&crop=smart&auto=webp&s=faa112f2f6d5db729cddb79e0024457a9db402dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=960&crop=smart&auto=webp&s=be5a3b9ee1ca5b9e01fc6b56606488d2f8e356dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?width=1080&crop=smart&auto=webp&s=821791082f99001247e622983fe4c1a870a40a34', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yIc61XmsPqdJ02d1eyWbLo9h4fZ3ORdzypEFu1tSkN4.jpg?auto=webp&s=098f9cf0f5d1e1d0050be7291a259b75433a75ee', 'width': 1200}, 'variants': {}}]}
|
I think I broke the AI by being honest with it
| 1 |
[removed]
| 2025-04-16T23:09:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0y920/i_think_i_broke_the_ai_by_being_honest_with_it/
|
Initial_Pay_4110
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0y920
| false | null |
t3_1k0y920
|
/r/LocalLLaMA/comments/1k0y920/i_think_i_broke_the_ai_by_being_honest_with_it/
| false | false |
self
| 1 | null |
How to improve RAG search results ? Tips and Tricks ?
| 7 |
I can't make sense of how embeddings are computed. I most often get random results; a friend told me to put everything into a high-context-window LLM and get rid of the RAG, but I don't understand how that would improve the results.
I am trying to write an AI agent for Terraform, mostly to allow the team to change some values in the codebase and get information from the state straight through the Chat Interface.
I did what most AI code tools are claiming to do:
\- Parse the codebase using terraform parsing (treesitter does not work for me in this case)
\- Generate plain english description of the code
\- Computing the embeddings for the description
\- Storing the embeddings in a Vector Database
\- Searching through the embeddings by either embedding the prompt or embedding a hallucinated answer.
The issue is that my search results are RANDOM and REALLY IRRELEVANT. I tried to lower the entropy, thinking that embeddings store the information in different parts of the text (length, wording, tone, etc...), but my results are still irrelevant. For example, if I search for the provider version, it appears 26th, and the 25 answers ranked above it are usually all the same.
I'd love to get any relevant information on embeddings that would explain how embeddings are computed with an LLM.
The setup:
\- I am using CodeQwen to generate the embeddings locally hosted through vllm
\- I store the embeddings in SurrealDB
\- I search using cosine distance
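For reference, the retrieval step itself is nothing exotic. A simplified sketch of what I do (illustrative only; `embed()` stands in for the CodeQwen call served through vllm, and the vectors normally live in SurrealDB rather than in memory):

```python
# Rank stored description vectors against a query (or hallucinated-answer) vector
# by cosine similarity -- equivalent to cosine distance, just sorted the other way.
import numpy as np

def cosine_search(query_vec: np.ndarray, doc_vecs: np.ndarray, top_k: int = 5):
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                        # higher score = closer match
    top = np.argsort(-scores)[:top_k]
    return top, scores[top]

# query_vec = embed("which provider versions are pinned?")   # placeholder embed()
# idx, scores = cosine_search(query_vec, np.stack(all_description_vectors))
```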
| 2025-04-16T23:28:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0yna0/how_to_improve_rag_search_results_tips_and_tricks/
|
MoiSanh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0yna0
| false | null |
t3_1k0yna0
|
/r/LocalLLaMA/comments/1k0yna0/how_to_improve_rag_search_results_tips_and_tricks/
| false | false |
self
| 7 | null |
do ollama vision models have super powers to be able to read filesystem
| 1 |
[removed]
| 2025-04-16T23:32:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0yqe5/do_ollama_vision_models_have_super_powers_to_be/
|
Limp-Grape-4821
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0yqe5
| false | null |
t3_1k0yqe5
|
/r/LocalLLaMA/comments/1k0yqe5/do_ollama_vision_models_have_super_powers_to_be/
| false | false |
self
| 1 | null |
Forget DeepSeek R2 or Qwen 3, Llama 2 is clearly our local savior.
| 269 |
No, this is not edited and it is from Artificial Analysis
| 2025-04-16T23:47:00 |
Cameo10
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0z1bk
| false | null |
t3_1k0z1bk
|
/r/LocalLLaMA/comments/1k0z1bk/forget_deepseek_r2_or_qwen_3_llama_2_is_clearly/
| false | false | 269 |
{'enabled': True, 'images': [{'id': '7Gtdr32cr4p9G8Q4deafVZpgdybKN2IpuusCELhwYaw', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/2668luheaave1.png?width=108&crop=smart&auto=webp&s=c72f1249d8ec202665fdcceafe13f0f6de55e5ed', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/2668luheaave1.png?width=216&crop=smart&auto=webp&s=b07829624edb0e690314d9e56cc4403cdab6be16', 'width': 216}, {'height': 139, 'url': 'https://preview.redd.it/2668luheaave1.png?width=320&crop=smart&auto=webp&s=9a2743504f68694a1ded16a39c0fda1d6f5bcd63', 'width': 320}, {'height': 279, 'url': 'https://preview.redd.it/2668luheaave1.png?width=640&crop=smart&auto=webp&s=84f0758fa3651c39cecde93e9b2a9cb77dacb62f', 'width': 640}], 'source': {'height': 333, 'url': 'https://preview.redd.it/2668luheaave1.png?auto=webp&s=13a19d325b786d3188374412508ce760ed6f38b3', 'width': 762}, 'variants': {}}]}
|
||
Open models thoughts
| 0 |
Okay folks, after watching the recent o3 and o4-mini demos, it's clear that the gap between open-source coding models and those gated behind APIs is beyond imagination.
What are your predictions on when we’ll see a serious quality jump in open models? Something around the same size as Qwen-Coder-2.5-32B or QwQ-32B. Are Chinese AI labs our last real hope for this? Or will sama pass us the flame torch this week (a la Prometheus)?
| 2025-04-17T00:24:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0zskh/open_models_thoughts/
|
sp4_dayz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0zskh
| false | null |
t3_1k0zskh
|
/r/LocalLLaMA/comments/1k0zskh/open_models_thoughts/
| false | false |
self
| 0 | null |
XTC in Lmstudio
| 1 |
Can you use XTC in LMStudio? What version? How? Thank you.
| 2025-04-17T00:35:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k100jc/xtc_in_lmstudio/
|
Royal_Light_9921
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k100jc
| false | null |
t3_1k100jc
|
/r/LocalLLaMA/comments/1k100jc/xtc_in_lmstudio/
| false | false |
self
| 1 | null |
Is Codex the "open source" thing OAI was touting all month? This can't be it, right?
| 0 |
https://github.com/openai/codex
sauce for those who don't know.
| 2025-04-17T00:37:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k101rd/is_codex_the_open_source_thing_oai_was_touting/
|
Neither-Phone-7264
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k101rd
| false | null |
t3_1k101rd
|
/r/LocalLLaMA/comments/1k101rd/is_codex_the_open_source_thing_oai_was_touting/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'c5yMk06ALmin9Id902Wv4x94aNFWRkxA1UNfUEWDji0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=108&crop=smart&auto=webp&s=b7d6ddeaed3541bdd3e78480c53a60c5560481ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=216&crop=smart&auto=webp&s=5d24ac2c1f9143346623c906a3bae09248d98b33', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=320&crop=smart&auto=webp&s=f20c21232d3e8fc8048c5de27df89ec890b9958d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=640&crop=smart&auto=webp&s=9814fd7577317ca58f6bc696ee800e0ebe489eab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=960&crop=smart&auto=webp&s=a2bfd2324b8c56d376e36095540cf9b877f7d345', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?width=1080&crop=smart&auto=webp&s=cdc96710598afb45c4842c2e1c5a8dc430fa617e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L2s8FUcxTxmbnY9A3xFNDqLsOD-NZikx1UTncO36YW4.jpg?auto=webp&s=c3ea1a9c29dbfd5d96a2d376d796c4a7d3366475', 'width': 1200}, 'variants': {}}]}
|
Looking for a Recommended Uncensored LLM Model
| 1 |
[removed]
| 2025-04-17T01:12:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k10qqy/looking_for_a_recommended_uncensored_llm_model/
|
Sure-Investigator824
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k10qqy
| false | null |
t3_1k10qqy
|
/r/LocalLLaMA/comments/1k10qqy/looking_for_a_recommended_uncensored_llm_model/
| false | false |
self
| 1 | null |
Tried OpenAI Codex and it sucked 👎
| 23 |
OpenAI released today the Claude Code competitor, called Codex (will add link in comments).
Just tried it, but it failed miserably at a simple task: first it was not even able to detect the language the codebase was in, and then it failed because the context window was exceeded.
Has anyone tried it? Results?
Looks promising mainly because the code is open source, unlike Anthropic's Claude Code.
| 2025-04-17T01:13:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k10rtg/tried_openai_codex_and_it_sucked/
|
itzco1993
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k10rtg
| false | null |
t3_1k10rtg
|
/r/LocalLLaMA/comments/1k10rtg/tried_openai_codex_and_it_sucked/
| false | false |
self
| 23 | null |
ExLlamaV2 + Gemma3
| 0 |
Has anyone gotten Gemma3 to run on ExllamaV2? It seems the config.json/architecture isn't supported in ExLlamaV2. This kinda makes sense as this is a relatively new model and work from turboderp is now focused on ExLlamaV3. Wondering if there's a community solution/fork somewhere which integrates this? I am able to run gemma3 w/o issue on Ollama, and many other models on ExLlamaV2 (permutations of Llama & Qwen). If anyone has set this up before could you point me to resources detailing required modifications? P.S. I'm new to the space, so apologies if this is something obvious.
| 2025-04-17T01:17:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k10uqt/exllamav2_gemma3/
|
solo_patch20
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k10uqt
| false | null |
t3_1k10uqt
|
/r/LocalLLaMA/comments/1k10uqt/exllamav2_gemma3/
| false | false |
self
| 0 | null |
Honest thoughts on the OpenAI release
| 376 |
Okay bring it on
o3 and o4-mini:
\- We all know full well from plenty of open-source research (like DeepSeekMath and DeepSeek-R1) that if you keep scaling up the RL, it gets better -> OpenAI just scales it up and sells an API. There are a few differences, but how much better can it get?
\- More compute, more performance, well, well, **more tokens?**
codex?
\- Github copilot used to be codex
\- Acting like there are not already a **ton of things out there: Cline, RooCode, Cursor, Windsurf,...**
Worst of all they **are hyping up the community, the open source, local, community, for their commercial interest,** throwing out vague information about Open and Mug of OpenAI on ollama account etc...
Talking about 4.1 ? coding halulu, delulu yes benchmark is good.
Yeah that's my rant, downvote me if you want. I have been in this thing since 2023, and I find it more and more annoying following these news. It's misleading, it's boring, it has nothing for us to learn about, it has nothing for us to do except for paying for their APIs and maybe contributing to their open source client, which they are doing because they know there is no point just close source software.
This is a pointless and sad development for the AI community and AI companies in general. We could be so much better and so much more, accelerating so quickly; instead, here we are, paying for one more token and learning nothing **(if you can even call scaling RL, which we all already know about, LEARNING AT ALL).**
| 2025-04-17T01:22:49 |
Kooky-Somewhere-2883
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k10yak
| false | null |
t3_1k10yak
|
/r/LocalLLaMA/comments/1k10yak/honest_thoughts_on_the_openai_release/
| false | false | 376 |
{'enabled': True, 'images': [{'id': '_yM_4DvSXETb2ro8kSNnfopf6oDc50WjWVEiftHrA0s', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=108&crop=smart&format=png8&s=34f2a975d5a2d1a7dd83e69b2a192105d7da0d41', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=216&crop=smart&format=png8&s=faf6a99f42131427d71e482b2a5a2a96e4c4be15', 'width': 216}], 'source': {'height': 293, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?format=png8&s=65aab845ac405a20f77a5a597d79c469341b9b9c', 'width': 220}, 'variants': {'gif': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=108&crop=smart&s=f904f93be52d31df2ac7e7bcb91fdce635dca888', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=216&crop=smart&s=01f6b7c9517ca3a073290dfd9cde5f09acbb00d9', 'width': 216}], 'source': {'height': 293, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?s=bbbfc6bae0c0e296f7ddf9f0f85094e325dd7db2', 'width': 220}}, 'mp4': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=108&format=mp4&s=cd700c289872b253d33dde39a9a4b142e2a1e7c5', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?width=216&format=mp4&s=e6c1601e788b98bed7d72afbf4c3bd8566548443', 'width': 216}], 'source': {'height': 293, 'url': 'https://preview.redd.it/inywb6g3rave1.gif?format=mp4&s=8703c9ea650e6494280668a598cdcb458eded66c', 'width': 220}}}}]}
|
||
llama with search?
| 0 |
How exactly do I give Llama or any other local LLM the power to search and browse the internet? Something like what ChatGPT Search does. TIA
| 2025-04-17T01:41:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k11bdh/llama_with_search/
|
IntelligentAirport26
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k11bdh
| false | null |
t3_1k11bdh
|
/r/LocalLLaMA/comments/1k11bdh/llama_with_search/
| false | false |
self
| 0 | null |
BR
| 1 |
[removed]
| 2025-04-17T02:20:38 |
https://review-us.top/85764032883501/?s=rd
|
ConfidentAnything684
|
review-us.top
| 1970-01-01T00:00:00 | 0 |
{}
|
1k122dx
| false | null |
t3_1k122dx
|
/r/LocalLLaMA/comments/1k122dx/br/
| false | false |
default
| 1 | null |
Trump administration reportedly considers a US DeepSeek ban
| 502 |
[https://techcrunch.com/2025/04/16/trump-administration-reportedly-considers-a-us-deepseek-ban/](https://techcrunch.com/2025/04/16/trump-administration-reportedly-considers-a-us-deepseek-ban/)
Washington Takes Aim at DeepSeek and Its American Chip Supplier, Nvidia: [https://www.nytimes.com/2025/04/16/technology/nvidia-deepseek-china-ai-trump.html](https://www.nytimes.com/2025/04/16/technology/nvidia-deepseek-china-ai-trump.html)
| 2025-04-17T02:44:14 |
Nunki08
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k12i6l
| false | null |
t3_1k12i6l
|
/r/LocalLLaMA/comments/1k12i6l/trump_administration_reportedly_considers_a_us/
| false | false | 502 |
{'enabled': True, 'images': [{'id': 'zv3ZV8XeQi8PXZhNO-M1AfbYyB4trMSXPcdNTlNgPH8', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=108&crop=smart&auto=webp&s=da1faca69b32f372563ec3bba4fd65576fd26049', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=216&crop=smart&auto=webp&s=924495f2234360073e80ddebbd1debb0409be49b', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=320&crop=smart&auto=webp&s=26364fb6f54f69287e29cbc132541146f87ad4e7', 'width': 320}, {'height': 651, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=640&crop=smart&auto=webp&s=027d4187d5af43a99f8442134a91d40393c2dc07', 'width': 640}, {'height': 976, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=960&crop=smart&auto=webp&s=0030bb79018de615364f9acc28cac4ac2e7f99ab', 'width': 960}, {'height': 1099, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?width=1080&crop=smart&auto=webp&s=624d6c823cd7abba66304679c685465925d91865', 'width': 1080}], 'source': {'height': 1265, 'url': 'https://preview.redd.it/80uc8c906bve1.jpeg?auto=webp&s=027d3dd509531641b653c240861513bdf942a9aa', 'width': 1243}, 'variants': {}}]}
|
||
When a coding question triggers propaganda
| 1 | 2025-04-17T02:44:34 |
https://reddit.com/r/ChatGPT/comments/1k0iqdh/i_asked_chatgpt_whats_wrong_with_my_code_and_this/
|
Skodd
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k12if0
| false | null |
t3_1k12if0
|
/r/LocalLLaMA/comments/1k12if0/when_a_coding_question_triggers_propaganda/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Ep3UWK9hG3X5k5kO9R046Uwrx0fImNHZTOfJx93fI6k', 'resolutions': [{'height': 93, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=108&crop=smart&auto=webp&s=1d6b6bd0a8dd534fbca9ef031ed05380a7bdffcb', 'width': 108}, {'height': 186, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=216&crop=smart&auto=webp&s=2cf88b11362998da7586c8390ccc798714ec07d3', 'width': 216}, {'height': 276, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=320&crop=smart&auto=webp&s=ee9b377dd529a5b19a07e85208d9c24e5b1bf4b1', 'width': 320}, {'height': 553, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=640&crop=smart&auto=webp&s=0de83698ecb81269fb7c4561a047fbc84a4ca3af', 'width': 640}, {'height': 830, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=960&crop=smart&auto=webp&s=35343fb6ecc59afaecfaaa31d65e358716dd8e26', 'width': 960}, {'height': 934, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?width=1080&crop=smart&auto=webp&s=aaf93cbb637bd751ed2e5b1fb5921a38d9a3e4ab', 'width': 1080}], 'source': {'height': 2596, 'url': 'https://external-preview.redd.it/Ytja-uZpVqJvkmw2GR8rc79-btwMPd_TnYpSqorEDhA.jpg?auto=webp&s=5a4e972c1cebe1e7ddeafcc072bad6c1ecb8faed', 'width': 3000}, 'variants': {}}]}
|
||
New restrictions for deepseek are coming
| 0 | 2025-04-17T02:46:04 |
https://selectcommitteeontheccp.house.gov/media/press-releases/moolenaar-krishnamoorthi-unveil-explosive-report-chinese-ai-firm-deepseek
|
rushpu007
|
selectcommitteeontheccp.house.gov
| 1970-01-01T00:00:00 | 0 |
{}
|
1k12jev
| false | null |
t3_1k12jev
|
/r/LocalLLaMA/comments/1k12jev/new_restrictions_for_deepseek_are_coming/
| false | false | 0 |
{'enabled': False, 'images': [{'id': '8eoA3qGfVXBETkIXAHRFxStL3lDDsydr_zgfXG4WlK0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0RqK2ZB7wYcNiw1NTOKflZ9vaEf34dZhdxx3_Rw9gT8.jpg?width=108&crop=smart&auto=webp&s=7a7f1ded148ba8e49a0a6b2dbca435bb4e62b3a3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0RqK2ZB7wYcNiw1NTOKflZ9vaEf34dZhdxx3_Rw9gT8.jpg?width=216&crop=smart&auto=webp&s=5ae3b996643f5e6b187fc1c6aef0ebbaaa3655e4', 'width': 216}], 'source': {'height': 244, 'url': 'https://external-preview.redd.it/0RqK2ZB7wYcNiw1NTOKflZ9vaEf34dZhdxx3_Rw9gT8.jpg?auto=webp&s=ace6de974a813cf48dae4f59866b0363438d2588', 'width': 244}, 'variants': {}}]}
|
||
Lyra2, 4090 persistent memory model now up on github
| 4 |
[https://github.com/pastorjeff1/Lyra2](https://github.com/pastorjeff1/Lyra2)
| 2025-04-17T02:54:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k12oy6/lyra2_4090_persistent_memory_model_now_up_on/
|
Evening-Active1768
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k12oy6
| false | null |
t3_1k12oy6
|
/r/LocalLLaMA/comments/1k12oy6/lyra2_4090_persistent_memory_model_now_up_on/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'khQjx8SmZNDZsUKx8sdQ6GK38pM_s_9TOqXeaWivIDo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=108&crop=smart&auto=webp&s=e7ee7fae050411d887eca6a56e879d08d95521bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=216&crop=smart&auto=webp&s=f3d65ffd13019b201bac2c7e4ec8afcb832fbf8e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=320&crop=smart&auto=webp&s=106ca0cf890293f26086afd5ef319f68b56c6424', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=640&crop=smart&auto=webp&s=826660ac80711ffc97c17f2f1a7f2e9acd4457ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=960&crop=smart&auto=webp&s=8c9147d587c0068ae16cdc78c3a215e8e69a78bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?width=1080&crop=smart&auto=webp&s=edd2b0331ad9b5d79fe7c48fc4f24b2c63c183f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aWGoBx1z11CS4PcBy86TqN5lsBH9qoOJpG8nc5eIKes.jpg?auto=webp&s=142fe857cb4d68d9bf3f7fabeaab5e0ba85fd155', 'width': 1200}, 'variants': {}}]}
|
How to figure out which model can run on my 16GB 4080super. I am new to local LLM
| 2 |
I have tried running a few models which are lower-quant versions, but I feel I should be able to run some Q8 versions too. Can I fit bigger models in 16GB by using RAM to swap blocks, or something split between RAM and VRAM, like how it happens with image models in ComfyUI (SDXL etc.)? Is there a similar thing possible here which could allow me to run Qwen 32B etc. on 16GB VRAM?
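llama.cpp-based runners can split a model between VRAM and system RAM by offloading only some layers to the GPU, which is the closest analogue to ComfyUI-style block swapping (it works, but the CPU-side layers slow generation down). A minimal sketch with llama-cpp-python; the GGUF filename and layer count are assumptions you would tune for a 16GB card:

```python
# Minimal sketch: partial GPU offload with llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename and n_gpu_layers value are assumptions; lower n_gpu_layers until the
# model stops overflowing 16 GB of VRAM, and the remaining layers run from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=40,  # layers kept in VRAM; the rest stay in RAM (slower, but it fits)
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```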
| 2025-04-17T03:03:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k12v70/how_to_figure_out_which_model_can_run_on_my_16gb/
|
Titanusgamer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k12v70
| false | null |
t3_1k12v70
|
/r/LocalLLaMA/comments/1k12v70/how_to_figure_out_which_model_can_run_on_my_16gb/
| false | false |
self
| 2 | null |
O3 is defo state of the worse
| 0 | 2025-04-17T03:11:40 |
lordchickenburger
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k130g1
| false | null |
t3_1k130g1
|
/r/LocalLLaMA/comments/1k130g1/o3_is_defo_state_of_the_worse/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'HdNR2tRkf3DYFj6PwVKJSnojLwuN-QBMozbwme_39A8', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=108&crop=smart&auto=webp&s=4c8907b987344a21b1bd274df498a8639c69026c', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=216&crop=smart&auto=webp&s=7d5a14deb97617f60a72b89b3242eaac1e8f363a', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=320&crop=smart&auto=webp&s=53969f03312fa2da4a2940d4feb6fecd817d3ce5', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=640&crop=smart&auto=webp&s=90446d373a812911113027b37743d6944455d267', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=960&crop=smart&auto=webp&s=359211a7775ff1132d524c42c3a8b54bb2645c22', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?width=1080&crop=smart&auto=webp&s=90554a80a48e7d7c4bd8c7b4dcff2ab62c2fe621', 'width': 1080}], 'source': {'height': 761, 'url': 'https://preview.redd.it/2vly2qv2bbve1.png?auto=webp&s=88fb1e8c39a511130500b6e7f5de95e537703b87', 'width': 1793}, 'variants': {}}]}
|
|||
Why does Humane still keeps hiring for on-device AI roles even after shutting down their AI Pin business?
| 1 | 2025-04-17T03:17:51 |
WordyBug
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k134co
| false | null |
t3_1k134co
|
/r/LocalLLaMA/comments/1k134co/why_does_humane_still_keeps_hiring_for_ondevice/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Z4SM8wlE30L3HhAT3Ed5w2NvExY1qW55H0nGph8MxVY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=108&crop=smart&auto=webp&s=06e3a34f5d8dfb988d1e49b9100d08ed14ef6fdc', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=216&crop=smart&auto=webp&s=906cb4b742dc77428f6b6d492ab710d2ab757b4f', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=320&crop=smart&auto=webp&s=d1f0b088b02cba2eaae9ecd17ba5590fa8e669f0', 'width': 320}, {'height': 484, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=640&crop=smart&auto=webp&s=ed4696650f5751025b4e77083427502ca2cceac7', 'width': 640}, {'height': 726, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=960&crop=smart&auto=webp&s=9d9bd5c465aeb4fb1dde06324b835b67fddc6592', 'width': 960}, {'height': 817, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?width=1080&crop=smart&auto=webp&s=275e0f0ec912aa7faeadfb5bd1c5aa1ef16b61d9', 'width': 1080}], 'source': {'height': 1212, 'url': 'https://preview.redd.it/ttvzlpsobbve1.png?auto=webp&s=b9f09299d2ba115f2cc05fc64a2cddcc10af10bb', 'width': 1602}, 'variants': {}}]}
|
|||
Which OLLAMA model best fits my Ryzen 5 5600G system for local LLM development?
| 2 |
Hi everyone,
I’ve got a local dev box with:
OS: Linux 5.15.0-130-generic
CPU: AMD Ryzen 5 5600G (12 threads)
RAM: 48 GiB total
Disk: 1 TB NVME + 1 Old HDD
GPU: AMD Radeon (no NVIDIA/CUDA)
I have ollama installed
and currently I have 2 local LLMs installed:
deepseek-r1:1.5b & llama2:7b (3.8G)
I’m already running llama2:7B (Q4\_0, \~3.8 GiB model) at \~50% CPU load per prompt, which works well, but it's not smart enough and I want a smarter model than this. I’m building a VS Code extension that embeds a local LLM; the extension has manual context capabilities, and I'm working on enhanced context, MCP, a basic agentic mode, etc. I need a model that:
* Fits comfortably in RAM
* Maximizes inference speed on 12 cores (no GPU/CUDA)
* Yields strong conversational accuracy
Given my specs and limited bandwidth (one download only), which OLLAMA model (and quantization) would you recommend?
Please let me know any additional info needed.
**TLDR;**
**As per my findings, these are the main candidates (some parts are AI-suggested based on my specs):**
* Qwen2.5-Coder 32B Instruct with Q8\_0 quantization seems to be the best model (I can't confirm this; it's just what my findings suggest, and I'm not sure)
* Models like Gemma 3 27B or Mistral Small 3.1 24B are alternatives, but Qwen2.5-Coder excels (again, from my findings; not confirmed, and I'm not sure)
Memory and Model Size Constraints
The memory requirement for LLMs is primarily driven by the model’s parameter count and quantization level. For a 7B model like LLaMA 2:7B, your current 3.8GB usage suggests a 4-bit quantization (approximately 3.5GB for 7B parameters at 4 bits, plus overhead). General guidelines from Ollama GitHub indicate 8GB RAM for 7B models, 16GB for 13B, and 32GB for 33B models, suggesting you can handle up to 33B parameters with your 37Gi (39.7GB) available RAM. However, larger models like 70B typically require 64GB.
Model Options and Quantization
* LLaMA 3.1 8B: Q8\_0 at 8.54GB
* Gemma 3 27B: Q8\_0 at 28.71GB, Q4\_K\_M at 16.55GB
* Mistral Small 3.1 24B: Q8\_0 at 25.05GB, Q4\_K\_M at 14.33GB
* Qwen2.5-Coder 32B: Q8\_0 at 34.82GB, Q6\_K at 26.89GB, Q4\_K\_M at 19.85GB
***Given your RAM, models up to 34.82GB (Qwen2.5-Coder 32B Q8\_0) are feasible (AI Generated)***
|Model|Parameters|Q8\_0 Size (GB)|Coding Focus|General Capabilities|Notes|
|:-|:-|:-|:-|:-|:-|
|LLaMA 3.1 8B|8B|8.54|Moderate|Strong|General purpose, smaller, good for baseline.|
|Gemma 3 27B|27B|28.71|Good|Excellent, multimodal|Supports text and images, strong reasoning, fits RAM.|
|Mistral Small 3.1 24B|24B|25.05|Very Good|Excellent, fast|Low latency, competitive with larger models, fits RAM.|
|Qwen2.5-Coder 32B|32B|34.82|Excellent|Strong|SOTA for coding, matches GPT-4o, ideal for VS Code extension, fits RAM.|
I have also checked:
* [https://aider.chat/docs/leaderboards/](https://aider.chat/docs/leaderboards/) (didn't understand since it's showing cost & accuracy, but I need cpu, ram etc usage & accuracy)
* [https://llm-stats.com/models/compare](https://llm-stats.com/models/compare) (mostly large models)
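A rough way to sanity-check the sizes listed above: estimated size ≈ parameters × bits-per-weight ÷ 8, plus a little headroom for the KV cache and the OS. A minimal sketch; the bits-per-weight figures are rough assumptions for GGUF quants, not exact values:

```python
# Back-of-envelope RAM estimate: size ~= parameters * bits_per_weight / 8, plus overhead.
# The bits-per-weight figures are rough assumptions for GGUF quants, not exact values.
BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.85}

def est_size_gb(params_billion: float, quant: str) -> float:
    """Approximate on-disk / in-RAM size of a quantized model in GB."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

ram_gb = 39.7  # available RAM reported by `free -h`
models = [("LLaMA 3.1 8B", 8), ("Mistral Small 3.1 24B", 24),
          ("Gemma 3 27B", 27), ("Qwen2.5-Coder 32B", 32.8)]
for name, params in models:
    for quant in ("Q8_0", "Q4_K_M"):
        size = est_size_gb(params, quant)
        fits = "fits" if size + 2 < ram_gb else "too big"  # ~2 GB headroom for KV cache / OS
        print(f"{name:>22} {quant:<7} ~{size:5.1f} GB -> {fits}")
```

By this estimate Qwen2.5-Coder 32B fits at Q8\_0 but leaves only a few GB of headroom, which matches the table above.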
| 2025-04-17T03:57:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k13tkb/which_ollama_model_best_fits_my_ryzen_5_5600g/
|
InsideResolve4517
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k13tkb
| false | null |
t3_1k13tkb
|
/r/LocalLLaMA/comments/1k13tkb/which_ollama_model_best_fits_my_ryzen_5_5600g/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=216&crop=smart&auto=webp&s=1ef479418e186a2dd315fedc3d887521b18eec4f', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=320&crop=smart&auto=webp&s=c2bc26b548af493526b9116d26a9b305f03b1f83', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=640&crop=smart&auto=webp&s=8a4c25f54ed06b5f744ff2faad7914958769cc14', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=960&crop=smart&auto=webp&s=806c4055b855fdf17a97308fb5b399d3b773cef9', 'width': 960}, {'height': 623, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=1080&crop=smart&auto=webp&s=9f3cf9efdcefc9b636c507255c2e656d91fbb4a6', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?auto=webp&s=286f8619e702be481dea1a349a13ce7eb7a1eb9e', 'width': 1768}, 'variants': {}}]}
|
[2504.12285] BitNet b1.58 2B4T Technical Report
| 47 |
### Abstract
>We introduce BitNet b1.58 2B4T, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale. Trained on a corpus of 4 trillion tokens, the model has been rigorously evaluated across benchmarks covering language understanding, mathematical reasoning, coding proficiency, and conversational ability. Our results demonstrate that BitNet b1.58 2B4T achieves performance on par with leading open-weight, full-precision LLMs of similar size, while offering significant advantages in computational efficiency, including substantially reduced memory footprint, energy consumption, and decoding latency. To facilitate further research and adoption, the model weights are released via Hugging Face along with open-source inference implementations for both GPU and CPU architectures.
### Notables:
- They used activation functions that are compatible with activation sparsity, which means a more efficient version can be created with this base in the future.
- trained on publicly available data (Not Phi's proprietary dataset.)
- GPU implementation: (Ladder/Bitblas) https://github.com/microsoft/BitBLAS
>BitNet b1.58 2B4T employs squared ReLU. This choice is motivated by its potential to improve model sparsity and computational characteristics within the 1-bit context: [BitNet a4.8: 4-bit Activations for 1-bit LLMs](https://arxiv.org/abs/2411.04965)
>The pre-training corpus comprised a mixture of publicly available text and code datasets, including large web crawls like DCLM (Li et al., 2024b) and educational web pages like FineWeb-EDU (Penedo et al., 2024). To enhance mathematical reasoning abilities, we also incorporated synthetically generated mathematical data. The data presentation strategy aligned with the two-stage training: the bulk of general web data was processed during Stage 1, while higher-quality curated datasets were emphasized during the Stage 2 cooldown phase, coinciding with the reduced learning rate.
>The SFT phase utilized a diverse collection of publicly available instruction-following and conversational datasets. These included, but were not limited to, WildChat (Zhao et al., 2024), LMSYS-Chat-1M (Zheng et al., 2024), WizardLM Evol-Instruct (Xu et al., 2024a), and SlimOrca.
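For readers unfamiliar with the two ingredients mentioned above, here is a minimal PyTorch sketch of squared ReLU and absmean ternary ({-1, 0, +1}) weight quantization as described in the BitNet b1.58 papers; it is an illustration of the math only, not Microsoft's reference implementation (see the BitBLAS link above for the actual kernels).

```python
# Minimal sketch of two BitNet b1.58 ingredients; illustrative only, not the reference code.
import torch

def squared_relu(x: torch.Tensor) -> torch.Tensor:
    # ReLU^2 activation: zero for negatives, x^2 for positives (encourages sparsity).
    return torch.clamp(x, min=0) ** 2

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5):
    # Absmean quantization: scale by mean |w|, then round and clip to {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # dequantize as w_q * scale

w = torch.randn(4, 4)
w_q, s = absmean_ternary(w)
print(w_q)                          # ternary weights
print(squared_relu(torch.randn(5)))
```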
| 2025-04-17T03:57:28 |
https://arxiv.org/abs/2504.12285
|
Aaaaaaaaaeeeee
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1k13tui
| false | null |
t3_1k13tui
|
/r/LocalLLaMA/comments/1k13tui/250412285_bitnet_b158_2b4t_technical_report/
| false | false |
default
| 47 | null |
Fun fact: Google also has a project called Codex
| 30 |
[https://github.com/google/codex](https://github.com/google/codex)
but it's for DNN-based data compression.
| 2025-04-17T03:57:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1k13tz6/fun_fact_google_also_has_a_project_called_codex/
|
Cheap_Ship6400
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k13tz6
| false | null |
t3_1k13tz6
|
/r/LocalLLaMA/comments/1k13tz6/fun_fact_google_also_has_a_project_called_codex/
| false | false |
self
| 30 |
{'enabled': False, 'images': [{'id': 'jzocPbCM67cQetJ7uQggi5MFju3rVCkZkHjkWwsHMk4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=108&crop=smart&auto=webp&s=1ba40a356029de72d9193d33458501d01d3d3468', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=216&crop=smart&auto=webp&s=6f12f0cef4a066e344ec3b2ffd69e3cc3f4b0b69', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=320&crop=smart&auto=webp&s=c9a2dee7d094e672dbc22045292bf99af7e0a823', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=640&crop=smart&auto=webp&s=fbaa3050555071165d370be9b174fe744d23a07f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=960&crop=smart&auto=webp&s=34fa9cd6f2c2e0a1245a1d01096372ca455aaee3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?width=1080&crop=smart&auto=webp&s=f3e05d6d4cf9f8564ee12347d79db8b1ca1aabc9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JVbacXw51A-AqzUwrHMNY1Wf4oQzMF2hnwx6dFd86_A.jpg?auto=webp&s=c2ac45785b18f88bf8c8cee645690013f00d689a', 'width': 1200}, 'variants': {}}]}
|
What is the latest gossip on a Qwen 3 release date?
| 48 |
I am suffering from the wait.
| 2025-04-17T04:11:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k142n9/what_is_the_latest_gossip_on_a_qwen_3_release_date/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k142n9
| false | null |
t3_1k142n9
|
/r/LocalLLaMA/comments/1k142n9/what_is_the_latest_gossip_on_a_qwen_3_release_date/
| false | false |
self
| 48 | null |
I want to make NSFW porn images
| 1 |
[removed]
| 2025-04-17T04:18:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1474b/i_want_to_make_nsfw_porn_images/
|
BugDowntown4031
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1474b
| false | null |
t3_1k1474b
|
/r/LocalLLaMA/comments/1k1474b/i_want_to_make_nsfw_porn_images/
| false | false |
nsfw
| 1 | null |
JetBrains AI now has local llms integration and is free with unlimited code completions
| 235 |
[What's New in Rider](https://www.jetbrains.com/rider/whatsnew/)
>Rider goes AI
>JetBrains AI Assistant has received a major upgrade, making AI-powered development more accessible and efficient. With this release, **AI features are now free in JetBrains IDEs**, including unlimited code completion, support for local models, and credit-based access to cloud-based features. [A new subscription system](https://www.jetbrains.com/ai-ides/buy/) makes it easy to scale up with AI Pro and AI Ultimate tiers.
>This release introduces major enhancements to boost productivity and reduce repetitive work, including smarter code completion, support for new cloud models like GPT-4.1 (coming soon), Claude 3.7, and Gemini 2.0, advanced RAG-based context awareness, and a new Edit mode for multi-file edits directly from chat.
| 2025-04-17T04:40:53 |
https://www.reddit.com/gallery/1k14k6a
|
AlgorithmicKing
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k14k6a
| false | null |
t3_1k14k6a
|
/r/LocalLLaMA/comments/1k14k6a/jetbrains_ai_now_has_local_llms_integration_and/
| false | false | 235 | null |
|
Back to Local: What’s your experience with Llama 4
| 44 |
Lots of news and discussion recently about closed-source, API-only models (which is understandable), but let’s pivot back to local models.
What’s your recent experience with Llama 4? I actually find it quite great, better than 3.3 70B, and it’s really optimized for CPU inference. Also, if it fits in the unified memory of your Mac, it just speeds along!
| 2025-04-17T04:51:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k14pyg/back_to_local_whats_your_experience_with_llama_4/
|
Balance-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k14pyg
| false | null |
t3_1k14pyg
|
/r/LocalLLaMA/comments/1k14pyg/back_to_local_whats_your_experience_with_llama_4/
| false | false |
self
| 44 | null |
Seeking AI Project Team – Model Building / Fine-Tuning
| 1 |
[removed]
| 2025-04-17T05:26:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k159aq/seeking_ai_project_team_model_building_finetuning/
|
Formal_Passion_505
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k159aq
| false | null |
t3_1k159aq
|
/r/LocalLLaMA/comments/1k159aq/seeking_ai_project_team_model_building_finetuning/
| false | false |
self
| 1 | null |
We fought SB-1047; the same is happening in New York and now is a good time to voice opposition to the RAISE Act
| 76 |
I've been lurking r/LocalLLaMA for a while, and remember how the community reacted when lawmakers in California attempted to pass SB-1047, an anti-open weights piece of legislation that would punish derivative models and make the creators of open-weights models liable for so much that open-weights models would be legally barely viable. Some links to posts from the anti-SB-1047 era: [https://www.reddit.com/r/LocalLLaMA/comments/1es87fm/right\_now\_is\_a\_good\_time\_for\_californians\_to\_tell/](https://www.reddit.com/r/LocalLLaMA/comments/1es87fm/right_now_is_a_good_time_for_californians_to_tell/)
[https://www.reddit.com/r/LocalLLaMA/comments/1cxqtrv/california\_senate\_passes\_sb1047/](https://www.reddit.com/r/LocalLLaMA/comments/1cxqtrv/california_senate_passes_sb1047/)
[https://www.reddit.com/r/LocalLLaMA/comments/1fkfkth/quick\_reminder\_sb\_1047\_hasnt\_been\_signed\_into\_law/](https://www.reddit.com/r/LocalLLaMA/comments/1fkfkth/quick_reminder_sb_1047_hasnt_been_signed_into_law/)
Thankfully, Governor Gavin Newsom vetoed the bill, and the opposition of the open-source community was heard. However, there is now a similar threat in the state of New York: the RAISE Act (A.6453).
The RAISE Act, like SB-1047, imposes state laws that affect models everywhere. Although it does not go as far as the SB-1047, it still should be in principle opposed that a single jurisdiction can be disruptive in a general model release. Outside of that initial consideration, I have listed things I find particularly problematic with the act and its impact on AI development:
* The act imposes a rule that if a model is trained with over $5m of resources, a third-party auditor must be hired to audit its compliance.
* In addition, even before you cross the $5m threshold, if you **plan** to train a model that would qualify you as a large developer, you must implement and publish a safety protocol (minus some detail requirements) and send a redacted copy to the AG before training begins.
* You may **not** deploy a frontier model if it poses an “unreasonable risk” of causing critical harm (e.g. planning a mass attack or enabling a bioweapon).
First off, it is not at all clear what constitutes an "unreasonable risk". Something like planning a mass attack is probably already possible with prompt engineering on current frontier models with search capabilities, and the potential liability implications of this "unreasonable risk" provision can stifle development. The issue I have with third-party audits is that many of these audit groups are themselves invested in the "AI safety" bubble. Rules that apply even before one starts training are also a dangerous precedent and open the door to far more regulatory hurdles in the future. Even if this act is not as egregious as SB-1047, it is my opinion that passing it into state law would set a dangerous precedent, and I hope that federal legislation that is pro-development and preempts state laws like these is passed. (Although that's just one of my pipe dreams; the chance of such federal legislation is probably low, considering the Trump admin is thinking of banning DeepSeek right now.)
The representative behind the RAISE Act is Alex Bores of the 73rd District of New York, and if you are in New York, I encourage you to contact your local representative in the New York State Assembly to oppose it.
| 2025-04-17T05:28:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k15ah1/we_fought_sb1047_the_same_is_happening_in_new/
|
Suitable-Listen355
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k15ah1
| false | null |
t3_1k15ah1
|
/r/LocalLLaMA/comments/1k15ah1/we_fought_sb1047_the_same_is_happening_in_new/
| false | false |
self
| 76 | null |
Project AiBiter: Running LLMs from Super-Compressed Files (Directly!) - PoC Success
| 0 |
Hey LLM folks,
Tired of huge model downloads and VRAM limits? I've been exploring an idea called **AiBiter (.aibit)**: a format to heavily compress models (like GPT, Llama) AND run them directly from that compressed file – no separate decompression needed.
The goal is simple: make big models usable on less powerful hardware (Colab T4, older GPUs, etc.).
**PoC Update:**
I ran a Proof of Concept using GPT-2, quantizing it to int8 and packaging it into an early .aibit format. After tackling some tricky loading challenges related to quantization states and model structures, **I got it working!**
* Original FP16 model size: \~550MB
* Quantized it to INT8 and packaged it into an early .aibit format.
* **Resulting .aibit file size: \~230MB** (a >50% reduction just with basic INT8!)
I can now load the .aibit file and run inference directly from the pre-quantized weights, seeing significant size reduction and reasonable performance (\~35 tok/s, \~300-400MB VRAM peak on T4).
**Important Caveats:**
* This is **highly experimental** and very early stage.
* It currently only works for this specific GPT-2 int8 setup.
* The format itself (currently just ZIP) isn't optimized yet.
**No Code/How-To Yet:**
Because the loading process is still quite specific and needs a lot more work to be robust and generalizable, **I'm not sharing the exact implementation details at this time.** It needs refinement before it's ready for wider use.
**Feedback Wanted:**
Does this concept of a directly runnable, ultra-compressed format sound useful? What are your biggest hurdles with model size and deployment? What would you want from something like AiBiter?
Let me know what you think!
**TL;DR:** Project AiBiter aims to compress LLMs massively AND run them directly. Got a PoC working for GPT-2 int8. Highly experimental, no code shared yet. Is this interesting/needed?
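Since the author isn't sharing code, here is a generic sketch of the underlying idea only (symmetric int8 weight quantization plus a zip-based container). Everything in it, including the manifest layout and the `.aibit` name reuse, is made up for illustration and is not the AiBiter implementation.

```python
# Generic sketch of the idea (NOT the AiBiter code): symmetric int8 weight quantization
# plus packaging weights and per-tensor scales into a single zip-based container file.
import io, json, zipfile
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0
    q = (w / scale).round().clamp(-128, 127).to(torch.int8)
    return q, scale

# Toy weights standing in for a real model's state dict.
weights = {"layer0.weight": torch.randn(768, 768), "layer1.weight": torch.randn(768, 768)}

with zipfile.ZipFile("model.aibit", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    manifest = {}
    for name, w in weights.items():
        q, scale = quantize_int8(w)
        buf = io.BytesIO()
        torch.save(q, buf)
        zf.writestr(name + ".pt", buf.getvalue())
        manifest[name] = {"scale": scale.item(), "dtype": "int8"}
    zf.writestr("manifest.json", json.dumps(manifest))

# At load time: read each tensor back and dequantize with q.float() * scale.
```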
| 2025-04-17T05:34:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k15e1x/project_aibiter_running_llms_from_supercompressed/
|
AnyCookie10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k15e1x
| false | null |
t3_1k15e1x
|
/r/LocalLLaMA/comments/1k15e1x/project_aibiter_running_llms_from_supercompressed/
| false | false |
self
| 0 | null |
Help! choosing hardware
| 1 |
[removed]
| 2025-04-17T05:36:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k15eof/help_choosing_hardware/
|
Sirweeb9900
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k15eof
| false | null |
t3_1k15eof
|
/r/LocalLLaMA/comments/1k15eof/help_choosing_hardware/
| false | false |
self
| 1 | null |
What models have unusual features or strengths (forget the coding, math, etc..)
| 4 |
We know the benchmarks aren't everything - or even what matters..
| 2025-04-17T05:50:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k15mfp/what_models_have_unusual_features_or_strengths/
|
Jethro_E7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k15mfp
| false | null |
t3_1k15mfp
|
/r/LocalLLaMA/comments/1k15mfp/what_models_have_unusual_features_or_strengths/
| false | false |
self
| 4 | null |
Where can I check ai coding assistant benchmarks?
| 3 |
Any sources?
| 2025-04-17T06:03:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k15thd/where_can_i_check_ai_coding_assistant_benchmarks/
|
Namra_7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k15thd
| false | null |
t3_1k15thd
|
/r/LocalLLaMA/comments/1k15thd/where_can_i_check_ai_coding_assistant_benchmarks/
| false | false |
self
| 3 | null |
o4-mini is fire🔥awesome model & free on chatgpt.com
| 0 | 2025-04-17T06:19:42 |
balianone
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k161rg
| false | null |
t3_1k161rg
|
/r/LocalLLaMA/comments/1k161rg/o4mini_is_fireawesome_model_free_on_chatgptcom/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'TgdgsQbwsNF3OSS2qz0yIoeRRasBhRbLEf2Y8shQazk', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=108&crop=smart&auto=webp&s=379e95585021dc0bc15c40cafa4dca9467e2261a', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=216&crop=smart&auto=webp&s=f8a2a8de734b64ceadc4f65ff62c21a54c1702ed', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=320&crop=smart&auto=webp&s=22474770bab0d9238e594b6e55caa9fd4a3ebd57', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=640&crop=smart&auto=webp&s=32000b3aa3ed2cb16cd398dd3964e6ecdf982fb3', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=960&crop=smart&auto=webp&s=83a2c61a3a616bf97e7e5bc60a5f37814fc96262', 'width': 960}, {'height': 419, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?width=1080&crop=smart&auto=webp&s=567a12a7f22e3ece9dbe8d73930c507e344c7cbd', 'width': 1080}], 'source': {'height': 457, 'url': 'https://preview.redd.it/rjh4dgei8cve1.png?auto=webp&s=7fc944c86c8718f6cd469753e9c200876e6cdb39', 'width': 1177}, 'variants': {}}]}
|
|||
Right GPUs for scaling out HF sentence transformers?
| 1 |
[removed]
| 2025-04-17T06:42:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16do7/right_gpus_for_scaling_out_hf_sentence/
|
9302462
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16do7
| false | null |
t3_1k16do7
|
/r/LocalLLaMA/comments/1k16do7/right_gpus_for_scaling_out_hf_sentence/
| false | false |
self
| 1 | null |
How to download mid-large llms in slow network?
| 0 |
I want to download LLMs (I'd prefer Ollama); in general, 7B models are around 4.7 GiB and 14B is 8\~10 GiB,
but my internet is too slow: 500 KB/s \~ 2 MB/s (not Mb, it's MB).
So what I want is, if possible, to download, stop manually at some point, then download again another day, then stop again.
Or if the network goes off for some reason, don't start from 0; instead resume from a particular chunk, or from where we left off.
So does Ollama support this kind of partial download over a long time?
When I tried Ollama to download a 3 GiB model, it failed in the middle and I had to start from scratch.
Is there any way I can manually download chunks of, say, 200 MB each and then assemble them at the end?
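For what it's worth, re-running `huggingface-cli download` generally resumes an interrupted file rather than starting over, so that can be a fallback when Ollama restarts from zero. If you want full manual control, here is a minimal resumable-download sketch using HTTP Range requests; the URL and filename are placeholders, and the server has to support ranges (Hugging Face does):

```python
# Minimal resumable-download sketch using HTTP Range requests.
# Re-run it any time the connection drops; it continues from the last saved byte.
import os
import requests

def resume_download(url: str, path: str, chunk_size: int = 1 << 20) -> None:
    done = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={done}-"} if done else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        r.raise_for_status()
        if done and r.status_code != 206:
            raise RuntimeError("Server ignored the Range header; cannot resume safely.")
        with open(path, "ab" if done else "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)

resume_download(
    "https://example.com/models/some-7b-q4.gguf",  # placeholder URL
    "some-7b-q4.gguf",
)
```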
| 2025-04-17T06:42:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16dp5/how_to_download_midlarge_llms_in_slow_network/
|
InsideResolve4517
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16dp5
| false | null |
t3_1k16dp5
|
/r/LocalLLaMA/comments/1k16dp5/how_to_download_midlarge_llms_in_slow_network/
| false | false |
self
| 0 | null |
so those 5060Tis....
| 10 |
This is a follow-up to my post yesterday about getting hold of a pair of 5060 Tis.
Well, so far things have not gone smoothly. Despite me grabbing 2x different cards, neither will actually physically fit in my G292-Z20: they have power connectors on top of the card right in the middle, meaning they don't fit in the GPU cartridges.
Thankfully I have a backup, a less-than-ideal one but a backup no less, in the form of my G431-MM0. That's really a mining rig and technically only has 1x per slot, but it was at least a way to test, and fair against the CMPs as they only have 1x.
So I get them fitted in, fire up and... they aren't seen by nvidia-smi, and it hits me: "drivers, idiot". I do some searching and find a link from Phoronix to the drivers that supposedly support the 5060 Ti, installed them, but still no cigar. I figure it must be because I was on Ubuntu 22.04, which is pretty old now, so I grab the very latest Ubuntu, do a clean install, install the drivers, and still nope.
So I bite the bullet and do something I haven't in a long time: I download Windows, install it, install the driver, do updates, and finally grab LM Studio and 2 models, gemma-27b at Q6 and QwQ-32b at Q4. I chose to load Gemma first: full offload, 20k context, FA enabled, and I ask it to tell me a short story.
At the end of the story I got the token count: a measly 8.9 tokens per sec. I'm sure that cannot possibly be right, but so far it's the best I've got. Something must be going very wrong somewhere, though; I was fully expecting they'd absolutely trounce the CMP100-210s.
Back when I ran qwen2.5-32b-q4k (admittedly with spec decoding) on 2x CMPs I was pulling 24 tokens per sec, so I just ran the same test on the 5060 Tis: 14.96 tokens per sec. Now I know they're limited by the 1x bus, but I assumed that with them being much newer and having FA and other modern features they'd still be faster despite having slower memory than the CMPs. It seems that's just not the case and the CMPs offer even better value than I'd imagined (if only you could have enabled 16x on them, they'd have been monsters), or something is deeply wrong with the setup (I've never run LLMs under Windows before).
I'll keep playing about of course, and hopefully soon I'll work out how to fit them in the other server so I can try them with the full 16x lanes. I feel like it's too early to really judge them, at least till I can get them running properly, but so far they don't appear to be nearly the ultimate budget card I was hoping they'd be.
I'll post more info as and when I have it; hopefully others are having better results than me.
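One way to keep comparisons consistent across Windows and Linux is to time generations through the local OpenAI-compatible endpoint that LM Studio (and llama.cpp's llama-server) exposes. A rough sketch, assuming LM Studio's default port and whatever model name the server reports:

```python
# Rough tokens/sec measurement against a local OpenAI-compatible server.
# LM Studio defaults to http://localhost:1234/v1; llama.cpp's llama-server uses :8080.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

start = time.perf_counter()
resp = client.chat.completions.create(
    model="gemma-3-27b-it",  # use whatever model name the local server reports
    messages=[{"role": "user", "content": "Tell me a short story."}],
    max_tokens=512,
)
elapsed = time.perf_counter() - start  # note: includes prompt processing time

completion_tokens = resp.usage.completion_tokens
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```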
| 2025-04-17T06:43:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16e8p/so_those_5060tis/
|
gaspoweredcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16e8p
| false | null |
t3_1k16e8p
|
/r/LocalLLaMA/comments/1k16e8p/so_those_5060tis/
| false | false |
self
| 10 | null |
Gemma-3 27B - My 1st time encounter with a local model that provides links to sources
| 0 |
I tried most of the popular local models, but it was Gemma-3 27B that surprised me by providing links to the sources. Have you seen any other local models with this kind of functionality?
| 2025-04-17T06:51:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16i1w/gemma3_27b_my_1st_time_encounter_with_a_local/
|
mtomas7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16i1w
| false | null |
t3_1k16i1w
|
/r/LocalLLaMA/comments/1k16i1w/gemma3_27b_my_1st_time_encounter_with_a_local/
| false | false |
self
| 0 | null |
Switching to a 5070ti with 16gb vram soon, what class of model can I use now?
| 1 |
[removed]
| 2025-04-17T07:17:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16vfj/switching_to_a_5070ti_with_16gb_vram_soon_what/
|
Benjamin_swoleman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16vfj
| false | null |
t3_1k16vfj
|
/r/LocalLLaMA/comments/1k16vfj/switching_to_a_5070ti_with_16gb_vram_soon_what/
| false | false |
self
| 1 | null |