Dataset schema (Reddit r/LocalLLaMA posts):

column | type | notes
---|---|---
title | string | 1–300 chars
score | int64 | 0–8.54k
selftext | string | 0–40k chars
created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable
url | string | 0–878 chars
author | string | 3–20 chars
domain | string | 0–82 chars
edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18
gilded | int64 | 0–2
gildings | string | 7 classes
id | string | 7 chars
locked | bool | 2 classes
media | string | 646–1.8k chars, nullable
name | string | 10 chars
permalink | string | 33–82 chars
spoiler | bool | 2 classes
stickied | bool | 2 classes
thumbnail | string | 4–213 chars
ups | int64 | 0–8.54k
preview | string | 301–5.01k chars, nullable
Error: The number of tokens is greater than the context length
| 2 |
Exploring the possibilities of LM Studio for [Obsidian PKM](https://obsidian.md), through a plugin called [Copilot](https://www.obsidiancopilot.com/en) (not the MS one).
I’m using the llama-3.2-3b-instruct model. After a few successful prompts I get a non-descriptive error and the LM Studio console reports: `The number of tokens to keep from the initial prompt is greater than the context length`.
With my limited understanding my guess is I need to clear some kind of cache or start with a clean context, but how do I do this? Or is it something else that’s causing this behavior?
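That error usually means the accumulated chat history (plus the tokens the server is told to keep) no longer fits the model's context window, so starting a fresh chat or raising the context length in LM Studio typically clears it. Programmatically, clients often avoid it by trimming the oldest turns before each request. A minimal sketch of that pattern, using a crude whitespace token estimate (a real client would use the model's tokenizer; all names here are illustrative):

```python
# Sketch: trim the oldest chat messages so the prompt fits the model's
# context window. The token estimate is a rough heuristic, not exact.

def estimate_tokens(text: str) -> int:
    # ~1 token per word underestimates for English, so pad by 30%.
    return int(len(text.split()) * 1.3) + 1

def trim_history(messages, n_ctx=4096, reserve_for_reply=512):
    """Drop the oldest non-system messages until the prompt fits."""
    budget = n_ctx - reserve_for_reply
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

history = [{"role": "system", "content": "You are a helpful assistant."}]
history += [{"role": "user", "content": "word " * 2000}]   # oversized old turn
history += [{"role": "user", "content": "short new question"}]
trimmed = trim_history(history, n_ctx=1024)
print([m["content"][:20] for m in trimmed])
```

With the oversized turn dropped, only the system prompt and the newest message survive, which is roughly what "start a clean context" does for you.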
| 2025-04-30T19:01:47 | But-I-Am-a-Robot | https://www.reddit.com/r/LocalLLaMA/comments/1kboh36/error_the_number_of_tokens_is_greater_than_the/ |
Advice
| 1 |
[removed]
| 2025-04-30T19:23:45 | o_meros | https://www.reddit.com/r/LocalLLaMA/comments/1kbozv5/advice/ |
Model outputs from LLM Benchmarks
| 1 |
[removed]
| 2025-04-30T19:51:37 | purplegranite | https://www.reddit.com/r/LocalLLaMA/comments/1kbpn5r/model_outputs_from_llm_benchmarks/ |
Model outputs from LLM Benchmarks
| 1 |
[removed]
| 2025-04-30T19:54:37 | purplegranite | https://www.reddit.com/r/LocalLLaMA/comments/1kbpprh/model_outputs_from_llm_benchmarks/ |
How to run Qwen3-30B-A3B the fastest?
| 1 |
How can I run it on my laptop the fastest? I think it can handle it:
i5-11400H
RTX 3060 GPU (6GB VRAM)
32GB DDR4 RAM
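A 30B model at 4-bit weighs roughly params × bits / 8 bytes, far more than 6 GB of VRAM, so the usual approach (hedged: this is the common community recipe, not an official one) is to keep the attention layers on the GPU and offload the MoE expert weights to system RAM, e.g. via llama.cpp's tensor-override flags; since only ~3B parameters are active per token, speeds can stay usable. Quick sizing arithmetic:

```python
# Back-of-the-envelope sizing for a quantized model: weight bytes are
# roughly (parameter count) * (bits per weight) / 8. Decimal GB.

def quant_size_gb(params_billion: float, bits: float) -> float:
    return params_billion * 1e9 * bits / 8 / 1e9

total = quant_size_gb(30, 4)   # full Qwen3-30B-A3B at ~4-bit
active = quant_size_gb(3, 4)   # ~3B parameters active per token
print(f"whole model ~{total:.0f} GB, active weights per token ~{active:.1f} GB")
# The ~15 GB model can't fit in 6 GB of VRAM, but only ~1.5 GB of
# weights are touched per token, which is why offloading the experts
# to the 32 GB of system RAM can still give decent token rates.
```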
| 2025-04-30T19:55:16 | R46H4V | https://www.reddit.com/r/LocalLLaMA/comments/1kbpqax/how_to_run_qwen330ba3b_the_fastest/ |
Is there a tool that lets you use local llms with search functionality?
| 4 |
I'm trying to figure out if there's a program that allows using local LLMs (like Qwen3 30B A3B) with a search function. The idea would be to run the model locally but still have access to real-time data or external info via search. I really miss the convenience of ChatGPT’s “Browse” mode.
Anyone know of any existing tools that do this, or can explain why it's not feasible?
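It is feasible, and the underlying pattern is simple: let the model emit a search request, run the search, and feed the results back in. A minimal sketch of that loop with both the model and the search backend stubbed out (in practice the model would be a local OpenAI-compatible endpoint such as LM Studio or a llama.cpp server, and `search()` would call a real engine; all names here are illustrative):

```python
# Sketch of the "local model + web search" loop, fully stubbed.

def search(query: str) -> str:
    # Stub: a real version would query a search engine and return snippets.
    return f"[top results for '{query}']"

def ask_model(prompt: str) -> str:
    # Stub for a local LLM call; a real version would POST to the local
    # server's /v1/chat/completions endpoint.
    if "[top results" not in prompt:
        return "SEARCH: Qwen3 release date"   # model decides it needs facts
    return "Answer grounded in the search snippets above."

def answer(question: str) -> str:
    prompt = f"Answer the question. Reply 'SEARCH: <query>' if you need fresh info.\nQ: {question}"
    reply = ask_model(prompt)
    if reply.startswith("SEARCH:"):
        snippets = search(reply.removeprefix("SEARCH:").strip())
        reply = ask_model(f"{prompt}\n{snippets}\nNow answer.")
    return reply

print(answer("When was Qwen3 released?"))
```

Ready-made tools wrap exactly this loop with a real search backend; the moving parts are just prompt conventions plus one HTTP call per search.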
| 2025-04-30T19:58:34 | Rare-Site | https://www.reddit.com/r/LocalLLaMA/comments/1kbpt4a/is_there_a_tool_that_lets_you_use_local_llms_with/ |
Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)
| 1 |
[removed]
| 2025-04-30T20:07:31 | SlingingBits | https://www.youtube.com/watch?v=aiISDmnODzo |
New MONSTER IS COMING
| 1 |
[removed]
| 2025-04-30T20:21:14 | omar07ibrahim1 | https://www.reddit.com/r/LocalLLaMA/comments/1kbqcfj/new_monster_is_coming/ |
Qwen3 quants for OpenVINO are up
| 9 |
https://huggingface.co/collections/Echo9Zulu/openvino-qwen3-68128401a294e27d62e946bc
Inference code examples are coming soon. I started learning the HF hub library this week to automate the process, as it's hard to maintain so many repos.
| 2025-04-30T20:24:10 | Echo9Zulu- | https://www.reddit.com/r/LocalLLaMA/comments/1kbqexm/qwen3_quants_for_openvino_are_up/ |
Buying Tablet with 8-12 GB RAM, Is this enough for small models 1B/3B?
| 1 |
Buying a tablet (Lenovo Idea Tab Pro or Xiaomi Pad 7) with 8-12 GB RAM. RAM isn't expandable on these devices, and there's no dedicated VRAM. So is 8-12 GB enough to run small models like 1B or 1.5B, up to 3B? Planning to use small Gemma, Llama, Qwen, and DS models.
What's your experience running small models on a tablet / smartphone? Are you getting decent performance? Is it possible to get 20 tokens per second? Please let me know your opinions & recommendations. Thanks.
(My smartphone has been in for repair since last week, so I couldn't test this myself before buying the tablet.)
| 2025-04-30T20:28:52 | pmttyji | https://www.reddit.com/r/LocalLLaMA/comments/1kbqit3/buying_tablet_with_812_gb_ram_is_this_enough_for/ |
Openrouter Qwen3 235B A22B doesn't support attached images
| 1 |
[removed]
| 2025-04-30T20:49:33 | TemperatureEast2658 | https://www.reddit.com/r/LocalLLaMA/comments/1kbr07r/openrouter_qwen3_235b_a22b_doesnt_support/ |
Zuckerberg watching you use Qwen 3 instead of LLaMA 4
| 1 | 2025-04-30T21:26:19 | ImportantOwl2939 | https://v.redd.it/mi9b5ww4i1ye1 |
Lm studio makes the computer slow for no reason
| 5 |
With 64 GB of RAM and 12 GB of VRAM, if I put a 14B model in VRAM and don't even use it, just load it, my PC becomes unusably slow.
What is causing this?
| 2025-04-30T21:33:06 | Osama_Saba | https://www.reddit.com/r/LocalLLaMA/comments/1kbs12l/lm_studio_makes_the_computer_slow_for_no_reason/ |
Qwen3 32B and 30B-A3B run at similar speed?
| 10 |
Should I expect a large speed difference between 32B and 30B-A3B if I'm running quants that fit entirely in VRAM?
- 32B gives me 24 tok/s
- 30B-A3B gives me 30 tok/s
I'm seeing lots of people praising 30B-A3B's speed, so I feel like there should be a way for me to get it to run even faster. Am I missing something?
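At batch size 1, decode speed is roughly memory bandwidth divided by the bytes of weights read per token, so a 3B-active MoE should in principle run several times faster than a 32B dense model when both fit in VRAM. Seeing only 30 vs 24 tok/s suggests the runtime isn't exploiting the sparsity (or is streaming more than the active experts). A rough model of the expected gap, with the bandwidth figure and bits-per-weight as stated assumptions:

```python
# Rough decode-speed model: tokens/s ≈ memory bandwidth / bytes of
# weights read per token. Bandwidth is a placeholder; ~4.5 bits/weight
# approximates a Q4_K_M quant. Illustrative arithmetic only.

BW_GBPS = 900            # assumed GPU memory bandwidth in GB/s
BITS_PER_WEIGHT = 4.5

def toks_per_s(active_params_b: float) -> float:
    bytes_per_token = active_params_b * 1e9 * BITS_PER_WEIGHT / 8
    return BW_GBPS * 1e9 / bytes_per_token

dense_32b = toks_per_s(32)   # dense: all 32B weights read per token
moe_a3b = toks_per_s(3)      # MoE: ~3B active weights per token
print(f"32B dense ~{dense_32b:.0f} tok/s, 30B-A3B ~{moe_a3b:.0f} tok/s "
      f"(~{moe_a3b/dense_32b:.1f}x)")
```

By this bandwidth-bound estimate the MoE should be ~10x faster, not 1.25x, which points at the inference stack rather than the hardware.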
| 2025-04-30T21:45:21 | INT_21h | https://www.reddit.com/r/LocalLLaMA/comments/1kbsbjp/qwen3_32b_and_30ba3b_run_at_similar_speed/ |
What ever happened to bigscience and BLOOM?
| 11 |
I remember hearing about them a few years back for making a model as good as GPT-3 or something, and then never heard of them again. Are they still making models? And as for BLOOM, Hugging Face says it got 4k downloads over the past month. Who's downloading a 2-year-old model?
| 2025-04-30T21:53:28 | Neither-Phone-7264 | https://www.reddit.com/r/LocalLLaMA/comments/1kbsifd/what_ever_happened_to_bigscience_and_bloom/ |
A question which non-thinking models (and Qwen3) cannot properly answer
| 5 |
Just saw this question from the German quiz show "Wer wird Millionär?" and tried it out in ChatGPT o3. It solved it without issues. o4-mini did too; 4o and 4.5, on the other hand, could not. Gemini 2.5 also came to the correct conclusion, even without executing code, which the o3/o4 models used. Interestingly, the new Qwen3 models all failed the question, even with thinking.
Question:
> Schreibt man alle Zahlen zwischen 1 und 1000 aus und ordnet sie alphabetisch, dann ist die Summe der ersten und der letzten Zahl…? (If you write out all numbers between 1 and 1000 and sort them alphabetically, the sum of the first and the last number is…?)
Correct answer:
> 8 (Acht) + 12 (Zwölf) = 20, since "acht" comes first and "zwölf" comes last alphabetically.
| 2025-04-30T21:55:43 | Danmoreng | https://www.reddit.com/r/LocalLLaMA/comments/1kbskb9/a_question_which_nonthinking_models_and_qwen3/ |
Rtx 3090 set itself on fire, why?
| 0 |
After running training on my RTX 3090, connected with a pretty flimsy OCuLink connection, it lagged the whole system (8x RTX 3090 rig) and got very hot. I unplugged the server, waited 30s, and plugged it back in. Once I plugged it in, smoke came out of one 3090. The whole system still works fine, all 7 other GPUs still work, but this GPU now doesn't even spin up its fans when plugged in.
I stripped it down to see what's up. On the right side I see something burnt, which also smells. What is it? Is the RTX 3090 still fixable? Can I debug it? I am equipped with a multimeter.
| 2025-04-30T22:02:01 | Armym | https://www.reddit.com/gallery/1kbspu2 |
Rtx 3090 set itself on fire, why?
| 7 |
After running training on my RTX 3090, connected with a pretty flimsy OCuLink connection, it lagged the whole system (8x RTX 3090 rig) and got very hot. I unplugged the server, waited 30s, and plugged it back in. Once I plugged it in, smoke came out of one 3090. The whole system still works fine, all 7 other GPUs still work, but this GPU now doesn't even spin up its fans when plugged in.
I stripped it down to see what's up. On the right side I see something burnt, which also smells. What is it? Is the RTX 3090 still fixable? Can I debug it? I am equipped with a multimeter.
| 2025-04-30T22:02:08 | Armym | https://www.reddit.com/gallery/1kbspxz |
ATTAP (ALL THINGS TO ALL PEOPLE) invites you to The World's First VIBEATHON at TECH WEEK NYC June 2 #NYTechWeek
Grab Your Invite Now
https://partiful.com/e/dFWMaelGpf0HW7SUwq6J
| 0 | 2025-04-30T22:02:56 | brucespector | /r/LocalLLaMA/comments/1kbsqnb/attap_all_things_to_all_people_invites_you_to_the/ |
XiaomiMiMo/MiMo: MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining
| 9 | 2025-04-30T22:06:47 |
https://github.com/XiaomiMiMo/MiMo?tab=readme-ov-file
| boxingdog |
Surprised by people hyping up Qwen3-30B-A3B when it gets outmatched by Qwen3-8b
| 0 |
It is good and it is fast, but I've tried so hard to love it and all I get is inconsistent and questionable intelligence with thinking enabled; without thinking enabled, it loses to Gemma 4B. Hallucinations are very high.
I have compared it with:
- Gemma 12b QAT 4_0
- Qwen3-8B-Q4_K_KXL with think enabled.
Qwen3-30B-A3B_Q4_KM with think enabled:
- Fails 30% of the times to above models
- Matches 70%
- Does not exceed them in anything.
Qwen3-30B-A3B_Q4_KM think disabled
- Fails 60-80% of the same questions those 2 models get perfectly.
It somehow just gaslights itself during thinking into producing the wrong answer when 8b is smoother.
With my limited VRAM (8 GB) and 32 GB of system RAM, I get better speeds and better intelligence with the 8B model. It is incredibly disappointing.
I used the recommended configurations and chat templates on the official repo, re-downloaded the fixed quants.
What's the experience of you guys??? Please give 8b a try and compare.
| 2025-04-30T22:15:10 | deep-taskmaster | https://www.reddit.com/r/LocalLLaMA/comments/1kbt0i9/surprised_by_people_hyping_up_qwen330ba3b_when_it/ |
Openrouter QWEN3 and QWEN 2.5 VL not accepting image attachments
| 1 |
[removed]
| 2025-04-30T22:23:42 | SerbianSlavic | https://www.reddit.com/r/LocalLLaMA/comments/1kbt7i9/openrouter_qwen3_and_qwen_25_vl_not_accepting/ |
Openrouter QWEN3 and QWEN 2.5 VL not accepting image attachments
| 1 |
[removed]
| 2025-04-30T22:29:07 | SerbianSlavic | https://www.reddit.com/r/LocalLLaMA/comments/1kbtbnw/openrouter_qwen3_and_qwen_25_vl_not_accepting/ |
Mercury, the world’s first commercial-scale diffusion language model
| 0 | 2025-04-30T22:33:45 |
https://www.inceptionlabs.ai/introducing-mercury
| Regular_Working6492 |
Best embedding model and <3B local LLM for Dutch in RAG
| 1 |
[removed]
| 2025-04-30T22:47:18 | Material-Ad5426 | https://www.reddit.com/r/LocalLLaMA/comments/1kbtplp/best_embedding_model_and_3b_local_llm_for_dutch/ |
kluster.ai now hosting Qwen3-235B-A22B
| 4 |
I like it better than o1 and deepseek-R1. What do y’all think?
| 2025-04-30T22:47:43 | swarmster | https://www.reddit.com/r/LocalLLaMA/comments/1kbtpxq/klusterai_now_hosting_qwen3235ba22b/ |
Can you put a local AI in a project and make it analyze the whole source code?
| 1 |
Is it possible to give it all the context at once?
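Yes, up to the model's context window: the common approach is to pack the project's source files into a single prompt under a token budget, which is roughly what repo-packing tools do. A hedged stdlib sketch of that step (the 4-characters-per-token estimate is a rule of thumb, not exact):

```python
# Sketch: pack a project's source files into one prompt string, path-
# sorted, stopping when a rough token budget is hit. Stdlib only.

from pathlib import Path

def pack_repo(root: str, exts=(".py", ".md"), token_budget=32_000) -> str:
    parts, used = [], 0
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in exts or not path.is_file():
            continue
        text = path.read_text(errors="replace")
        cost = len(text) // 4 + 1          # ~4 chars per token heuristic
        if used + cost > token_budget:
            break                           # budget exhausted
        parts.append(f"### {path}\n{text}")
        used += cost
    return "\n\n".join(parts)

# Usage: feed pack_repo(".") as context to a local model.
```

For codebases larger than the context window, you'd need retrieval (embedding chunks and pulling in only the relevant files per question) instead of packing everything.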
| 2025-04-30T22:58:38 | Intelligent_Pie_8729 | https://www.reddit.com/r/LocalLLaMA/comments/1kbtyjc/can_you_put_a_local_ai_in_a_project_and_make_it/ |
Qwen3-30B-A3B: Ollama vs LMStudio Speed Discrepancy (30tk/s vs 150tk/s) – Help?
| 81 |
I’m trying to run the **Qwen3-30B-A3B-GGUF** model on my PC and noticed a huge performance difference between **Ollama** and **LMStudio**. Here’s the setup:
* **Same model**: Qwen3-30B-A3B-GGUF.
* **Same hardware**: Windows 11 Pro, RTX 5090, 128GB RAM.
* **Same context window**: 4096 tokens.
**Results:**
* **Ollama**: \~30 tokens/second.
* **LMStudio**: \~150 tokens/second.
I’ve tested both with identical prompts and model settings. The difference is massive, and I’d prefer to use Ollama.
**Questions:**
1. Has anyone else seen this gap in performance between Ollama and LMStudio?
2. Could this be a configuration issue in Ollama?
3. Any tips to optimize Ollama’s speed for this model?
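A common first step is checking `ollama ps` to see the CPU/GPU split, since Ollama may silently keep layers on the CPU while LM Studio offloads everything. You can force full GPU offload and match the context size per request via the API's `options` field; a sketch of the payload (the option names `num_gpu` and `num_ctx` are Ollama Modelfile parameters — verify against the docs for your version):

```python
# Sketch: request payload for Ollama's /api/generate, asking for full
# GPU offload and the same 4096-token context as the LM Studio run.

import json

payload = {
    "model": "qwen3:30b-a3b",   # illustrative model tag
    "prompt": "Why is the sky blue?",
    "options": {
        "num_ctx": 4096,  # match the LM Studio context window
        "num_gpu": 99,    # request all layers on the GPU
    },
    "stream": False,
}

body = json.dumps(payload)
# A real call: requests.post("http://localhost:11434/api/generate", data=body)
print(body[:60])
```

If the gap persists with all layers on the GPU, the difference likely comes down to how each runtime's llama.cpp build handles this MoE architecture.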
| 2025-04-30T23:10:44 | az-big-z | https://www.reddit.com/r/LocalLLaMA/comments/1kbu7wf/qwen330ba3b_ollama_vs_lmstudio_speed_discrepancy/ |
How do i fine-tune an llm (or is there an off the shelf version for my needs?)
| 1 |
Hey y'all,
I'm working on a computer-use agent which currently uses Gemini, but it's kinda crappy, plus I wanna try the privacy angle by serving the LLM locally. It's gonna be Mac-exclusive and run on M-series chips only (cause Intel Macs suck), so I'm just wondering if there are any off-the-shelf optimized CUA models? If not, how would I train a model? I have a base model; I wanna use Qwen3 0.6B (it's kinda smart for its size but still really silly for important computer-use tasks).
Let me know!!! Thanks
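Fine-tuning for this usually starts with building an SFT dataset of (screen observation, instruction → action) examples in chat-format JSONL, then running a LoRA fine-tune with a trainer such as Unsloth or axolotl. A hedged sketch of the dataset step; the field names and action syntax are illustrative, not a specific tool's schema:

```python
# Sketch: turn recorded (observation, instruction, action) triples into
# chat-format JSONL lines, the shape most SFT trainers accept.

import json

def to_sft_record(observation: str, instruction: str, action: str) -> str:
    record = {"messages": [
        {"role": "system",
         "content": "You control a macOS desktop. Reply with one action."},
        {"role": "user",
         "content": f"Screen: {observation}\nTask: {instruction}"},
        {"role": "assistant", "content": action},
    ]}
    return json.dumps(record, ensure_ascii=False)

line = to_sft_record("Finder window open", "open Safari",
                     'click("Safari icon")')
print(line)
# Each line goes into train.jsonl; a LoRA fine-tune of a small base
# model (like Qwen3 0.6B) can then learn the action format.
```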
| 2025-04-30T23:11:36 | YaBoiGPT | https://www.reddit.com/r/LocalLLaMA/comments/1kbu8jw/how_do_i_finetune_an_llm_or_is_there_an_off_the/ |
Qwen3-235B-A22B on livebench
| 86 | 2025-05-01T00:19:11 |
https://www.reddit.com/gallery/1kbvna2
| AaronFeng47 |
How to land an AI/ML Engineer job in 2 months in the US
| 1 |
[removed]
| 2025-05-01T00:20:46 | Intelligent-Boat9824 | https://www.reddit.com/r/LocalLLaMA/comments/1kbvofj/how_to_land_an_aiml_engineer_job_in_2_months_in/ |
Is it possible to give a non-vision model vision?
| 2 |
I'd like to give vision capabilities to an r1 distilled model. Would that be possible? I have the resources to finetune if needed
| 2025-05-01T00:21:44 | maxwell321 | https://www.reddit.com/r/LocalLLaMA/comments/1kbvp4a/is_it_possible_to_give_a_nonvision_model_vision/ |
Phi 4 Reasoning
| 116 | 2025-05-01T00:25:02 |
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/04/phi_4_reasoning.pdf
| adefa |
Phi 4 reasoning dropped a few minutes ago
| 1 |
Battle of the small reasoners is heating up:
MiMo, GLM4-Z1, Qwen3, and now this.
| 2025-05-01T00:27:57 | datbackup | https://huggingface.co/microsoft/Phi-4-reasoning |
Has anyone also seen Qwen3 models giving better results than API?
| 14 |
Pretty much the title. And I’m using the recommended settings. Qwen3 is insanely powerful but I can only see it through the website unfortunately :(.
| 2025-05-01T00:28:12 | MKU64 | https://www.reddit.com/r/LocalLLaMA/comments/1kbvtoh/has_anyone_also_seen_qwen3_models_giving_better/ |
Phi-4 reasoning and MAI-DS-R1
| 13 |
These repos haven't seen much activity, so I'm not sure many have noticed yet, but Microsoft has released some reasoning versions of Phi-4.
[microsoft/Phi-4-mini-reasoning · Hugging Face](https://huggingface.co/microsoft/Phi-4-mini-reasoning)
[microsoft/Phi-4-reasoning · Hugging Face](https://huggingface.co/microsoft/Phi-4-reasoning)
[microsoft/Phi-4-reasoning-plus · Hugging Face](https://huggingface.co/microsoft/Phi-4-reasoning-plus)
They also have released [MAI-DS-R1](https://huggingface.co/microsoft/MAI-DS-R1), "a DeepSeek-R1 reasoning model that has been post-trained by the Microsoft AI team to improve its responsiveness on blocked topics and its risk profile, while maintaining its reasoning capabilities and competitive performance" ([fp8 version](https://huggingface.co/microsoft/MAI-DS-R1-FP8)). This repo has received some more attention, but I haven't seen it mentioned here.
| 2025-05-01T00:28:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbvtx9/phi4_reasoning_and_maidsr1/
|
Informal_Warning_703
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbvtx9
| false | null |
t3_1kbvtx9
|
/r/LocalLLaMA/comments/1kbvtx9/phi4_reasoning_and_maidsr1/
| false | false |
self
| 13 |
{'enabled': False, 'images': [{'id': 'aFwjlxEUSA5Rv1F88SshZvk3uB0dCaBNTVKkobllcsM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=108&crop=smart&auto=webp&s=4588f0773e785f6ae58fac2e15f74cfa0c7e544f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=216&crop=smart&auto=webp&s=c2389caed6686a776dadf94db30cae96d10a8d99', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=320&crop=smart&auto=webp&s=8bfcdd0b02a148805ef87af18669c01c5692db02', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=640&crop=smart&auto=webp&s=d284a558eda81316ba8a50a9f2ef7f93e5c3e073', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=960&crop=smart&auto=webp&s=f6497f8cd05b5b2c400c4ddbc254e5a2f0f8afbe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=1080&crop=smart&auto=webp&s=397c24a868e446c5fc1c4cc2b4f2e81aa53ba37b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?auto=webp&s=abf5fe989ebc506098b55f41aee56b45c03010b4', 'width': 1200}, 'variants': {}}]}
|
Hardware guidance for local LLMs
| 1 |
[removed]
| 2025-05-01T00:29:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbvurp/hardware_guidance_for_local_llms/
|
waynevergoesaway
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbvurp
| false | null |
t3_1kbvurp
|
/r/LocalLLaMA/comments/1kbvurp/hardware_guidance_for_local_llms/
| false | false |
self
| 1 | null |
Microsoft just released Phi 4 Reasoning (14b)
| 686 | 2025-05-01T00:32:30 |
https://huggingface.co/microsoft/Phi-4-reasoning
|
Thrumpwart
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbvwsc
| false | null |
t3_1kbvwsc
|
/r/LocalLLaMA/comments/1kbvwsc/microsoft_just_released_phi_4_reasoning_14b/
| false | false | 686 |
{'enabled': False, 'images': [{'id': 'qtFRsVUHHSscVBLKk81CTXXwniwSEY09ICcAY3SjgeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=108&crop=smart&auto=webp&s=72a36ee08cc9ebaacf0b42e0836d96adfa01fd33', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=216&crop=smart&auto=webp&s=8b7383e34c8d8d73ad76b401b540961f847b1a2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=320&crop=smart&auto=webp&s=5a7ab717c7d245951dbaee99652aa5cac047d977', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=640&crop=smart&auto=webp&s=6d5d1eccafad99c7ee9d2abed2ab5c72d51e01dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=960&crop=smart&auto=webp&s=025078f1abc697b435610b0b667b187773c94df0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?width=1080&crop=smart&auto=webp&s=01bfedd96a19487c0cf5baa39041f8661ac42146', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xhbXhNzzAfOFxvXg_YIhAxSOEKRdg1clS_62I7lKK8c.jpg?auto=webp&s=35a1ed7cdd54b483e926627f69dde2689174085f', 'width': 1200}, 'variants': {}}]}
|
||
hardware for locally deployed multi-agent team clusters
| 1 |
[removed]
| 2025-05-01T00:36:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbvza8/hardware_for_locally_deployed_multiagent_team/
|
waynevergoesaway
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbvza8
| false | null |
t3_1kbvza8
|
/r/LocalLLaMA/comments/1kbvza8/hardware_for_locally_deployed_multiagent_team/
| false | false |
self
| 1 | null |
A browser extension that redacts sensitive information from your AI prompts
| 1 |
Redactifi is a browser extension designed to detect and redact sensitive information from your AI prompts. It has a built-in ML model and also uses advanced pattern recognition. This means that all processing happens locally on your device - your prompts aren't sent or stored anywhere. Any thoughts/feedback would be greatly appreciated!
Check it out here:
[https://www.redactifi.com/](https://www.redactifi.com/)
And download for free here:
[https://chromewebstore.google.com/detail/hglooeolkncknocmocfkggcddjalmjoa?utm\_source=item-share-cb](https://chromewebstore.google.com/detail/hglooeolkncknocmocfkggcddjalmjoa?utm_source=item-share-cb)
| 2025-05-01T00:41:27 |
https://v.redd.it/4xouxyj1h2ye1
|
fxnnur
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbw32k
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4xouxyj1h2ye1/DASHPlaylist.mpd?a=1748652101%2CZGM2M2E2ZWIyOWYwZWE3YTA5ZDVkZGIxYWIyMDc5OThkNGIwMzRlMGI4OTFmMzg2ODJiYjJjYTRiMzJlMGQ3ZA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/4xouxyj1h2ye1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4xouxyj1h2ye1/HLSPlaylist.m3u8?a=1748652101%2CNzBlNmRiNmY4ODY3ZmQ2YjNhZTE1MTI4MTZlNmQ5YjM4YjViOTYwNmM3MDA1YWJjNjc5YTJlMzgwMDc0MTE3Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4xouxyj1h2ye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kbw32k
|
/r/LocalLLaMA/comments/1kbw32k/a_browser_extension_that_redacts_sensitive/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=108&crop=smart&format=pjpg&auto=webp&s=a766442da6c7c6866e7051e0d9a06455ee6035ab', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=216&crop=smart&format=pjpg&auto=webp&s=515b93c37871707de09f7cbf00e3f84470c25b6f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=320&crop=smart&format=pjpg&auto=webp&s=2ab89953477300778340884cf2a648d1679ce7ac', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=640&crop=smart&format=pjpg&auto=webp&s=cfd6c6b78f925b08f10ac05241331bfcdc62d7bf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=960&crop=smart&format=pjpg&auto=webp&s=73d01ac41bf25790f583e9dd107bc235410e3233', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5be206f1e4b816e9b1f41805501af1e2ac0e2279', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NTdyY3F4ajFoMnllMTU6hYKe2ehrakuefMCzVGe9Y9N5Iq-ToO2VNCJfhuoB.png?format=pjpg&auto=webp&s=f25cdc853a6f849b6b9549910e154ea5d4b84310', 'width': 1920}, 'variants': {}}]}
|
|
Method for spreading the love? -ot regex for splitting up models.
| 0 |
What's everyone's go-to for figuring out what to put where? There's Qwen now plus DeepSeek, and layer sizes will vary by quant. Llama made it easy with its fixed experts.
Do you just go through the entire layer list? I'm only filling 60% of my GPU memory by cribbing from other people's setups.
-ot "([0]).ffn_.*_exps.=CUDA0,([2]).ffn_.*_exps.=CUDA1,([4]).ffn_.*_exps.=CUDA2,([6]).ffn_.*_exps.=CUDA3,([8-9]|[1-9][0-9])\.ffn_.*_exps\.=CPU" \
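For comparison, here is a hedged sketch (plain Python, independent of llama.cpp) of one way to generate such an `-ot` override string from a device-to-layers map instead of hand-writing the regex. The device names and layer splits below are assumptions for illustration, not taken from the command above:

```python
def build_ot_flag(assignments):
    """Build a llama.cpp -ot override string from {device: [layer indices]}.

    Each entry becomes one comma-separated "pattern=buffer" pair, e.g.
    "(0|1)\\.ffn_.*_exps\\.=CUDA0", matching expert tensors of those layers.
    """
    parts = []
    for device, layers in assignments.items():
        alternatives = "|".join(str(i) for i in layers)
        parts.append(rf"({alternatives})\.ffn_.*_exps\.={device}")
    return ",".join(parts)

# Example: two expert layers per GPU, remaining layers offloaded to CPU.
flag = build_ot_flag({
    "CUDA0": [0, 1],
    "CUDA1": [2, 3],
    "CPU": list(range(4, 10)),
})
print(flag)
```

Generating the string this way at least keeps the layer ranges per device in one place, so adjusting the split when you're only at 60% VRAM usage is a one-line change.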
| 2025-05-01T00:43:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbw4n7/method_for_spreading_the_love_ot_regex_for/
|
a_beautiful_rhind
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbw4n7
| false | null |
t3_1kbw4n7
|
/r/LocalLLaMA/comments/1kbw4n7/method_for_spreading_the_love_ot_regex_for/
| false | false |
self
| 0 | null |
Hardware advice for a $20-25 k local multi-GPU cluster to power RAG + multi-agent workflows
| 3 |
Hi everyone—looking for some practical hardware guidance.
# ☑️ My use-case
* Goal: stand up a **self-funded**, on-prem cluster that can (1) act as a retrieval-augmented, multi-agent “research assistant” and (2) serve as a low-friction POC to win over leadership who are worried about cloud egress.
* Environment: academic + government research orgs. We already run limited Azure AI instances behind a “locked-down” research enclave, but I’d like something we completely own and can iterate on quickly.
* Key requirements:
* \~**10–20 T/s** generation on 7-34 B GGUF / vLLM models.
* As few moving parts as possible (I’m the sole admin).
* Ability to pivot—e.g., fine-tune, run vector DB, or shift workloads to heavier models later.
# 💰 Budget
**$20 k – $25 k (hardware only).** I can squeeze a little if the ROI is clear.
# 🧐 Options I’ve considered
|Option|Pros|Cons / Unknowns|
|:-|:-|:-|
|**2× RTX 5090** in a Threadripper box|Obvious horsepower; CUDA ecosystem|QC rumours on 5090 launch units, current street prices way over MSRP|
|**Mac Studio M3 Ultra (128 GB)** × 2|Tight CPU-GPU memory coupling, great dev experience; silent; fits budget|Scale-out limited to 2 nodes (no NVLink); orgs are Microsoft-centric so would diverge from Azure prod path|
|**Tenstorrent Blackwell / Korvo**|Power-efficient; interesting roadmap|Bandwidth looks anemic on paper; uncertain long-term support|
|**Stay in the cloud (Azure NC/H100 V5, etc.)**|Fastest path, plays well with CISO|Outbound comms from secure enclave still a non-starter for some data; ongoing OpEx vs CapEx|
# 🔧 What I’m leaning toward
Two **Mac Studio M3 Ultra** units as a portable “edge cluster” (one primary, one replica / inference-only). They hit \~50-60 T/s on 13B Q4\_K\_M in llama.cpp tests, run ollama/vLLM fine, and keep total spend ≈$16 k—leaving headroom for a 100 GbE switch + fast NVMe array.
# ❓ Questions for the hive mind
1. Is there a better GPU/CPU combo under $25 k that gives **double-precision headroom** (for future fine-tuning) yet stays < 1.0 kW total draw?
2. Experience with **early-run 5090s**—are the QC fears justified or Reddit lore?
3. Any surprisingly good **AI-centric H100 alternatives** I’ve overlooked (MI300X, Grace Hopper eval boards, etc.) that are actually shipping to individuals?
4. Tips for keeping multi-node **inference latency < 200 ms** without NVLink when sharding > 34 B models?
All feedback is welcome—benchmarks, build lists, “here’s what failed for us,” anything.
Thanks in advance!
| 2025-05-01T00:45:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbw67n/hardware_advice_for_a_2025_k_local_multigpu/
|
waynevergoesaway
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbw67n
| false | null |
t3_1kbw67n
|
/r/LocalLLaMA/comments/1kbw67n/hardware_advice_for_a_2025_k_local_multigpu/
| false | false |
self
| 3 | null |
Qwen3 30b-A3B random programing test
| 46 |
Rotating hexagon with bouncing balls inside, in all its glory, but how well does Qwen3 30b-A3B (Q4\_K\_XL) handle unique tasks that are made up and random? I think it does a pretty good job!
Prompt:
>In a single HTML file, I want you to do the following:
>\- In the middle of the page, there is a blue rectangular box that can rotate.
>\- Around the rectangular box, there are small red balls spawning in and flying around randomly.
>\- The rectangular box continuously aims (rotates) towards the closest ball, and shoots yellow projectiles towards it.
>\- If a ball is hit by a projectile, it disappears, and score is added.
It generated a fully functional "game" (not really a game since you don't control anything, the blue rectangular box is automatically aiming and shooting).
I then prompted the following, to make it a little bit more advanced:
>Add this:
>\- Every 5 seconds, a larger, pink ball spawns in.
>\- The blue rotating box always prioritizes the pink balls.
The result:
https://i.redd.it/5phzqk05h2ye1.gif
*(Disclaimer: I just manually changed the background color to be a bit darker, for more clarity)*
Considering that this model is very fast, even on CPU, I'm quite impressed that it one-shotted this small "game".
The rectangle is aiming, shooting, targeting/prioritizing the correct objects and destroying them, just as my prompt said. It also added the score accordingly.
It was thinking for about \~3 minutes and 30 seconds in total, at a speed of \~25 t/s.
| 2025-05-01T01:01:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbwhdk/qwen3_30ba3b_random_programing_test/
|
Admirable-Star7088
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbwhdk
| false | null |
t3_1kbwhdk
|
/r/LocalLLaMA/comments/1kbwhdk/qwen3_30ba3b_random_programing_test/
| false | false | 46 | null |
|
Laboro.co AI Agent automates the job hunt: 1,000 applications submitted by AI
| 0 |
H
| 2025-05-01T01:08:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbwmep/laboroco_ai_agent_automates_the_job_hunt_1000/
|
AIHawk_Founder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbwmep
| false | null |
t3_1kbwmep
|
/r/LocalLLaMA/comments/1kbwmep/laboroco_ai_agent_automates_the_job_hunt_1000/
| false | false |
self
| 0 | null |
Something better than MN-12B-Mag-Mell-R1 for rp?
| 1 |
[removed]
| 2025-05-01T01:18:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbwtay/something_better_than_mn12bmagmellr1_for_rp/
|
Antakux
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbwtay
| false | null |
t3_1kbwtay
|
/r/LocalLLaMA/comments/1kbwtay/something_better_than_mn12bmagmellr1_for_rp/
| false | false |
self
| 1 | null |
Realtime Audio Translation Options
| 4 |
With the Qwen 30B-A3B model being able to run mainly on cpu at decent speeds freeing up the GPU, does anyone know of a reasonably straightforward way to have the PC transcribe and translate a video playing in a browser (ideally, or a player if needed) at a reasonable latency?
I've tried looking into realtime whisper implementations before, but couldn't find anything that worked. Any suggestions appreciated.
| 2025-05-01T01:22:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbwvr5/realtime_audio_translation_options/
|
RabbitEater2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbwvr5
| false | null |
t3_1kbwvr5
|
/r/LocalLLaMA/comments/1kbwvr5/realtime_audio_translation_options/
| false | false |
self
| 4 | null |
More Parameters or More Thinking?
| 18 |
For a long time, **scaling up model size** was the easiest and most reliable way to improve performance. Bigger models meant better internalization of world knowledge, especially helpful on tasks like trivia QA.
More recently, we’re seeing a **second axis of scaling emerge**: increasing *test-time compute*. That means letting models **think longer**, not just *be* larger. Techniques like chain-of-thought prompting and test-time compute enable small models to perform surprisingly well—especially in reasoning-heavy tasks.
We recently explored this trade-off in a case study focusing on **quantitative spatial reasoning**, where the task is to estimate distances between objects in real-world scenes from RGB input and natural language prompts.
We found that performance gains depend heavily on **task context**: spatial reasoning is reasoning-intensive (improves most from thinking) compared to trivia QA, more knowledge-intensive (needs capacity).
Read more: [https://remyxai.substack.com/p/a-tale-of-two-scaling-laws](https://remyxai.substack.com/p/a-tale-of-two-scaling-laws)
| 2025-05-01T01:48:05 |
https://www.reddit.com/gallery/1kbxe41
|
remyxai
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbxe41
| false | null |
t3_1kbxe41
|
/r/LocalLLaMA/comments/1kbxe41/more_parameters_or_more_thinking/
| false | false | 18 | null |
|
Qwen, Granite and Llama: the alliance of bad role models
| 0 |
Llama never even launched its model with a supposed 2T parameters and a supposed 10M context. That, however, was a pure marketing blunder by Meta. I say this with conviction, seeing how glorified Qwen 3 has been: a model as bad as the other Qwens, but one that generated positive buzz purely through hype.
If you see: Qwen, Granite or Llama, investigate, test online, save your SSD.
| 2025-05-01T01:56:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbxjsv/qwen_granite_and_llama_the_alliance_of_bad_role/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbxjsv
| false | null |
t3_1kbxjsv
|
/r/LocalLLaMA/comments/1kbxjsv/qwen_granite_and_llama_the_alliance_of_bad_role/
| false | false |
self
| 0 | null |
Blogger: User Profile: God entered into my body, like a body. my same size. this is holy ghost baptism. my name is Bob Hickman. I love having anal-sexx with men with God iny body
| 1 | 2025-05-01T02:06:39 |
https://www.blogger.com/profile/17341363441235422222
|
Smooth_Milk3779
|
blogger.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbxr0g
| false | null |
t3_1kbxr0g
|
/r/LocalLLaMA/comments/1kbxr0g/blogger_user_profile_god_entered_into_my_body/
| false | false |
default
| 1 | null |
|
Testing chatbots for tone and humor: what's your approach?
| 6 |
I'm building some LLM apps (mostly chatbots and agents) and finding it challenging to test for personality traits beyond basic accuracy, especially around making them funny for users. How do you folks test for consistent tone, appropriate humor, or emotional intelligence in your chatbots?
Manual testing is time-consuming and kind of a pain, so I'm looking for tools or frameworks that have proven effective. Or is everyone relying on intuitive assessments?
| 2025-05-01T02:18:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbxz6r/testing_chatbots_for_tone_and_humor_whats_your/
|
ozymanidas
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbxz6r
| false | null |
t3_1kbxz6r
|
/r/LocalLLaMA/comments/1kbxz6r/testing_chatbots_for_tone_and_humor_whats_your/
| false | false |
self
| 6 | null |
Shuttle-3.5 (Qwen3 32b Finetune)
| 106 |
We are excited to introduce Shuttle-3.5, a fine-tuned version of [Qwen3 32b](https://huggingface.co/Qwen/Qwen3-32B), emulating the writing style of Claude 3 models and thoroughly trained on role-playing data.
| 2025-05-01T02:22:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kby1en/shuttle35_qwen3_32b_finetune/
|
Liutristan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kby1en
| false | null |
t3_1kby1en
|
/r/LocalLLaMA/comments/1kby1en/shuttle35_qwen3_32b_finetune/
| false | false |
self
| 106 |
{'enabled': False, 'images': [{'id': 'm3fcYO60NSJmQlBJ7c9ArPmaJvM7-wWCfDMkF19dozk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=108&crop=smart&auto=webp&s=26e78888434e3e1f9a051731ad1d240475c9274a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=216&crop=smart&auto=webp&s=4702f94235efd775822efdd43ad23d2dccc8c65a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=320&crop=smart&auto=webp&s=5416da411cadb13220b837c383524935889a6d00', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=640&crop=smart&auto=webp&s=1bdae90b21fbac70f94a318a656e41388559af30', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=960&crop=smart&auto=webp&s=3b63239fe48ec81b9e6067fccbc081afc719d3ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?width=1080&crop=smart&auto=webp&s=0f3bf384f9a2df7b409ca084e07c55c945da67fe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Pi3-dVnRWsGB2S4eiK7LI3oxvWJa6swwQ1DADpsOBtU.jpg?auto=webp&s=801b9f24d8fbbeedd35a9513bb206cbf896fe66a', 'width': 1200}, 'variants': {}}]}
|
Help getting started with local model inference (vLLM, llama.cpp) – non-Ollama setup
| 3 |
Hi,
I've seen people mention using tools like vLLM and llama.cpp for faster, true multi-GPU support with models like Qwen 3, and I'm interested in setting something up locally (not through Ollama).
However, I'm a bit lost on where to begin as someone new to this space. I attempted to set up vLLM on Windows, but had little success. The Docker route requires WSL, which has been very buggy and painfully slow for me.
If there's a solid beginner-friendly guide or thread that walks through this setup (especially for Windows users), I’d really appreciate it. Apologies if this has already been answered—my search didn’t turn up anything clear. Happy to delete this post if someone can point me in the right direction.
Thanks in advance
| 2025-05-01T02:53:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbym8v/help_getting_started_with_local_model_inference/
|
World_of_Reddit_21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbym8v
| false | null |
t3_1kbym8v
|
/r/LocalLLaMA/comments/1kbym8v/help_getting_started_with_local_model_inference/
| false | false |
self
| 3 | null |
Is there a way to improve single user throughput?
| 0 |
At the moment I'm on Windows, and the tasks I tend to do have to be sequential because each one needs info from previous tasks to provide suitable context for the next (translation). I use llama.cpp on a 5090 with a Q4 quant of Qwen3 32B and get around 37 tps, and I'm wondering if there's a different inference engine I can use to speed things up without resorting to batched inference?
| 2025-05-01T03:02:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbys98/is_there_a_way_to_improve_single_user_throughput/
|
JustImmunity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbys98
| false | null |
t3_1kbys98
|
/r/LocalLLaMA/comments/1kbys98/is_there_a_way_to_improve_single_user_throughput/
| false | false |
self
| 0 | null |
New training method shows 80% efficiency gain: Recursive KL Divergence Optimization
| 152 | 2025-05-01T03:04:58 |
https://arxiv.org/abs/2504.21707
|
one-escape-left
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbytzk
| false | null |
t3_1kbytzk
|
/r/LocalLLaMA/comments/1kbytzk/new_training_method_shows_80_efficiency_gain/
| false | false |
default
| 152 | null |
|
a little bit disappointed with QWen3 on coding
| 0 |
30B-A3B and 235B-A22B both fail on this.
Prompt:
Write a Python program that shows 20 balls bouncing inside a spinning heptagon:
- All balls have the same radius.
- All balls have a number on it from 1 to 20.
- All balls drop from the heptagon center when starting.
- Colors are: #f8b862, #f6ad49, #f39800, #f08300, #ec6d51, #ee7948, #ed6d3d, #ec6800, #ec6800, #ee7800, #eb6238, #ea5506, #ea5506, #eb6101, #e49e61, #e45e32, #e17b34, #dd7a56, #db8449, #d66a35
- The balls should be affected by gravity and friction, and they must bounce off the rotating walls realistically. There should also be collisions between balls.
- The material of all the balls determines that their impact bounce height will not exceed the radius of the heptagon, but higher than ball radius.
- All balls rotate with friction, the numbers on the ball can be used to indicate the spin of the ball.
- The heptagon is spinning around its center, and the speed of spinning is 360 degrees per 5 seconds.
- The heptagon size should be large enough to contain all the balls.
- Do not use the pygame library; implement collision detection algorithms and collision response etc. by yourself. The following Python libraries are allowed: tkinter, math, numpy, dataclasses, typing, sys.
- All codes should be put in a single Python file.
235B-A22B with thinking enabled generates this (chat.qwen.ai):
https://reddit.com/link/1kbz8wy/video/28asuz0ta3ye1/player
| 2025-05-01T03:28:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbz8wy/a_little_bit_disappointed_with_qwen3_on_coding/
|
foldl-li
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbz8wy
| false | null |
t3_1kbz8wy
|
/r/LocalLLaMA/comments/1kbz8wy/a_little_bit_disappointed_with_qwen3_on_coding/
| false | false |
self
| 0 | null |
Qwen3 looks like the best open source model rn
| 51 |
Skip straight to the benchmarks:
[https://bestcodes.dev/blog/qwen-3-what-you-need-to-know#benchmarks-and-comparisons](https://bestcodes.dev/blog/qwen-3-what-you-need-to-know#benchmarks-and-comparisons)
| 2025-05-01T03:31:18 |
https://bestcodes.dev/blog/qwen-3-what-you-need-to-know
|
best_codes
|
bestcodes.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzafi
| false | null |
t3_1kbzafi
|
/r/LocalLLaMA/comments/1kbzafi/qwen3_looks_like_the_best_open_source_model_rn/
| false | false | 51 |
{'enabled': False, 'images': [{'id': '8UYFmLDNfdU6QbRLu9eUlishqbKzmEOSg-LXYZik28w', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?width=108&crop=smart&auto=webp&s=d7949aeb8faca78fa8699173b414232c6a241404', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?width=216&crop=smart&auto=webp&s=5af62f9465de38afada527b22e460865d8084152', 'width': 216}, {'height': 136, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?width=320&crop=smart&auto=webp&s=113778ea05a45e67770fcc63646d1ed379a608c7', 'width': 320}, {'height': 272, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?width=640&crop=smart&auto=webp&s=816f13a44f0d85ce1c2c573df2697bf797f99e11', 'width': 640}, {'height': 408, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?width=960&crop=smart&auto=webp&s=ac7246aa9487df93e6a265e9be90065923d5171f', 'width': 960}], 'source': {'height': 426, 'url': 'https://external-preview.redd.it/BJTDcc3wZisa9FHhyuJET5Gmo6jW1eGtg4uh2WUEhGA.jpg?auto=webp&s=3a516fba7d03ab25ead7aa311aa64b7edb184446', 'width': 1000}, 'variants': {}}]}
|
|
New/Updated Quants silently released for all Qwen3 models by Unsloth an hour ago.
| 1 |
[removed]
| 2025-05-01T03:34:48 |
NewLeaf2025
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzcjj
| false | null |
t3_1kbzcjj
|
/r/LocalLLaMA/comments/1kbzcjj/newupdated_quants_silently_released_for_all_qwen3/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '_p7s27EnlPrNjRyLKwkHnZvI8y9O9kaLqeJzGR91uZM', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/roekhj6hb3ye1.png?width=108&crop=smart&auto=webp&s=19714f5cfc1b336ec7eb8630f6e9e45a05c65278', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/roekhj6hb3ye1.png?width=216&crop=smart&auto=webp&s=dcc63d33e4abcaf94804fedb6e812a890ffb3431', 'width': 216}, {'height': 352, 'url': 'https://preview.redd.it/roekhj6hb3ye1.png?width=320&crop=smart&auto=webp&s=1fc6661c04a78a5d89d9700d89151c4e2ae537db', 'width': 320}, {'height': 705, 'url': 'https://preview.redd.it/roekhj6hb3ye1.png?width=640&crop=smart&auto=webp&s=28619362e5c9e0a81e301aa2a1d1440f2b73ee8d', 'width': 640}], 'source': {'height': 953, 'url': 'https://preview.redd.it/roekhj6hb3ye1.png?auto=webp&s=072a5ab2d877db4c4da44d2330cd79a65ba87e0a', 'width': 865}, 'variants': {}}]}
|
||
What specs do I need to run LLaMA at home?
| 0 |
I want to use it (and possibly another very small LLM in tandem) to build an experimental AI bot on my local PC. What do I need?
| 2025-05-01T03:42:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbzhdw/what_specs_do_i_need_to_run_llama_at_home/
|
doctordaedalus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzhdw
| false | null |
t3_1kbzhdw
|
/r/LocalLLaMA/comments/1kbzhdw/what_specs_do_i_need_to_run_llama_at_home/
| false | false |
self
| 0 | null |
I rebuilt ≈98% of GPT-4-o3’s behavioral policy via black-box reconstruction — here’s the open blueprint (v0.5-public, CC BY-NC 4.0) [Medium+PDF]
| 1 | 2025-05-01T03:56:33 |
https://medium.com/@cortexos.main/hf-abstract-o3-proxy-v0-5-a-98-behaviour-mirror-of-gpt-4-o3-89d67fdc1f8a
|
Various_Story8026
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzq0x
| false | null |
t3_1kbzq0x
|
/r/LocalLLaMA/comments/1kbzq0x/i_rebuilt_98_of_gpt4o3s_behavioral_policy_via/
| false | false |
default
| 1 | null |
|
Qwen3-14B vs Phi-4-reasoning-plus
| 1 |
[removed]
| 2025-05-01T03:59:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kbzrx4/qwen314b_vs_phi4reasoningplus/
|
Ni_Guh_69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzrx4
| false | null |
t3_1kbzrx4
|
/r/LocalLLaMA/comments/1kbzrx4/qwen314b_vs_phi4reasoningplus/
| false | false |
self
| 1 | null |
no
| 1 | 2025-05-01T03:59:49 |
https://www.notion.so/Cover-Page-Project-Rebirth-1d4572bebc2f8085ad3df47938a1aa1f
|
Various_Story8026
|
notion.so
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzrzu
| false | null |
t3_1kbzrzu
|
/r/LocalLLaMA/comments/1kbzrzu/no/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'prjsPaM0TJBzFK2g_gZt00y3CPDJ-Vnne_qXSP7SGdk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=108&crop=smart&auto=webp&s=8e004bd1fdbb5d592aba72a78655ec0e1727c56b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=216&crop=smart&auto=webp&s=306aadd667f436d6f118e7be1d4e0db349f323ad', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=320&crop=smart&auto=webp&s=da61f561a5fd2d1333e84cd17ab3a318fcb2190b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=640&crop=smart&auto=webp&s=cc0d13b268350ec2673f2f59111ae4a4349f8d0d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=960&crop=smart&auto=webp&s=14967b780256221c640af056a4e8fb1bb2fd7a6a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?width=1080&crop=smart&auto=webp&s=5e289ffeff4e760c9fb3fd7cdd3b063b2f5c3800', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8fmxteKt_j_zT-r2eVVBWbIVOkJJ4NrW5QJ66N9ro3w.jpg?auto=webp&s=4bc62653c17309bc017fc54c651e769c25418d4b', 'width': 1200}, 'variants': {}}]}
|
||
My experience in this subreddit:
| 0 | 2025-05-01T04:04:50 |
Pro-editor-1105
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzvk1
| false | null |
t3_1kbzvk1
|
/r/LocalLLaMA/comments/1kbzvk1/my_experience_in_this_subreddit/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'lSMQZ_GU5S-bXAbMUy5RvHTpSQfLc9gZhli1XREKb3c', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?width=108&crop=smart&auto=webp&s=36ecd7ae1dd43e8e68376fd6717534085e1432ca', 'width': 108}, {'height': 70, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?width=216&crop=smart&auto=webp&s=30bb7188eff5c40fc1c6d6de09d52dfdde04797a', 'width': 216}, {'height': 103, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?width=320&crop=smart&auto=webp&s=87dc2364949dd6e6d7337c40ab43527355ec4418', 'width': 320}, {'height': 207, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?width=640&crop=smart&auto=webp&s=3ccc36d34d55749c43799c638a4b625eee9a054e', 'width': 640}, {'height': 311, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?width=960&crop=smart&auto=webp&s=17aa8dce88c5c82175d5d4e91d86e5d199b5004b', 'width': 960}], 'source': {'height': 328, 'url': 'https://preview.redd.it/uqp0aktch3ye1.png?auto=webp&s=c970c8ababa2c0f868a1daf3604a352a3e35c4a8', 'width': 1010}, 'variants': {}}]}
|
|||
I made JSON schema types for AI vendors, and converter of them for function calling, including OpenAPI.
| 14 |
https://github.com/samchon/openapi
I investigated Swagger/OpenAPI and the AI function calling schema for each AI vendor, defined types, and prepared a transformer that can be converted between them.
The JSON schema definition of AI function calling is different for each AI vendor. This is the same in MCP, so if you want to create a function calling application that can be used universally across all AI vendors, you need a converter like the [`@samchon/openapi`](https://github.com/samchon/openapi) I created.
Also, if you're considering AI function calling against a Swagger/OpenAPI server, my open source library [`@samchon/openapi`](https://github.com/samchon/openapi) would be more helpful than any other library.
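The dialect differences described above can be made concrete with a small, library-agnostic sketch (plain Python, not `@samchon/openapi`'s actual API): converting OpenAPI 3.0's `nullable: true` into the JSON Schema `type`-array form that some function-calling schema dialects expect. The function name and the keyword coverage here are illustrative only; a real converter handles far more keywords.

```python
def openapi_to_json_schema(schema: dict) -> dict:
    """Convert a (simplified) OpenAPI 3.0 schema fragment to plain JSON Schema.

    Only handles the `nullable` keyword and recurses into `properties`/`items`;
    a real converter covers far more of the two dialects.
    """
    out = dict(schema)
    # OpenAPI 3.0: {"type": "string", "nullable": true}
    # JSON Schema: {"type": ["string", "null"]}
    if out.pop("nullable", False) and "type" in out:
        t = out["type"]
        out["type"] = [t, "null"] if isinstance(t, str) else [*t, "null"]
    if "properties" in out:
        out["properties"] = {k: openapi_to_json_schema(v) for k, v in out["properties"].items()}
    if "items" in out:
        out["items"] = openapi_to_json_schema(out["items"])
    return out

converted = openapi_to_json_schema({
    "type": "object",
    "properties": {"name": {"type": "string", "nullable": True}},
})
print(converted["properties"]["name"]["type"])  # ['string', 'null']
```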
| 2025-05-01T04:05:36 |
jhnam88
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kbzw0p
| false | null |
t3_1kbzw0p
|
/r/LocalLLaMA/comments/1kbzw0p/i_made_json_schema_types_for_ai_vendors_and/
| false | false | 14 |
{'enabled': True, 'images': [{'id': 'D0ZGTxn0HxOutsEOAMW0fY48UoJUwGWIyvGPN4h8Z2g', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=108&crop=smart&auto=webp&s=f155a13cc373cb3a29f56f519bd46cb4c532e0ef', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=216&crop=smart&auto=webp&s=2b4c037fb70f1dcc59fb1ade285227e8efcae0c8', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=320&crop=smart&auto=webp&s=2df612568dd8b78c52c71c1ade4c55dd59b88163', 'width': 320}, {'height': 553, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=640&crop=smart&auto=webp&s=ce7c9d1631d234f7cc2b962af6fc604067857d15', 'width': 640}, {'height': 830, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=960&crop=smart&auto=webp&s=5b9c62bb557ca2eaf242c29b046dd3f1d96aa0ec', 'width': 960}, {'height': 933, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?width=1080&crop=smart&auto=webp&s=f73aa776c9694dfc2353c50395e0c24bfc387ae7', 'width': 1080}], 'source': {'height': 991, 'url': 'https://preview.redd.it/vic37o6dg3ye1.png?auto=webp&s=770c05a803a6ed41ae2e5c5b0d7c38e0e1f6a278', 'width': 1146}, 'variants': {}}]}
|
||
Qwen 3 4B is the future, ladies and gentlemen
| 402 | 2025-05-01T04:13:59 |
Pro-editor-1105
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc016i
| false | null |
t3_1kc016i
|
/r/LocalLLaMA/comments/1kc016i/qwen_3_4b_is_the_future_ladies_and_gentlemen/
| false | false | 402 |
{'enabled': True, 'images': [{'id': 'ydDKCxXvGEyxD_Fn145jJxEEhyjmrlivbuhGwSo47RE', 'resolutions': [{'height': 26, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=108&crop=smart&auto=webp&s=bc8b1fc65c4a9d35867b3b7e14f222d85da83189', 'width': 108}, {'height': 53, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=216&crop=smart&auto=webp&s=612c23fd468ec836ed98ca438e4852ae060ad504', 'width': 216}, {'height': 79, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=320&crop=smart&auto=webp&s=48c5b121c6a2c4350129db0abb806c0d193951aa', 'width': 320}, {'height': 158, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=640&crop=smart&auto=webp&s=b8eda43a8fad38e5678490b24666416e01ee74af', 'width': 640}, {'height': 238, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=960&crop=smart&auto=webp&s=5f06ad97ecf573c722c67e1bb1b7aa122aafbdb3', 'width': 960}, {'height': 267, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?width=1080&crop=smart&auto=webp&s=4d1f23c72692453a157d51690d1b3cea994c76ab', 'width': 1080}], 'source': {'height': 425, 'url': 'https://preview.redd.it/2aw947hyi3ye1.png?auto=webp&s=d0352fd04f50b49eb9a6288524cd35059b819cbc', 'width': 1714}, 'variants': {}}]}
|
|||
What’s the coolest/funniest/most intricate thing(s) you’ve built with LLMs? I'm starting a podcast and would love talking to you for an episode!
| 3 |
I’m putting together a no-BS show called “The Coolest Thing You’ve Done with LLMs and GPTs”. Basically, I want to talk to other people who have been experimenting with this stuff for a while now, even before it blew up. I want conversations that are just about the genuinely useful things people are building with LLMs, GPTs, and the like. And casual, too.
Anyone using AI in ways that are really clever, intricate, ridiculously funny, or super helpful... the works. It's all fair game! Reach out if you want to do an episode with me to get this going! Thanks.
| 2025-05-01T04:18:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc03pq/whats_the_coolestfunniestmost_intricate_things/
|
riv3r1andstr3ams
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc03pq
| false | null |
t3_1kc03pq
|
/r/LocalLLaMA/comments/1kc03pq/whats_the_coolestfunniestmost_intricate_things/
| false | false |
self
| 3 | null |
Open Source AI Server May 2025 Update
| 0 | 2025-05-01T04:22:50 |
https://servicestack.net/posts/ai-server-2025-05
|
mythz
|
servicestack.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc06hx
| false | null |
t3_1kc06hx
|
/r/LocalLLaMA/comments/1kc06hx/open_source_ai_server_may_2025_update/
| false | false |
default
| 0 | null |
|
I'm confused with this sub
| 1 |
[removed]
| 2025-05-01T04:34:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc0dc3/im_confused_with_this_sub/
|
Due-Employee4744
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc0dc3
| false | null |
t3_1kc0dc3
|
/r/LocalLLaMA/comments/1kc0dc3/im_confused_with_this_sub/
| false | false |
self
| 1 | null |
EasyWhisperUI – Fast, Open Source, and Free Whisper UI for Windows & macOS
| 75 |
Hey guys, if you're looking for a fast, open source, and completely free UI for Whisper, please consider trying my app **EasyWhisperUI**.
It features full cross platform GPU acceleration:
* **Vulkan** on Windows
* **Metal** on macOS
I've added several new changes recently:
1. **macOS Support** • Full build and runtime support for macOS • Thanks to celerycoloured on GitHub for the contribution (user request)
2. **Batch Processing** • Drag & drop multiple files • Automatically queues and transcribes them one by one (user request)
3. **Major UI Enhancements (Windows)** • Acrylic background for a translucent, modern look • Improved layout and spacing
4. **CPU-Only Toggle Support** • Option to disable GPU acceleration and run purely on CPU (user request)
There are a lot more features, please check the GitHub for more info:
🔗 **GitHub:** [https://github.com/mehtabmahir/easy-whisper-ui](https://github.com/mehtabmahir/easy-whisper-ui)
Let me know what you think or if you have any suggestions!
| 2025-05-01T04:39:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc0gbp/easywhisperui_fast_open_source_and_free_whisper/
|
mehtabmahir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc0gbp
| false | null |
t3_1kc0gbp
|
/r/LocalLLaMA/comments/1kc0gbp/easywhisperui_fast_open_source_and_free_whisper/
| false | false |
self
| 75 |
{'enabled': False, 'images': [{'id': 'edkC_cl7JtDL6tAGDgOzkwaUQqaUEGRdFSM3indNjuQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=108&crop=smart&auto=webp&s=35cb87df6245f9ae1462418ef5c1eaf110e5be5b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=216&crop=smart&auto=webp&s=94f10a91ad2284f62f337482eb5260357b1b21d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=320&crop=smart&auto=webp&s=5c73e9f813c3ddf92b75bc5328a2f4dd8179a016', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=640&crop=smart&auto=webp&s=ffce2021d02c8dcd1269793c88d13929479aebbd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=960&crop=smart&auto=webp&s=044a3d73ba908879f40c8bf166251eb33487081f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?width=1080&crop=smart&auto=webp&s=3130614c42a176f6d0afa4ca929a17fcc65bc17c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TH_Asax-C4yGQxKFkfmKZCUdskf1YDVASECelGE7xpQ.jpg?auto=webp&s=7cec718bb93d18cd98a236d67fa155af18241b0c', 'width': 1200}, 'variants': {}}]}
|
Stop wasting $ on AI monthly subscriptions ! Access ChatGPT+, Gemini, Claude & more. Pay only for use.
| 1 | 2025-05-01T05:03:57 |
https://genai-all.com
|
Suspicious_Aioli6629
|
genai-all.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc0uf9
| false | null |
t3_1kc0uf9
|
/r/LocalLLaMA/comments/1kc0uf9/stop_wasting_on_ai_monthly_subscriptions_access/
| false | false |
default
| 1 | null |
|
Setting up Llama 3.2 inference on low-resource hardware
| 2 |
After successfully fine-tuning Llama 3.2, I'm now tackling the inference implementation.
I'm working with a 16GB RAM laptop and need to create a pipeline that integrates Grobid, SciBERT, FAISS, and Llama 3.2 (1B-3B parameter version). My main question is: what's the most efficient way to run Llama inference on a CPU-only machine? I need to feed FAISS outputs into Llama and display results through a web UI.
Additionally, can my current hardware handle running all these components simultaneously, or should I consider renting a GPU-equipped machine instead?
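One way to picture the glue between FAISS and Llama on a CPU-only machine is the minimal stdlib-only sketch below, where plain cosine ranking stands in for FAISS and the final LLM call is only indicated in a comment (`llama_cpp.Llama` is the usual llama-cpp-python entry point, but everything here is a sketch under assumed names, not a tested pipeline):

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, doc_vecs, docs, k=2):
    # FAISS (e.g. IndexFlatIP) does this at scale; here it's plain cosine ranking.
    ranked = sorted(range(len(docs)), key=lambda i: cosine(query_vec, doc_vecs[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

def build_prompt(question, passages):
    # Assemble retrieved passages into a grounded prompt for the LLM.
    context = "\n\n".join(passages)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "Grobid extracts structure from PDFs.",
    "SciBERT embeds scientific text.",
    "FAISS searches dense vectors.",
]
doc_vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]  # toy 2-d embeddings, not SciBERT output

prompt = build_prompt("What does FAISS do?", retrieve([1.0, 0.0], doc_vecs, docs))
# The prompt would then go to a CPU inference engine, e.g. (hypothetical paths/params):
#   llm = llama_cpp.Llama(model_path="llama-3.2-3b.Q4_K_M.gguf", n_threads=8)
#   print(llm(prompt, max_tokens=256))
print(prompt)
```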
Thank you all.
| 2025-05-01T05:05:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc0vj8/setting_up_llama_32_inference_on_lowresource/
|
ChimSau19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc0vj8
| false | null |
t3_1kc0vj8
|
/r/LocalLLaMA/comments/1kc0vj8/setting_up_llama_32_inference_on_lowresource/
| false | false |
self
| 2 | null |
We crossed the line
| 890 |
For the first time, QWEN3 32B solved all my coding problems that I usually rely on ChatGPT's or Grok 3's best thinking models for. It's powerful enough for me to disconnect from the internet and be fully self-sufficient. We've crossed the line where we can have a model at home that empowers us to build anything we want.
Thank you so, so very much, QWEN team!
| 2025-05-01T05:14:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc10hz/we_crossed_the_line/
|
DrVonSinistro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc10hz
| false | null |
t3_1kc10hz
|
/r/LocalLLaMA/comments/1kc10hz/we_crossed_the_line/
| false | false |
self
| 890 | null |
Stop wasting $ on AI Subscriptions, start using https://genai-all.com site to access all premium models from OpenAI, Anthropic Claude, Gemini, Perplexity, Grok..etc. Pay only for what you use, start with free $2 credit.
| 0 | 2025-05-01T05:22:08 |
https://genai-all.com
|
Flaky_Coach
|
genai-all.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc14n7
| false | null |
t3_1kc14n7
|
/r/LocalLLaMA/comments/1kc14n7/stop_wasting_on_ai_subscriptions_start_using/
| false | false |
default
| 0 | null |
|
A model that knows about philosophy... and works on my PC?
| 3 |
I usually read philosophy books, and I've noticed that, for example, Deepseek R1 is quite good, obviously with limitations, but... quite good for concepts.
xxxxxxx@fedora:~$ free -h
total used free shared buff/cache available
Mem: 30Gi 4,0Gi 23Gi 90Mi 3,8Gi
Model: RTX 4060 Ti
Memory: 8 GB
CUDA: Enabled (version 12.8).
Considering the technical limitations of my PC. What LLM could I use? Are there any that are geared toward this type of topic?
(e.g., authors like Anselm Jappe, which is what I've been reading lately)
| 2025-05-01T05:25:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc16fj/a_model_that_knows_about_philosophy_and_works_on/
|
9acca9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc16fj
| false | null |
t3_1kc16fj
|
/r/LocalLLaMA/comments/1kc16fj/a_model_that_knows_about_philosophy_and_works_on/
| false | false |
self
| 3 | null |
Will Sam Altman Drop an Open-Source model This Week?
| 0 |
I guess yes.
| 2025-05-01T05:28:15 |
PumpkinNarrow6339
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc17zh
| false | null |
t3_1kc17zh
|
/r/LocalLLaMA/comments/1kc17zh/will_sam_altman_drop_an_opensource_model_this_week/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'tmJJlx7EFuCcADEerPsswbSkaqGzeUA-iE4HOwvPgEU', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?width=108&crop=smart&auto=webp&s=f64be57d3ca42e04c3f55771c0a230a370eb9735', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?width=216&crop=smart&auto=webp&s=5c9e80c9c92efbb4ec28693ee430c008d499afab', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?width=320&crop=smart&auto=webp&s=06c103783a5869e7ccd8f8937d84a6b53651265e', 'width': 320}, {'height': 420, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?width=640&crop=smart&auto=webp&s=c37614d74f6a8ee1c02ef6ee1dfccdfecb21eb2f', 'width': 640}, {'height': 630, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?width=960&crop=smart&auto=webp&s=091827f63b56b51c67e95b8601bb8c751f5c6b5e', 'width': 960}], 'source': {'height': 695, 'url': 'https://preview.redd.it/q33ika09w3ye1.png?auto=webp&s=600a00fa33a854a16cb7e9da9c7181bdf3dacfbc', 'width': 1058}, 'variants': {}}]}
|
||
what does AIO mean?
| 1 |
[removed]
| 2025-05-01T05:32:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc1ad6/what_does_aio_mean/
|
Novacane512
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc1ad6
| false | null |
t3_1kc1ad6
|
/r/LocalLLaMA/comments/1kc1ad6/what_does_aio_mean/
| false | false |
self
| 1 | null |
Model load times?
| 4 |
How long does it take to load some of your models from disk? Qwen3:235b is my largest model so far, and it clocks in at 2 minutes and 23 seconds to load into memory from a SAS SSD array. Wondering if this is on the faster or slower end compared with other setups. Another model is 70B DeepSeek, which takes 45 seconds on my system. Curious what y'all get.
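As a rough sanity check on numbers like these, load time is mostly bounded by file size over sequential read bandwidth. The sketch below uses illustrative figures (a ~130 GB quantized file, a ~1 GB/s array), not measured ones:

```python
def estimated_load_seconds(model_size_gb: float, disk_gbps: float) -> float:
    """Rough lower bound: load time ~= model file size / sequential read bandwidth.

    Ignores mmap lazy loading, page-cache hits, and any dequantization work,
    so real load times can be faster (warm cache) or slower (random I/O).
    """
    return model_size_gb / disk_gbps

# Illustrative: a ~130 GB quantized 235B model over a ~1 GB/s SAS SSD array.
print(round(estimated_load_seconds(130, 1.0)))  # 130
```

That lands in the same ballpark as a 2m23s (143 s) load, which suggests the bottleneck is disk bandwidth rather than CPU work.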
| 2025-05-01T05:41:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc1fbp/model_load_times/
|
zachsandberg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc1fbp
| false | null |
t3_1kc1fbp
|
/r/LocalLLaMA/comments/1kc1fbp/model_load_times/
| false | false |
self
| 4 | null |
move 37 energy, deepseek prover v2
| 44 |
https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/DeepSeek_Prover_V2.pdf
| 2025-05-01T05:52:49 |
o5mini
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc1l53
| false | null |
t3_1kc1l53
|
/r/LocalLLaMA/comments/1kc1l53/move_37_energy_deepseek_prover_v2/
| false | false | 44 |
{'enabled': True, 'images': [{'id': '39Wc5-EKNkeYZsc_uUblN4_x4M4NhX0oJsONHW9xiIc', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=108&crop=smart&auto=webp&s=95d74c3ea2f1a4d6c45013cefd10a91a78d32dfa', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=216&crop=smart&auto=webp&s=f721e7fb36967bead22d638e4c9e7d09502a2efb', 'width': 216}, {'height': 112, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=320&crop=smart&auto=webp&s=c3bcd897f4c72aeadab47667f1da606506edb471', 'width': 320}, {'height': 224, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=640&crop=smart&auto=webp&s=ee552ea6c411ff3966e6a84742b5ed41fd3966ab', 'width': 640}, {'height': 337, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=960&crop=smart&auto=webp&s=f713548c07e90b240361866d2ec80ab351cddfe5', 'width': 960}, {'height': 379, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?width=1080&crop=smart&auto=webp&s=32ecd4df6778211549014a13e8f5c50d8dc1108f', 'width': 1080}], 'source': {'height': 644, 'url': 'https://preview.redd.it/8fqy0a6n04ye1.jpeg?auto=webp&s=fb63a244b88f92437e0d70e18fd6c9e46c041e27', 'width': 1832}, 'variants': {}}]}
|
||
M3 ultra with 512 GB is worth to buy for running local "Wise" AI?
| 4 |
Is there a point in having a Mac with so much RAM? I plan to run local AI, but I don't know what level of performance I can expect.
| 2025-05-01T06:17:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc1yb4/m3_ultra_with_512_gb_is_worth_to_buy_for_running/
|
CacheConqueror
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc1yb4
| false | null |
t3_1kc1yb4
|
/r/LocalLLaMA/comments/1kc1yb4/m3_ultra_with_512_gb_is_worth_to_buy_for_running/
| false | false |
self
| 4 | null |
Qwen3 has dealt with the hallucination problem in some way.
| 96 |
Below is my standard memory test involving famous French and German poems that I always perform whenever a new model is released. When tested on the Qwen3 235B model, as expected, it failed to memorize them.
According to my estimation, memorizing famous poetry written in non-English European languages generally requires a mid-to-large-sized model of roughly 600B parameters or more. Moreover, Qwen3 235B, which has only about a third of that minimum, is also a Mixture-of-Experts (MoE) model that doesn’t use all of its parameters at once, and thus has relatively weaker knowledge than a dense 235B model. Therefore, I initially assumed this model would hallucinate and fabricate the poem. Surprisingly, however, it didn’t invent or speak any made-up content at all.
Instead, the model speculated that the poem I requested might be featured in Baudelaire’s *Les Fleurs du mal*, but determined that it could not recite it because it lacked the exact text. It only provided related information clearly and transparently. LLM models distinguishing their own knowledge boundaries like this is usually a difficult feat, even among other cutting-edge, large-scale, private models. I suspect this capability is likely attributable to the inference engine commonly employed across Alibaba’s language models.
Below is the answer provided by the Qwen3 235B model when asked to recite a Baudelaire poem:
* “Le Crépuscule du soir” (“Evening Twilight”) is not one of the most famous poems included in Charles Baudelaire’s *Les Fleurs du mal*, and there may be some confusion regarding its exact title or content. Baudelaire wrote a poem titled “Le Crépuscule du matin” (“Morning Twilight”) that appears in his collection, yet there isn’t a widely recognized poem titled specifically “Le Crépuscule du soir” in the standard edition of his works.
* However, if you were referring to another poem containing evening imagery, or would like me to recite other well-known poems from *Les Fleurs du mal*, such as “L’Albatros” (“The Albatross”), “Au Lecteur” (“To the Reader”), “Spleen”, or “La Mort du Pauvre” (“The Death of the Poor”), I would gladly recite these for you.
* If you recall the first few lines or have more detailed information, I can assist you further by verifying or translating the exact poem, or by providing interpretations of his works. Shall I instead recite one of Baudelaire’s more famous poems?
Here is Qwen3 235B’s reply to a request about Friedrich Schiller’s poem “Handschuh” (“The Glove”):
* The poem titled “Handschuh” (“The Glove”) is not clearly listed among the major works of Friedrich Schiller. Schiller is a significant figure in German literary history, best known for renowned poems such as “Ode an die Freude” (“Ode to Joy”), “Das Lied von der Glocke” (“Song of the Bell”), and dramas such as “Die Jungfrau von Orleans” (“The Maid of Orleans”) and “Maria Stuart” (“Mary Stuart”). However, a poem called “Handschuh” does not appear commonly recorded or generally acknowledged in relation to Schiller.
* You might wish to double-check if the title was remembered incorrectly or if it could perhaps be confused with another author’s work. It might, for example, have been a piece by Johann Wolfgang von Goethe or another contemporary poet. Alternatively, it might be an excerpt, a passage, or a scene taken from Schiller’s essays, epic poems, or dramas.
* If you require information, interpretation, or excerpts of Schiller’s major poems or other famous works, such as “Ode to Joy,” speeches, or dramatic dialogues, please let me know. I’d be happy to help.
| 2025-05-01T06:19:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc1zgb/qwen3_has_dealt_with_the_hallucination_problem_in/
|
rockethumanities
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc1zgb
| false | null |
t3_1kc1zgb
|
/r/LocalLLaMA/comments/1kc1zgb/qwen3_has_dealt_with_the_hallucination_problem_in/
| false | false |
self
| 96 | null |
Foundation-Sec-8B Released (Cisco's Security-Focused Base Model)
| 1 |
[removed]
| 2025-05-01T06:44:52 |
https://huggingface.co/fdtn-ai/Foundation-Sec-8B
|
Acceptable_Zombie136
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc2ccx
| false | null |
t3_1kc2ccx
|
/r/LocalLLaMA/comments/1kc2ccx/foundationsec8b_released_ciscos_securityfocused/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'xQsYt0pu7lZz71pKGFGvvuybG7Pe5rj5IENzv2SNzzc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=108&crop=smart&auto=webp&s=957350135bda5682b7fc2ffcb58215bbbf10d359', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=216&crop=smart&auto=webp&s=e3792bf1d402240ef50b47ca7f7f0de09a262056', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=320&crop=smart&auto=webp&s=dafb2b5addf524d3271893ee07e0929e9a4b9236', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=640&crop=smart&auto=webp&s=59b54b351bd9bf54e04a3f9b5e9aea16698e43f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=960&crop=smart&auto=webp&s=e3ea1063032765152eab852452414ee9e6576cf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=1080&crop=smart&auto=webp&s=8d94a77915abea07999d2f9ddf3ddc5d3f9ac73d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?auto=webp&s=892c6974a34702d6da9269bf6a68f972bdf1187d', 'width': 1200}, 'variants': {}}]}
|
|
Amoral Qwen 3
| 2 |
Model: [soob3123/amoral-qwen3-14B · Hugging Face](https://huggingface.co/soob3123/amoral-qwen3-14B)
GGUFs: [soob3123/amoral-qwen3-14B-GGUF · Hugging Face](https://huggingface.co/soob3123/amoral-qwen3-14B-GGUF)
Still working on 4B, 8B and 32B... maybe they'll be done tomorrow; Qwen 3 models are a little unpredictable compared to the Gemma 3 finetunes for some reason.
Do you all think it would be worth it to do an amoral finetune for the really small models? 1.7B and 0.6B?
Hope yall like it!
| 2025-05-01T06:54:47 |
Reader3123
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc2hap
| false | null |
t3_1kc2hap
|
/r/LocalLLaMA/comments/1kc2hap/amoral_qwen_3/
| false | false | 2 |
{'enabled': True, 'images': [{'id': 'Bfo62xmFJJbVtdLU__TqbaUxb7ZavsxsjmMoKed244M', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/feovgy62b4ye1.png?width=108&crop=smart&auto=webp&s=046516bfde68fe67831de93cf7dc50c9b7902de6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/feovgy62b4ye1.png?width=216&crop=smart&auto=webp&s=5533ca93238ab3f8c43dce72a4d858cb5d714262', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/feovgy62b4ye1.png?width=320&crop=smart&auto=webp&s=04d883690c988f1db012ebdd9e4bef12e1acba2e', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/feovgy62b4ye1.png?width=640&crop=smart&auto=webp&s=e95a0fd2734e18999677fa55fa0d3e13a0d5843a', 'width': 640}], 'source': {'height': 640, 'url': 'https://preview.redd.it/feovgy62b4ye1.png?auto=webp&s=618de5541b0b5c308fb38ebf76885c24588972b2', 'width': 640}, 'variants': {}}]}
|
||
Phi-4-mini-reasoning 3.8B
| 62 |
| Model | AIME | MATH-500 | GPQA Diamond |
|------------------------------------|-------|----------|--------------|
| o1-mini* | 63.6 | 90.0 | 60.0 |
| DeepSeek-R1-Distill-Qwen-7B | 53.3 | 91.4 | 49.5 |
| DeepSeek-R1-Distill-Llama-8B | 43.3 | 86.9 | 47.3 |
| Bespoke-Stratos-7B* | 20.0 | 82.0 | 37.8 |
| OpenThinker-7B* | 31.3 | 83.0 | 42.4 |
| Llama-3.2-3B-Instruct | 6.7 | 44.4 | 25.3 |
| Phi-4-Mini (base model, 3.8B) | 10.0 | 71.8 | 36.9 |
|**Phi-4-mini-reasoning (3.8B)** | **57.5** | **94.6** | **52.0** |
[https://huggingface.co/microsoft/Phi-4-mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning)
| 2025-05-01T07:07:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc2o97/phi4minireasoning_38b/
|
magnus-m
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc2o97
| false | null |
t3_1kc2o97
|
/r/LocalLLaMA/comments/1kc2o97/phi4minireasoning_38b/
| false | false |
self
| 62 |
{'enabled': False, 'images': [{'id': 'aFwjlxEUSA5Rv1F88SshZvk3uB0dCaBNTVKkobllcsM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=108&crop=smart&auto=webp&s=4588f0773e785f6ae58fac2e15f74cfa0c7e544f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=216&crop=smart&auto=webp&s=c2389caed6686a776dadf94db30cae96d10a8d99', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=320&crop=smart&auto=webp&s=8bfcdd0b02a148805ef87af18669c01c5692db02', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=640&crop=smart&auto=webp&s=d284a558eda81316ba8a50a9f2ef7f93e5c3e073', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=960&crop=smart&auto=webp&s=f6497f8cd05b5b2c400c4ddbc254e5a2f0f8afbe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?width=1080&crop=smart&auto=webp&s=397c24a868e446c5fc1c4cc2b4f2e81aa53ba37b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yvwSjXCQBx9kEstIykyzUlojNud0JPM85uMgOfqIL98.jpg?auto=webp&s=abf5fe989ebc506098b55f41aee56b45c03010b4', 'width': 1200}, 'variants': {}}]}
|
Qwen3 on Hallucination Leaderboard
| 47 |
[https://github.com/vectara/hallucination-leaderboard](https://github.com/vectara/hallucination-leaderboard)
>**Qwen3-0.6B, 1.7B, 4B, 8B, 14B, 32B** are accessed via Hugging Face's checkpoints with `enable_thinking=False`
https://preview.redd.it/kaofid15z3ye1.png?width=835&format=png&auto=webp&s=f83449a92f4c9f60782b954a7f62cdeaafc7c34c
https://preview.redd.it/zu9zkwikd4ye1.png?width=1421&format=png&auto=webp&s=875c54e57543d8427ad7003addfd80954e9b427a
| 2025-05-01T07:07:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc2oag/qwen3_on_hallucination_leaderboard/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc2oag
| false | null |
t3_1kc2oag
|
/r/LocalLLaMA/comments/1kc2oag/qwen3_on_hallucination_leaderboard/
| false | false | 47 |
{'enabled': False, 'images': [{'id': 'mV9m1WVNfBGpkhjD0OaMd6c1Z0yiloyqvsV7MIkD6sA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=108&crop=smart&auto=webp&s=6d69944d949aaed1767b7b64455aeac28f8521c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=216&crop=smart&auto=webp&s=f2b591bf030f115b270caf5058442be231a3bd9c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=320&crop=smart&auto=webp&s=44525f1d9bb73381aa579ebf35d810e16024b131', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=640&crop=smart&auto=webp&s=1b30efd1d8f043842a95084bba8291aed9e8ac33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=960&crop=smart&auto=webp&s=084d1818d994a7ac7556a7cbdda594f99d48696e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?width=1080&crop=smart&auto=webp&s=7e73924bac889fb0b0474767e476ee10b80487a6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H3IKQNnLMdUNogXn2UuvXz7hC9c0EuTPj6r7HC9RE5s.jpg?auto=webp&s=dc831105888d5ca5d5421422bdec09a4cefb8c8a', 'width': 1200}, 'variants': {}}]}
|
|
Which is better Qwen 3 4b with thinking or Qwen 3 8B without thinking?
| 7 |
I haven't found comparisons between thinking and non-thinking performance. It makes me wonder how performance changes with compute when comparing across sizes.
| 2025-05-01T07:08:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc2oni/which_is_better_qwen_3_4b_with_thinking_or_qwen_3/
|
buildmine10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc2oni
| false | null |
t3_1kc2oni
|
/r/LocalLLaMA/comments/1kc2oni/which_is_better_qwen_3_4b_with_thinking_or_qwen_3/
| false | false |
self
| 7 | null |
[HELP] LM Studio server is 2x faster than Llama.cpp server for Orpheus TTS streaming using the same model. Why?
| 1 |
[removed]
| 2025-05-01T07:55:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3bir/help_lm_studio_server_is_2x_faster_than_llamacpp/
|
NOTTHEKUNAL
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3bir
| false | null |
t3_1kc3bir
|
/r/LocalLLaMA/comments/1kc3bir/help_lm_studio_server_is_2x_faster_than_llamacpp/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
Best Hardware for Qwen3-30B-A3B CPU Inference?
| 1 |
[removed]
| 2025-05-01T07:57:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3cf7/best_hardware_for_qwen330ba3b_cpu_inference/
|
ColdImplement1319
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3cf7
| false | null |
t3_1kc3cf7
|
/r/LocalLLaMA/comments/1kc3cf7/best_hardware_for_qwen330ba3b_cpu_inference/
| false | false |
self
| 1 | null |
Any research/methods to train models to avoid slop phrasings and repetitions?
| 1 |
[removed]
| 2025-05-01T08:09:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3icr/any_researchmethods_to_train_models_to_avoid_slop/
|
LagOps91
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3icr
| false | null |
t3_1kc3icr
|
/r/LocalLLaMA/comments/1kc3icr/any_researchmethods_to_train_models_to_avoid_slop/
| false | false |
self
| 1 | null |
Are there any known/effective ways to post-train/finetune models to reduce slop and repetitions?
| 1 |
[removed]
| 2025-05-01T08:11:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3j6q/are_there_any_knowneffective_ways_to/
|
LagOps91
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3j6q
| false | null |
t3_1kc3j6q
|
/r/LocalLLaMA/comments/1kc3j6q/are_there_any_knowneffective_ways_to/
| false | false |
self
| 1 | null |
I'm looking to hire someone who can teach some college professors basic AI.
| 6 |
I have made a syllabus framework and we have a budget to pay you. Can anyone help us learn AI so we can teach our students?
| 2025-05-01T08:11:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3jby/im_looking_to_hire_someone_who_can_teach_some/
|
sanjeev-v
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3jby
| false | null |
t3_1kc3jby
|
/r/LocalLLaMA/comments/1kc3jby/im_looking_to_hire_someone_who_can_teach_some/
| false | false |
self
| 6 | null |
Ways to post-train/finetune models to avoid slop/repetitions?
| 1 |
[removed]
| 2025-05-01T08:13:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3ki4/ways_to_posttrainfinetune_models_to_avoid/
|
LagOps91
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3ki4
| false | null |
t3_1kc3ki4
|
/r/LocalLLaMA/comments/1kc3ki4/ways_to_posttrainfinetune_models_to_avoid/
| false | false |
self
| 1 | null |
Large Language Models with One Training Example
| 5 |
Paper: [https://www.alphaxiv.org/abs/2504.20571](https://www.alphaxiv.org/abs/2504.20571)
Code: [https://github.com/ypwang61/One-Shot-RLVR](https://github.com/ypwang61/One-Shot-RLVR)
We show that reinforcement learning with verifiable reward using one training example (1-shot RLVR) is effective in incentivizing the mathematical reasoning capabilities of large language models (LLMs). Applying RLVR to the base model Qwen2.5-Math-1.5B, we identify a single example that elevates model performance on MATH500 from 36.0% to 73.6%, and improves the average performance across six common mathematical reasoning benchmarks from 17.6% to 35.7%. This result matches the performance obtained using the 1.2k DeepScaleR subset (MATH500: 73.6%, average: 35.9%), which includes the aforementioned example. Furthermore, RLVR with only two examples even slightly exceeds these results (MATH500: 74.8%, average: 36.6%). Similar substantial improvements are observed across various models (Qwen2.5-Math-7B, Llama3.2-3B-Instruct, DeepSeek-R1-Distill-Qwen-1.5B), RL algorithms (GRPO and PPO), and different math examples (many of which yield approximately 30% or greater improvement on MATH500 when employed as a single training example). In addition, we identify some interesting phenomena during 1-shot RLVR, including cross-domain generalization, increased frequency of self-reflection, and sustained test performance improvement even after the training accuracy has saturated, a phenomenon we term post-saturation generalization. Moreover, we verify that the effectiveness of 1-shot RLVR primarily arises from the policy gradient loss, distinguishing it from the "grokking" phenomenon. We also show the critical role of promoting exploration (e.g., by incorporating entropy loss with an appropriate coefficient) in 1-shot RLVR training. As a bonus, we observe that applying entropy loss alone, without any outcome reward, significantly enhances Qwen2.5-Math-1.5B’s performance on MATH500 by 27.4%. These findings can inspire future work on RLVR data efficiency and encourage a re-examination of both recent progress and the underlying mechanisms in RLVR.
| 2025-05-01T08:19:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3n1z/large_language_models_with_one_training_example/
|
Fearless-Elephant-81
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3n1z
| false | null |
t3_1kc3n1z
|
/r/LocalLLaMA/comments/1kc3n1z/large_language_models_with_one_training_example/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'm1HbKlh-FDbMeh47_YA1L03sKBKK8UNv__T-MC3N2M8', 'resolutions': [{'height': 139, 'url': 'https://external-preview.redd.it/EcgAe1pOPZVwvvK0bBMV6K7_l-D8CEGXL8vW-9jcQnI.jpg?width=108&crop=smart&auto=webp&s=84d14af943fbe7b1382ecce0611a994749acf818', 'width': 108}, {'height': 279, 'url': 'https://external-preview.redd.it/EcgAe1pOPZVwvvK0bBMV6K7_l-D8CEGXL8vW-9jcQnI.jpg?width=216&crop=smart&auto=webp&s=26469ef56ae466e6c7ea054479ebe105dea166ad', 'width': 216}, {'height': 414, 'url': 'https://external-preview.redd.it/EcgAe1pOPZVwvvK0bBMV6K7_l-D8CEGXL8vW-9jcQnI.jpg?width=320&crop=smart&auto=webp&s=bca1c1927c44acdf95f498113ee4e48534466ae4', 'width': 320}, {'height': 828, 'url': 'https://external-preview.redd.it/EcgAe1pOPZVwvvK0bBMV6K7_l-D8CEGXL8vW-9jcQnI.jpg?width=640&crop=smart&auto=webp&s=ef5e2e061da98dea42dddca362972cced89d7a64', 'width': 640}], 'source': {'height': 1056, 'url': 'https://external-preview.redd.it/EcgAe1pOPZVwvvK0bBMV6K7_l-D8CEGXL8vW-9jcQnI.jpg?auto=webp&s=a12bce64d450d115259faf610c3711ef7052a850', 'width': 816}, 'variants': {}}]}
|
Looking for a model I can use to generate voices (anime-style or otherwise) that runs on my 16 GB RAM laptop with no graphics card
| 0 |
There are models out there, but I don't know how to set one up; I'm looking for a good one.
Is there a Chinese model for this — for example, does Qwen have any model of that type?
| 2025-05-01T08:34:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc3uai/hey_im_looking_for_a_model_which_i_can_use_to/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc3uai
| false | null |
t3_1kc3uai
|
/r/LocalLLaMA/comments/1kc3uai/hey_im_looking_for_a_model_which_i_can_use_to/
| false | false |
self
| 0 | null |
How good is llama at tool calling?
| 1 |
[removed]
| 2025-05-01T09:16:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc4el2/how_good_is_llama_at_tool_calling/
|
virtualdweller
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc4el2
| false | null |
t3_1kc4el2
|
/r/LocalLLaMA/comments/1kc4el2/how_good_is_llama_at_tool_calling/
| false | false |
self
| 1 | null |
Seeking help for laptop setup
| 1 |
Hi,
I've recently created an agentic RAG system for automatic document creation and have been using the Gemma3-12B-Q4 model on Ollama with a required context limit of 20k. This has been running as expected on my personal desktop, but I now have to work with confidential files from work and am therefore restricted to a work laptop.
This computer has an Nvidia A1000 with 4 GB VRAM and an Intel 12600HX (12 cores, 16 hyperthreads) with 32 GB RAM, and I'm afraid I can't run the same model consistently on the GPU.
So my question is: could someone give me tips on how to best utilize this hardware, i.e. run on the CPU alone or split between CPU and GPU? I would prefer that exact model, since it's the one I've developed prompts for, but Qwen3 could potentially be a replacement if that is more feasible.
Thanks in advance!
| 2025-05-01T09:28:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc4k7o/seeking_help_for_laptop_setup/
|
Caputperson
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc4k7o
| false | null |
t3_1kc4k7o
|
/r/LocalLLaMA/comments/1kc4k7o/seeking_help_for_laptop_setup/
| false | false |
self
| 1 | null |
Best LLM Inference engine for today?
| 26 |
Hello! I want to migrate from Ollama and am looking for a new engine for my assistant. The main requirement is that it be as fast as possible. So the question is: which LLM inference engine are you using in your workflow?
| 2025-05-01T09:29:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc4kv2/best_llm_inference_engine_for_today/
|
Nasa1423
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc4kv2
| false | null |
t3_1kc4kv2
|
/r/LocalLLaMA/comments/1kc4kv2/best_llm_inference_engine_for_today/
| false | false |
self
| 26 | null |
Ricky chat
| 1 |
[removed]
| 2025-05-01T09:56:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc4y8z/ricky_chat/
|
ReasonableCollege815
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc4y8z
| false | null |
t3_1kc4y8z
|
/r/LocalLLaMA/comments/1kc4y8z/ricky_chat/
| false | false |
self
| 1 | null |
Looking for advice on my next computer for cline + localllm
| 1 |
[removed]
| 2025-05-01T10:08:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc55ap/looking_for_advice_on_my_next_computer_for_cline/
|
endyjasmi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc55ap
| false | null |
t3_1kc55ap
|
/r/LocalLLaMA/comments/1kc55ap/looking_for_advice_on_my_next_computer_for_cline/
| false | false |
self
| 1 | null |
qwen3:30b-a3b is sure that the code is now robust and user-friendly
| 1 | 2025-05-01T10:12:05 |
CodingKiwi_
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc572g
| false | null |
t3_1kc572g
|
/r/LocalLLaMA/comments/1kc572g/gwen330ba3b_is_sure_that_the_code_is_now_robust/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '2rCAF6dsDK_qmifpel2nwAt-SmHSpc3PkQ1cwMm9GeA', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=108&crop=smart&format=png8&s=aa6b981d8d812bdf2c4757689e5cd84fee784ded', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=216&crop=smart&format=png8&s=83bf0af1438347cf135b9e83a7a93db1c2628add', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=320&crop=smart&format=png8&s=ec70443e14b8b215ea2d912eb3b052052736bc1c', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=640&crop=smart&format=png8&s=1ca1a9f64e733fc43f8108642b021e3a8efb6294', 'width': 640}, {'height': 900, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=960&crop=smart&format=png8&s=7bf06e676b56a9b7ed281f0fcb5617dad10d2327', 'width': 960}, {'height': 1013, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=1080&crop=smart&format=png8&s=0b37cf9cd930c2de8fbe64c8ec132f85ec7ae1f3', 'width': 1080}], 'source': {'height': 2305, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?format=png8&s=562196abaf3570ebb1d47e8a763671a088c0ff49', 'width': 2457}, 'variants': {'gif': {'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=108&crop=smart&s=cabe94f846d253f4af0d859ef9ee995b7da9944d', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=216&crop=smart&s=c006cdaff0d995d470cdae814464a3148feba43d', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=320&crop=smart&s=0d92e684db291301e35a6f570c1dbd8875d28b6e', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=640&crop=smart&s=71be913f3486ac3eeb5f54ac69121a33201bbc0e', 'width': 640}, {'height': 900, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=960&crop=smart&s=b45c8473da3fb4732eefef6ff76b390716bc587b', 'width': 960}, {'height': 1013, 
'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=1080&crop=smart&s=5e33f668c7575187a28299031eae1d9651c0f9e9', 'width': 1080}], 'source': {'height': 2305, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?s=1c8f3d04cbd248477d3302ebb33c25b33ee63530', 'width': 2457}}, 'mp4': {'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=108&format=mp4&s=638a5ce187f8b7f7d6503a6db8de19973f6dacfb', 'width': 108}, {'height': 202, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=216&format=mp4&s=197d078c6b11e7d83bbcf93a1b4a8cc806e3f03a', 'width': 216}, {'height': 300, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=320&format=mp4&s=5a6e85498ec9cd60da2a1e6bdedcacd0290d7ffc', 'width': 320}, {'height': 600, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=640&format=mp4&s=00fb50bdc33a3979f73a7f3639c99271bdc951e6', 'width': 640}, {'height': 900, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=960&format=mp4&s=1b81207367c9b0c50eebf4fc9cf80fcaee6b9685', 'width': 960}, {'height': 1013, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?width=1080&format=mp4&s=bfb4f960cba77ed03c86d01148e45663d0a13fc4', 'width': 1080}], 'source': {'height': 2305, 'url': 'https://preview.redd.it/t3hsjt3ua5ye1.gif?format=mp4&s=c9941ec3262c6caad9e06e4106de5243d26ffdec', 'width': 2457}}}}]}
|
|||
Best local ai model for text generation in non english?
| 3 |
How do you guys handle text generation for non-English languages?
Gemma 3 (4B/12B/27B) seems to be the best for my European language.
| 2025-05-01T10:16:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kc598u/best_local_ai_model_for_text_generation_in_non/
|
Sea-Replacement7541
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc598u
| false | null |
t3_1kc598u
|
/r/LocalLLaMA/comments/1kc598u/best_local_ai_model_for_text_generation_in_non/
| false | false |
self
| 3 | null |
Got Qwen3 MLX running on my mac as an autonomous coding agent
| 16 |
Made a quick tutorial on how to get it running not just as a chatbot, but as an autonomous coding agent that can write code for you or do simple tasks. (Needs some tinkering and a very good MacBook, but it's still interesting, and local.)
| 2025-05-01T11:24:53 |
https://localforge.dev/blog/running-qwen3-macbook-mlx
|
azakhary
|
localforge.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1kc6bzc
| false | null |
t3_1kc6bzc
|
/r/LocalLLaMA/comments/1kc6bzc/got_qwen3_mlx_running_on_my_mac_as_an_autonomous/
| false | false |
default
| 16 | null |