title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]date
2023-04-01 04:30:41
2025-06-30 03:16:29
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2025-06-26 17:30:18
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly
109
2025-05-02T18:58:10
https://www.eff.org/deeplinks/2025/03/californias-ab-412-bill-could-crush-startups-and-cement-big-tech-ai-monopoly
fallingdowndizzyvr
eff.org
1970-01-01T00:00:00
0
{}
1kd8ugn
false
null
t3_1kd8ugn
/r/LocalLLaMA/comments/1kd8ugn/californias_ab_412_a_bill_that_could_crush/
false
false
https://b.thumbs.redditm…KE-djn9LEJfA.jpg
109
{'enabled': False, 'images': [{'id': 'pnM2l0i7vAocTZvsAAdGXhUM0lKZlnZvjIDwXwU1TBU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=108&crop=smart&auto=webp&s=a4dd5ebd32373854f348143afc4e8aa86945719d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=216&crop=smart&auto=webp&s=d14312c59f2bcb5881434288674156da5a0114bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=320&crop=smart&auto=webp&s=89837472f16946ec2f6662162a9fc953c2d4304e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=640&crop=smart&auto=webp&s=daf9f639f6c59cb4f53aba662744a9ec967ff77b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=960&crop=smart&auto=webp&s=0b1537554b49a628410bbb7f221ea5d3580e3316', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?width=1080&crop=smart&auto=webp&s=c4f615cb4c0eb250d01e3c4936eacb8c3fcdbad0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tQ855TR63tdv8rA2VOmKU2b9Tp1AL3zzmxRmAXxjW4Q.jpg?auto=webp&s=90975f84b74728c6f8271b7eabad27f783f93bb4', 'width': 1200}, 'variants': {}}]}
Train Better Computer-Use AI by Creating Human Demonstration Datasets
3
The C/ua team just released a new tutorial that shows how anyone with macOS can contribute to training better computer-use AI models by recording their own human demonstrations. **Why this matters:** One of the biggest challenges in developing AI that can use computers effectively is the lack of high-quality human demonstration data. Current computer-use models often fail to capture the nuanced ways humans navigate interfaces, recover from errors, and adapt to changing contexts. This tutorial walks through using C/ua's Computer-Use Interface (CUI) with a Gradio UI to: \- Record your natural computer interactions in a sandbox macOS environment \- Organize and tag your demonstrations for maximum research value \- Share your datasets on Hugging Face to advance computer-use AI research What makes human demonstrations particularly valuable is that they capture aspects of computer use that synthetic data misses: \- **Natural pacing** \- the rhythm of real human computer use \- **Error recovery** \- how humans detect and fix mistakes \- **Context-sensitive actions** \- adjusting behavior based on changing UI states You can find the blog-post here: [https://trycua.com/blog/training-computer-use-models-trajectories-1](https://trycua.com/blog/training-computer-use-models-trajectories-1) The only requirements are Python 3.10+ and macOS Sequoia. Would love to hear if anyone else has been working on computer-use AI and your thoughts on this approach to building better training datasets!
2025-05-02T19:09:18
https://www.reddit.com/r/LocalLLaMA/comments/1kd949u/train_better_computeruse_ai_by_creating_human/
Original-Thanks-8118
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd949u
false
null
t3_1kd949u
/r/LocalLLaMA/comments/1kd949u/train_better_computeruse_ai_by_creating_human/
false
false
self
3
null
Small models are funny
0
<think> Okay, the user is asking about Cornwall's relationship with Wales over the centuries. Let me start by recalling what I know. Cornwall is a county in England, but there's also a region called Cornish in Wales. Wait, no, actually, Cornwall is part of England, but there's a Cornish language and culture in Cornwall, which is different from Wales. But the user might be confusing the two. Wait, the user might be referring to Cornwall in Wales? No, Cornwall is in England. But there's a Cornish language in Cornwall, which is a Celtic language. However, there's also a Cornish language in Wales? No, that's not right. The Cornish language is spoken in Cornwall, England. But maybe the user is conflating Cornwall with the Welsh region. I need to clarify that. [...] This is using qwen3 4b Q3 K L.
2025-05-02T19:17:40
https://www.reddit.com/r/LocalLLaMA/comments/1kd9bfj/small_models_are_funny/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd9bfj
false
null
t3_1kd9bfj
/r/LocalLLaMA/comments/1kd9bfj/small_models_are_funny/
false
false
self
0
null
Text Annotation with Chain-of-Thought
1
[removed]
2025-05-02T19:28:58
https://www.reddit.com/r/LocalLLaMA/comments/1kd9kuk/text_annotation_with_chainofthought/
Actual-Strawberry717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd9kuk
false
null
t3_1kd9kuk
/r/LocalLLaMA/comments/1kd9kuk/text_annotation_with_chainofthought/
false
false
self
1
null
Sam and OpenAI come clean with OpenAI issues last week - Sycophancy
0
Lot of people were complaining about 4o behaviour last week. A deeper dive on their findings, what went wrong, and future changes they are making While this is welcome change, end of the day it’s a reminder that’s closed models will have changes you won’t be aware of and vendors won’t come clean every time and there is no way this level of transparency is enough to run mission critical workloads Long live open source models and long live LocalLLama!
2025-05-02T19:37:13
https://openai.com/index/expanding-on-sycophancy
Affectionate-Hat-536
openai.com
1970-01-01T00:00:00
0
{}
1kd9ruv
false
null
t3_1kd9ruv
/r/LocalLLaMA/comments/1kd9ruv/sam_and_openai_come_clean_with_openai_issues_last/
false
false
default
0
null
GPUStack parser detected as virus?
1
I just wanted to get feedback and thoughts on this just for peace of mind. I installed GPUStack and it is fully functional. However, Norton detected one exe file, specifically GGUF Parser, to be a Trojan. I ran it on virus total and it had all clears. Do you think Norton is just hitting a false positive because of its code structure? I allowed it since it is actually pretty good to use and unlikely that it is malicious, but still I am always cautious. Anyone else have this experience or thoughts on its parser dependency? Thanks.
2025-05-02T19:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1kd9sac/gpustack_parser_detected_as_virus/
randomoptionsdude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd9sac
false
null
t3_1kd9sac
/r/LocalLLaMA/comments/1kd9sac/gpustack_parser_detected_as_virus/
false
false
self
1
null
GPU/NPU accelerated inference on Android?
3
Does anyone know of an Android app that supports running local LLMs with GPU or NPU acceleration?
2025-05-02T19:44:49
https://www.reddit.com/r/LocalLLaMA/comments/1kd9y8v/gpunpu_accelerated_inference_on_android/
FluffyMoment2808
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd9y8v
false
null
t3_1kd9y8v
/r/LocalLLaMA/comments/1kd9y8v/gpunpu_accelerated_inference_on_android/
false
false
self
3
null
Launching MCP Superassistant
2
[removed]
2025-05-02T19:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1kd9zr4/launching_mcp_superassistant/
EfficientApartment52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kd9zr4
false
null
t3_1kd9zr4
/r/LocalLLaMA/comments/1kd9zr4/launching_mcp_superassistant/
false
false
self
2
{'enabled': False, 'images': [{'id': 'tq_1IW12GDh8aCi_a1aEualhkr-P4sHQvyPWMtagNsQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=108&crop=smart&auto=webp&s=6d6602760af8d2d9ec98185329ec9140837bc149', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=216&crop=smart&auto=webp&s=21b9c17db5538b901a54607131f47a6b2936de75', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=320&crop=smart&auto=webp&s=fdd452852f8890c1bdc4691d38ae6e3daef35f39', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=640&crop=smart&auto=webp&s=6a0335e311a47f50e6d6bbb102ac19e27ea4559a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=960&crop=smart&auto=webp&s=b313b4a9f911829b3bc740d47f3e71712e8dac80', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?width=1080&crop=smart&auto=webp&s=78d1c2ddcbc044c07027ece26d68b785282f4413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rjNWqZ9n6zaX2ncLPK2krdrWcSLzfa8KcRfAlXfAcsg.jpg?auto=webp&s=2c43d6d153707774e737722dbadd5833954e05ec', 'width': 1152}, 'variants': {}}]}
Easiest method for Local RAG on my book library?
10
I am not a coder or programmer. I have LM Studio up and running on Llama 3.1 8B. RTX 4090 + 128gb System RAM. Brand new and know very little. I want to use Calibre to convert my owned books into plain text format (I presume) to run RAG on, indexing the contents so I can retrieve quotes rapidly, and ask abstract questions about the authors opinions and views, summarize chapters and ideas, etc. What is the easiest way to do this? Haystack, Run pod (a free local version?), other? As well, it seems the 8B model I am running is only 4-bit...? I am having trouble identifying how to get a better model on my system since I have 24gb VRAM and don't need super fast speed. I'd rather have more accuracy. It also seems 13B or something with more parameters is not available for Llama 3? Could I at least do 8B 16-bit (or even 8-bit) since I have the hardware?
2025-05-02T19:52:29
https://www.reddit.com/r/LocalLLaMA/comments/1kda4jw/easiest_method_for_local_rag_on_my_book_library/
filmguy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kda4jw
false
null
t3_1kda4jw
/r/LocalLLaMA/comments/1kda4jw/easiest_method_for_local_rag_on_my_book_library/
false
false
self
10
null
Best Hardware for Qwen3-30B-A3B CPU Inference?
4
Hey folks, Like many here, I’ve been really impressed with 30B-A3B’s performance. Tested it on a few machines with different quants: * 6-year-old laptop (i5-8250U, 32GB DDR4 @ 2400 MT/s): 7 t/s (q3\_k\_xl) * i7-11 laptop (64GB DDR4): \~6-7 t/s (q4\_k\_xl) * T14 Gen5 (DDR5): 15-20 t/s (q4\_k\_xl) Solid results for usable outputs (RAG, etc.), so I’m thinking of diving deeper. Budget is $1k-2k (preferably on the lower end) for CPU inference (AM5 setup, prioritizing memory throughput over compute "power" - for the CPU... maybe a Ryzen 7 7700 (8C/16T) ?). Thoughts? Is this the right path, or should I just grab an RTX 3090 instead? Or both? 😅
2025-05-02T19:57:00
https://www.reddit.com/r/LocalLLaMA/comments/1kda8bg/best_hardware_for_qwen330ba3b_cpu_inference/
ColdImplement1319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kda8bg
false
null
t3_1kda8bg
/r/LocalLLaMA/comments/1kda8bg/best_hardware_for_qwen330ba3b_cpu_inference/
false
false
self
4
null
Bought 3090, need emotional support
1
[removed]
2025-05-02T20:05:19
https://www.reddit.com/r/LocalLLaMA/comments/1kdafo8/bought_3090_need_emotional_support/
edeche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdafo8
false
null
t3_1kdafo8
/r/LocalLLaMA/comments/1kdafo8/bought_3090_need_emotional_support/
false
false
self
1
null
What is RAG
0
How do i go about learning about it
2025-05-02T20:22:18
https://www.reddit.com/r/LocalLLaMA/comments/1kdatw6/what_is_rag/
Both-Drama-8561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdatw6
false
null
t3_1kdatw6
/r/LocalLLaMA/comments/1kdatw6/what_is_rag/
false
false
self
0
null
Best models for “rizz”
0
Wondering if anyone has suggestions on what models are currently the best at rizz as well as relevant prompts. I’m trying to setup an agent which can partially automate dating app interactions. This is a serious post, not trolling.
2025-05-02T20:30:56
https://www.reddit.com/r/LocalLLaMA/comments/1kdb1go/best_models_for_rizz/
GenericName-2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdb1go
false
null
t3_1kdb1go
/r/LocalLLaMA/comments/1kdb1go/best_models_for_rizz/
false
false
self
0
null
Any idea why Qwen3 models are not showing in Aider or LMArena benchmarks?
16
Most of the other models used to be tested and listed in those benchmarks on the same day; however, I still can't find Qwen3 in either!
2025-05-02T20:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1kdb7t1/any_idea_why_qwen3_models_are_not_showing_in/
m_abdelfattah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdb7t1
false
null
t3_1kdb7t1
/r/LocalLLaMA/comments/1kdb7t1/any_idea_why_qwen3_models_are_not_showing_in/
false
false
self
16
null
There is a big difference between use LM-Studio, Ollama, LLama.cpp?
43
Im mean for the use case of chat with the LLM. Not about others possible purpose. Just that. Im very new about this topic of LocalLLM. I ask my question to chatgpt and it says things that are not true, or at least are not true in the new version of LM-studio. I try both LM-studio and Ollama.... i cant install Llama.cpp in my fedora 42... About the two i try i dont notice nothing relevant, but of course, i do not make any test, etc. So, for you that make test and have experience with this, JUST for chat about philosophy, there is a difference choosing between this? thanks
2025-05-02T20:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1kdbamc/there_is_a_big_difference_between_use_lmstudio/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdbamc
false
null
t3_1kdbamc
/r/LocalLLaMA/comments/1kdbamc/there_is_a_big_difference_between_use_lmstudio/
false
false
self
43
null
Which LLM for coding in my little machine?
8
I have a 8vram and 32 ram. What LLM just for code i can run? Thanks
2025-05-02T20:45:41
https://www.reddit.com/r/LocalLLaMA/comments/1kdbdok/which_llm_for_coding_in_my_little_machine/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdbdok
false
null
t3_1kdbdok
/r/LocalLLaMA/comments/1kdbdok/which_llm_for_coding_in_my_little_machine/
false
false
self
8
null
OK, MoE IS awesome
151
Recently I wrote posted this: [https://www.reddit.com/r/LocalLLaMA/comments/1kc6cp7/moe\_is\_cool\_but\_does\_not\_solve\_speed\_when\_it/](https://www.reddit.com/r/LocalLLaMA/comments/1kc6cp7/moe_is_cool_but_does_not_solve_speed_when_it/) I now want to correct myself as I have figured out that simply reducing a few layers (from 48 - 40) gives me **massive** more context! I did not expect that as it seems that context VRAM / RAM consumption is not bound to total parameter count here but to the relatively tiny parameter count of the active experts! A normal 32B non-MoE model would require much more GB to achieve the same context length! So with that setting I can safely have a context window of over 35k tokens with an initial speed of \~26 Tk/s instead of 109 Tk/s full speed. (42154 context length = 22.8 GB VRAM idle, will grow when in use so I estimate 35K is safe) -> This is without flash attention or KV cache quantization, so even more should be possible with a single RTX 3090 That means with two RTX 3090 (only have one) I probably could use the full **131k context** window with nice speed with **qwen3-30b-a3b-128k**. (Q4\_K\_M) So to conclude MoE solves the RAM consumption problem to a high degree, not fully but it improves the situation. EDIT: WITH flash attn and K and V cache quantization Q8 I get to over 100k context and 21.9 GB VRAM IDLE (will grow on usage, so IDK how much is really usable)
2025-05-02T21:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1kdbt84/ok_moe_is_awesome/
Ok-Scarcity-7875
self.LocalLLaMA
2025-05-02T21:17:13
0
{}
1kdbt84
false
null
t3_1kdbt84
/r/LocalLLaMA/comments/1kdbt84/ok_moe_is_awesome/
false
false
self
151
null
Fastest inference engine for Single Nvidia Card for a single user?
5
Absolute fastest engine to run models locally for an NVIDIA GPU and possibly a GUI to connect it to.
2025-05-02T21:04:20
https://www.reddit.com/r/LocalLLaMA/comments/1kdbtfp/fastest_inference_engine_for_single_nvidia_card/
R46H4V
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdbtfp
false
null
t3_1kdbtfp
/r/LocalLLaMA/comments/1kdbtfp/fastest_inference_engine_for_single_nvidia_card/
false
false
self
5
null
Voice-to-Text Causing Thread Echo — Has Anyone Seen This?
1
[removed]
2025-05-02T21:10:52
https://www.reddit.com/r/LocalLLaMA/comments/1kdbz09/voicetotext_causing_thread_echo_has_anyone_seen/
Oceanae
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdbz09
false
null
t3_1kdbz09
/r/LocalLLaMA/comments/1kdbz09/voicetotext_causing_thread_echo_has_anyone_seen/
false
false
self
1
null
RLHF WARNING: Excess politeness can trigger infinite praise loops.
35
2025-05-02T21:21:56
https://i.redd.it/96z9zs5arfye1.png
phoneixAdi
i.redd.it
1970-01-01T00:00:00
0
{}
1kdc83w
false
null
t3_1kdc83w
/r/LocalLLaMA/comments/1kdc83w/rlhf_warning_excess_politeness_can_trigger/
false
false
https://b.thumbs.redditm…8SQ4yPRXK1Lw.jpg
35
{'enabled': True, 'images': [{'id': 'WK8T79apMtMIDTQEJi45olIcX2TxVwCGeAoI_4zNWBo', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?width=108&crop=smart&auto=webp&s=590c6aaf5fdf74d544318e93e3c60b3643c78d6f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?width=216&crop=smart&auto=webp&s=a3d0c996239a065b4387a771b0f3f91389667ea6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?width=320&crop=smart&auto=webp&s=b4eac0cd054fcc2c2a27c34e75bc850f57ca43bb', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?width=640&crop=smart&auto=webp&s=b5eb0424e34a2e3bcd84026cd84123b8f03bf3b6', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?width=960&crop=smart&auto=webp&s=2449de6a130144e1691e258bd5206241907d81fd', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/96z9zs5arfye1.png?auto=webp&s=4982cb1c3fed1a1b2092fb6e042a7dde06daf335', 'width': 1024}, 'variants': {}}]}
Meta AI latest work: LLM pretraining on consumer-graded GPU
48
Meta AI latest work: LLM pretraining on consumer-graded GPU Title: GaLore 2: Large-Scale LLM Pre-Training by Gradient Low-Rank Projection [https://www.arxiv.org/abs/2504.20437](https://www.arxiv.org/abs/2504.20437) Large language models (LLMs) have revolutionized natural language understanding and generation but face significant memory bottlenecks during training. GaLore, Gradient Low-Rank Projection, addresses this issue by leveraging the inherent low-rank structure of weight gradients, enabling substantial memory savings without sacrificing performance. Recent works further extend GaLore from various aspects, including low-bit quantization and higher-order tensor structures. However, there are several remaining challenges for GaLore, such as the computational overhead of SVD for subspace updates and the integration with state-of-the-art training parallelization strategies (e.g., FSDP). In this paper, we present GaLore 2, an efficient and scalable GaLore framework that addresses these challenges and incorporates recent advancements. In addition, we demonstrate the scalability of GaLore 2 by pre-training Llama 7B from scratch using up to 500 billion training tokens, highlighting its potential impact on real LLM pre-training scenarios.
2025-05-02T21:28:37
https://www.reddit.com/r/LocalLLaMA/comments/1kdcdjd/meta_ai_latest_work_llm_pretraining_on/
Dense-Smf-6032
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdcdjd
false
null
t3_1kdcdjd
/r/LocalLLaMA/comments/1kdcdjd/meta_ai_latest_work_llm_pretraining_on/
false
false
self
48
null
GSM8k -flexible extract or strict match ?
1
[removed]
2025-05-02T21:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1kdcdtj/gsm8k_flexible_extract_or_strict_match/
StrawberryJunior3030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdcdtj
false
null
t3_1kdcdtj
/r/LocalLLaMA/comments/1kdcdtj/gsm8k_flexible_extract_or_strict_match/
false
false
self
1
null
Which model has the best personality/vibes (open + closed)?
7
Hi guys, I just wanted to get your opinions on which model has the best personality/vibes? For me: GPT 4o is a beg and pick me Gemini Pro and Flash just parrots back what you say to it Qwen3 sometimes says the most unexpected things that are so silly it's funny after overthinking for ages I know people hate on it, but llama 3.1 405b was so good and unhinged since it had so much Facebook data. The LLaMA 4 models are such a big let down since they're so restricted.
2025-05-02T21:36:29
https://www.reddit.com/r/LocalLLaMA/comments/1kdck1b/which_model_has_the_best_personalityvibes_open/
z_3454_pfk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdck1b
false
null
t3_1kdck1b
/r/LocalLLaMA/comments/1kdck1b/which_model_has_the_best_personalityvibes_open/
false
false
self
7
null
Foundation-Sec-8B Released (Cisco's Security-Focused Base Model)
37
Cisco's Foundation AI team just released Foundation-Sec-8B, a security-focused base model specifically designed for cybersecurity applications. It's a non-instruct, non-chat, non-reasoning model custom-tuned with security data. They announced follow up open-weight releases for the others. This model, in the meantime, is designed to provide foundations for security tasks and vulnerability analysis. Paper: https://arxiv.org/abs/2504.21039
2025-05-02T21:37:16
https://huggingface.co/fdtn-ai/Foundation-Sec-8B
Acceptable_Zombie136
huggingface.co
1970-01-01T00:00:00
0
{}
1kdckor
false
null
t3_1kdckor
/r/LocalLLaMA/comments/1kdckor/foundationsec8b_released_ciscos_securityfocused/
false
false
https://b.thumbs.redditm…M9IPd_QvRi-c.jpg
37
{'enabled': False, 'images': [{'id': 'xQsYt0pu7lZz71pKGFGvvuybG7Pe5rj5IENzv2SNzzc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=108&crop=smart&auto=webp&s=957350135bda5682b7fc2ffcb58215bbbf10d359', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=216&crop=smart&auto=webp&s=e3792bf1d402240ef50b47ca7f7f0de09a262056', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=320&crop=smart&auto=webp&s=dafb2b5addf524d3271893ee07e0929e9a4b9236', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=640&crop=smart&auto=webp&s=59b54b351bd9bf54e04a3f9b5e9aea16698e43f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=960&crop=smart&auto=webp&s=e3ea1063032765152eab852452414ee9e6576cf4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?width=1080&crop=smart&auto=webp&s=8d94a77915abea07999d2f9ddf3ddc5d3f9ac73d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eoi1XEs1MOW67HnRQgklZ63UTpA6tPP_Y74nMsS2qug.jpg?auto=webp&s=892c6974a34702d6da9269bf6a68f972bdf1187d', 'width': 1200}, 'variants': {}}]}
What graphics card should I buy? Which llama/qwent (etc.) model should I choose? Please help me, I'm a bit lost...
5
Well, I'm not a developer, far from it. I don't know anything about code, and I don't really intend to get into it. I'm just a privacy-conscious user who would like to use a local AI model to: - convert speech to text (hopefully understand medical language, or maybe learn it) - format text and integrate it into Obsidian-like note-taking software - monitor the literature for new scientific articles and summarize them - be my personal assistant (for very important questions like: How do I get glue out of my daughter's hair? Draw me a unicorn to paint? Pain au chocolat or chocolatine?) - if possible under Linux So: 1 - Is it possible? 2 - With which model(s)? Llama? Gemma? Qwent? 3 - What graphics card should I get for this purpose? (Knowing that my budget is around 1000€)
2025-05-02T21:46:35
https://www.reddit.com/r/LocalLLaMA/comments/1kdcs5a/what_graphics_card_should_i_buy_which_llamaqwent/
ed0c
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdcs5a
false
null
t3_1kdcs5a
/r/LocalLLaMA/comments/1kdcs5a/what_graphics_card_should_i_buy_which_llamaqwent/
false
false
self
5
null
Getting the same inference speed (13 t/s) with Qwen 30B-A3B on both CPU and GPU; Is this abnormal?
1
[removed]
2025-05-02T21:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1kdctk4/getting_the_same_inference_speed_13_ts_with_qwen/
yungfishstick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdctk4
false
null
t3_1kdctk4
/r/LocalLLaMA/comments/1kdctk4/getting_the_same_inference_speed_13_ts_with_qwen/
false
false
self
1
null
Qwen3 32b Q8 on 3090 + 3060 + 3060
116
Building LocalLlama machine – Episode 2: Motherboard with 4 PCI-E slots [In the previous episode](https://www.reddit.com/r/LocalLLaMA/comments/1kbnoyj/qwen3_on_2008_motherboard/) I was testing Qwen3 on motherboard from 2008, now I was able to put **3060+3060+3090** into **X399**. I’ll likely need to use risers—both 3060s are touching, and one of them is running a bit **hot**. Eventually, I plan to add a second 3090, so better spacing will be necessary. For the first time, I was able to run a **full 32B model in Q8** ***without*** **offloading to RAM**. I experimented with different configurations, assuming (quite reasonably!) that the 3090 is faster than the 3060. I’m seeing results between **11 and 15 tokens per second**. How fast does Qwen3 32B run on *your* system? As a bonus, I also tested the 14B model, so you can compare your results if you’re working with a smaller supercomputer. All 3 GPUs combined produced **28 t/s**, which is **slower than the 3090 alone at 49 t/s**. What’s the point of using 3060s if you can unleash the full power of a 3090? I’ll be doing a lot more testing soon, but I wanted to share my initial results here. I’ll probably try alternatives to `llama.cpp`, and I definitely need to test a large MoE model with this CPU.
2025-05-02T21:59:41
https://www.reddit.com/gallery/1kdd2zj
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1kdd2zj
false
null
t3_1kdd2zj
/r/LocalLLaMA/comments/1kdd2zj/qwen3_32b_q8_on_3090_3060_3060/
false
false
https://b.thumbs.redditm…29bDoIdXgpWQ.jpg
116
null
Mixed precision KV cache quantization, Q8 for K / Q4 for V
4
Anyone tried this? I found that Qwen3 0.6b comes with more KV heads which improves quality, but at \~4x larger VRAM usage. Qwen2.5 0.5b coder: No. of Attention Heads (GQA): 14 for Q and 2 for KV. Qwen3 0.6b: No. of Attention Heads (GQA): 16 for Q and 8 for KV. With speculative decoding, this gets costly because llama.cpp does not quantize KV cache of the **draft** model. I lost 3GB out of 24GB because of this, which forced me to lower context length from 30K to 20K on my 24GB VRAM setup. So now I'm considering more heavily quantizing KV cache for my Qwen3 32b **main** model: Q8 for K / Q4 for V instead of Q8 for both.
2025-05-02T22:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1kddcdp/mixed_precision_kv_cache_quantization_q8_for_k_q4/
AdamDhahabi
self.LocalLLaMA
2025-05-02T22:38:03
0
{}
1kddcdp
false
null
t3_1kddcdp
/r/LocalLLaMA/comments/1kddcdp/mixed_precision_kv_cache_quantization_q8_for_k_q4/
false
false
self
4
null
What's the best model that can I use locally on this PC?
1
2025-05-02T22:25:10
https://i.redd.it/ki0ek9bk2gye1.png
TheMinarctics
i.redd.it
1970-01-01T00:00:00
0
{}
1kddntb
false
null
t3_1kddntb
/r/LocalLLaMA/comments/1kddntb/whats_the_best_model_that_can_i_use_locally_on/
false
false
https://b.thumbs.redditm…_Ml-9SMOJ5Ug.jpg
1
{'enabled': True, 'images': [{'id': 'WY_LG61SquxYIxlwH0emYv_kVG_DkCwVkQB9aTzg3u4', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ki0ek9bk2gye1.png?width=108&crop=smart&auto=webp&s=8d310cf8107737795d2c1c53b3b20824174467b1', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/ki0ek9bk2gye1.png?width=216&crop=smart&auto=webp&s=836e57ba63cf1055b77d755e305fef9f167bbbdb', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/ki0ek9bk2gye1.png?width=320&crop=smart&auto=webp&s=98e99e228440f33dbd298b7acec4e8148e028911', 'width': 320}], 'source': {'height': 208, 'url': 'https://preview.redd.it/ki0ek9bk2gye1.png?auto=webp&s=0bba7f0d7e750142a39aeb4a52aa72b56ca925e3', 'width': 402}, 'variants': {}}]}
Fugly little guy - v100 32gb 7945hx build
4
Funny build I did with my son. V100 32gb, we're going to do some basic inference models and ideally a lot of image and media generation. Thinking just pop_os/w11 dual boot. No Flashpoint no problem!! Any things I should try? This will be a pure hey kids let's mess around with x y z box. If it works out well yes I will paint the fan shroud. I think it's charming!
2025-05-02T22:36:50
https://www.reddit.com/gallery/1kddx52
Only_Khlav_Khalash
reddit.com
1970-01-01T00:00:00
0
{}
1kddx52
false
null
t3_1kddx52
/r/LocalLLaMA/comments/1kddx52/fugly_little_guy_v100_32gb_7945hx_build/
false
false
https://a.thumbs.redditm…LGVLI4gl01O4.jpg
4
null
Trade off between knowledge and problem solving ability
19
I've noticed a trend where despite benchmark scores going up and companies claiming that their new small models are equivalent to older much bigger models, world knowledge of these new smaller models is worse than their larger predecessors, and often times worse than lower benchmarking models of similar sizes. I have a set of private test questions that exercise coding, engineering problem solving, system threat modelling, and also ask specific knowledge questions on a variety of topics ranging from radio protocols and technical standards to local geography, history, and landmarks. New models like Qwen 3 and GLM-4-0414 are vastly better at coding and problem solving than older models, but their knowledge is no better than older models and actually worse than some other similar sized older models. For example, Qwen 3 8B has considerably worse world knowledge in my tests than old models like Llama 3.1 8B and Gemma 2 9B. Likewise, Qwen 3 14B has much worse world knowledge than older weaker benchmarking models like Phi 4 and Gemma 3 12B. On a similar note, Granite 3.3 has slightly better coding/problem solving but slightly worse knowledge than Granite 3.2. There are some exceptions to this trend though. Gemma 3 seems to have slightly better knowledge density than Gemma 2, while also having much better coding and problem solving. Gemma 3 is still very much a knowledge and writing model, and not particularly good at coding or problem solving, but much better at that than Gemma 2. Llama 4 Maverick has superb world knowledge, much better than Qwen 3 235B-A22, and actually slightly better than DeepSeek V3 in my tests, but its coding and problem solving abilities are mediocre. Llama 4 Maverick is under-appreciated for its knowledge; there's more to being smart than just being able to make balls bounce in a rotating heptagon or drawing a pelican on a bicycle. For knowledge based Q&A, it may be the best open/local model there is currently. Anyway, what I'm getting at is that there seems to be a trade off between world knowledge and coding/problem solving ability for a given model size. Despite soaring benchmark scores, world knowledge of new models for a given size is stagnant or regressing. My guess is that this is because the training data for new models has more problem solving content and so proportionately less knowledge dense content. LLM makers have stopped publishing or highlighting scores for knowledge benchmarks like SimpleQA because those scores aren't improving and may be getting worse.
2025-05-02T22:53:09
https://www.reddit.com/r/LocalLLaMA/comments/1kde9mn/trade_off_between_knowledge_and_problem_solving/
Federal-Effective879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kde9mn
false
null
t3_1kde9mn
/r/LocalLLaMA/comments/1kde9mn/trade_off_between_knowledge_and_problem_solving/
false
false
self
19
null
Looking for less VRAM hungry alternatives to vLLM for Qwen3 models
1
On the same GPU with 24 GB VRAM, I'm able to load the Qwen3 32B AWQ and run it without issues if I use hf transformers. With vLLM, I'm barely able to load Qwen3 14B AWQ because of how much VRAM it needs to use. Limiting `gpu_memory_utilization` doesn't really help because it'll just give me OOM errors. The problem is how naturally VRAM hungry vLLM is. I don't want to limit the context length of my model since I don't have to do it in transformers just to be able to load a model. So what to do? I've tried SGLang, doesn't even start without nvcc (I have torch compiled, not sure why it keeps needing nvcc to compile torch again). I think there's ktransformers and llamacpp but not sure if they are any good with Qwen3 models. I want to be able to use AWQ models. What do you use? What are your settings? Is there a way to make vLLM less hungry?
2025-05-02T23:04:16
https://www.reddit.com/r/LocalLLaMA/comments/1kdeibm/looking_for_less_vram_hungry_alternatives_to_vllm/
No-Break-7922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdeibm
false
null
t3_1kdeibm
/r/LocalLLaMA/comments/1kdeibm/looking_for_less_vram_hungry_alternatives_to_vllm/
false
false
self
1
null
Getting the same inference speed (13 t/s) with Qwen 30B-A3B on both CPU and GPU; Is this abnormal?
1
[removed]
2025-05-02T23:04:23
[deleted]
1970-01-01T00:00:00
0
{}
1kdeier
false
null
t3_1kdeier
/r/LocalLLaMA/comments/1kdeier/getting_the_same_inference_speed_13_ts_with_qwen/
false
false
default
1
null
How useful are llm's as knowledge bases?
7
LLM's have lot's of knowledge but llm's can hallucinate. They also have a poor judgement of the accuracy of their own information. I have found that when it hallucinates, it often hallucinates things that are plausible or close to the truth but still wrong. What is your experience of using llm's as a source of knowledge?
2025-05-02T23:16:27
https://www.reddit.com/r/LocalLLaMA/comments/1kderkz/how_useful_are_llms_as_knowledge_bases/
Tracing1701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kderkz
false
null
t3_1kderkz
/r/LocalLLaMA/comments/1kderkz/how_useful_are_llms_as_knowledge_bases/
false
false
self
7
null
Built a persistent local AI with long-term memory and identity—need hardware help, not money, no strings.
1
[removed]
2025-05-02T23:46:30
https://www.reddit.com/r/LocalLLaMA/comments/1kdfdwo/built_a_persistent_local_ai_with_longterm_memory/
Glad-Section9499
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdfdwo
false
null
t3_1kdfdwo
/r/LocalLLaMA/comments/1kdfdwo/built_a_persistent_local_ai_with_longterm_memory/
false
false
self
1
null
LLM progress nowadays is more about baking in more problems and knowledge than any groundbreaking innovations. For vast amount of problems, current models are in their final state.
18
What's your opinion about the above statement? Am I alone in gut feelings that we've arrived?
2025-05-03T00:07:00
https://www.reddit.com/r/LocalLLaMA/comments/1kdft52/llm_progress_nowadays_is_more_about_baking_in/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdft52
false
null
t3_1kdft52
/r/LocalLLaMA/comments/1kdft52/llm_progress_nowadays_is_more_about_baking_in/
false
false
self
18
null
Ollama: Qwen3-30b-a3b Faster on CPU over GPU
9
Is it possible that using CPU is better than GPU? When I use just CPU (18 Core E5-2699 V3 128GB RAM**)** I get 19 response\_tokens/s. But with GPU (Asus Phoenix RTX 3060 12GB VRAM) I only get 4 response\_tokens/s.
2025-05-03T00:29:06
https://www.reddit.com/r/LocalLLaMA/comments/1kdg8iw/ollama_qwen330ba3b_faster_on_cpu_over_gpu/
benz1800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdg8iw
false
null
t3_1kdg8iw
/r/LocalLLaMA/comments/1kdg8iw/ollama_qwen330ba3b_faster_on_cpu_over_gpu/
false
false
self
9
null
Chapter summaries using qwen3:30b-a3b
17
My sci-fi novel is about 85,000 words (500,000 characters) and split across 17 chapters. Due to its length, a shell script is used to summarize each chapter while including the summaries of all previous chapters for reference. In theory, this will shorten the input length (and processing time) significantly. In each test, `ollama serve` is started with a particular context length, for example: ``` OLLAMA_CONTEXT_LENGTH=65535 ollama serve ``` The hardware is an NVIDIA T1000 8GB GPU and an AMD Ryzen 5 7600 6-Core Processor. Most tests used ollama 0.6.6. Now that ollama 0.6.7 is released, it's possible to try out llama4. A script produces chapter summaries. At the end, the script uses xmlstarlet and xmllint to remove the `<think>` tag from the summary. Here are the results so far: * **qwen3:30b-a3b** -- 32768 context. Several minor mistakes, overall quite accurate, stays true to the story, and takes hours to complete. Not much editing required. * **llama3.3:70b-instruct-q4_K_M** -- 65535 context. Starts strong, eventually makes conceptual errors, loses its mind after chapter 14. Resetting gets it back on track, although still goes off the rails. I made numerous paragraph cuts to previous chapter summaries when re-running. Goes very slowly after 4 or 5 chapters, taking a long time to complete each chapter. I stopped at chapter 16 (of 17) because it was making things up. Lots of editing required. * **phi4-reasoning** -- 32768 context. Gets many details wrong. * **phi4-reasoning:plus** -- 32768 context. Gets details wrong. * **deepseek-r1:32b** -- 32768 context. Makes stuff up. llama4:scout is up next, possibly followed by a re-test of gemma3 and granite3, depending on the results. Here are the file sizes for the summaries, so you can see they aren't blowing up in size: $ wc -c summaries.qwen3/*txt | sed 's/summaries\.qwen3\///' 1202 01.txt 1683 02.txt 1664 03.txt 1860 04.txt 1816 05.txt 1859 06.txt 1726 07.txt 1512 08.txt 1574 09.txt 1394 10.txt 1552 11.txt 1476 12.txt 1568 13.txt 2093 14.txt 1230 15.txt 1747 16.txt 1391 17.txt 27347 total The chapters themselves are larger (chapter 1 is the smallest, has a summary as the seed, and so is skipped): $ wc -c ??.txt 20094 02.txt 25294 03.txt 23329 04.txt 20615 05.txt 26636 06.txt 26183 07.txt 27117 08.txt 34589 09.txt 34317 10.txt 31550 11.txt 22307 12.txt 28632 13.txt 40821 14.txt 45822 15.txt 41490 16.txt 43271 17.txt Here's the script that runs ollama, including the prompt: #!/usr/bin/env bash OUTDIR=summaries mkdir -p "${OUTDIR}" readonly MODEL="llama4:scout" BASE_PROMPT="You are a professional editor specializing in science fiction. Your task is to summarize a chapter faithfully without altering the user's ideas. The chapter text follows the 'CHAPTER TO SUMMARIZE:' marker below. Focus on key plot developments, character insights, and thematic elements. When ### appears in the text, it indicates separate scenes, so summarize each scene in its own paragraph, maintaining clear distinction between them. Write in clear, engaging language that captures the essence of each part. Provide the summary without introductory phrases. Text between 'PREVIOUS SUMMARIES FOR CONTEXT:' and 'CHAPTER TO SUMMARIZE:' is background information only, not content to summarize. Plain text and prosal form, a couple of paragraphs, 300 to 500 words." for f in chapter/??.txt; do prompt="${BASE_PROMPT}" filename=$(basename "$f") summaries="$(awk 'FNR==1 {print FILENAME ":"} 1' ${OUTDIR}/*.txt 2>/dev/null)" outfile="${OUTDIR}/${filename}" prompt+=$'\n\n' if [ -n "${summaries}" ]; then prompt+="PREVIOUS SUMMARIES FOR CONTEXT:"$'\n\n'$"${summaries}"$'\n\n' fi prompt+="--------------"$'\n\n' prompt+="CHAPTER TO SUMMARIZE:"$'\n\n'"$(cat "$f")"$'\n\n' echo "${prompt}" | ollama run ${MODEL} > "${outfile}" echo "<root>$(cat ${outfile})</root>" | \ xmlstarlet ed -d '//think' | \ xmllint --xpath 'string(/)' - > "${OUTDIR}/result.txt" mv -f "${OUTDIR}/result.txt" "${outfile}" sleep 1 done Here's the prompt with word wrapping: > You are a professional editor specializing in science fiction. Your task is to summarize a chapter faithfully without altering the user's ideas. The chapter text follows the 'CHAPTER TO SUMMARIZE:' marker below. Focus on key plot developments, character insights, and thematic elements. When ### appears in the text, it indicates separate scenes, so summarize each scene in its own paragraph, maintaining clear distinction between them. Write in clear, engaging language that captures the essence of each part. Provide the summary without introductory phrases. Text between 'PREVIOUS SUMMARIES FOR CONTEXT:' and 'CHAPTER TO SUMMARIZE:' is background information only, not content to summarize. Plain text and prosal form, a couple of paragraphs, 300 to 500 words.
2025-05-03T00:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1kdge66/chapter_summaries_using_qwen330ba3b/
autonoma_2042
self.LocalLLaMA
2025-05-03T00:43:22
0
{}
1kdge66
false
null
t3_1kdge66
/r/LocalLLaMA/comments/1kdge66/chapter_summaries_using_qwen330ba3b/
false
false
self
17
null
Qwen 3 30B Pruned to 16B by Removing “Dead” Experts, 235B -> 150B Coming Soon!
1
[removed]
2025-05-03T01:12:08
[deleted]
1970-01-01T00:00:00
0
{}
1kdh2pr
false
null
t3_1kdh2pr
/r/LocalLLaMA/comments/1kdh2pr/qwen_3_30b_pruned_to_16b_by_removing_dead_experts/
false
false
default
1
null
Qwen 3 30B Pruned to 16B by Removing “Dead” Experts, 235B -> 150B Coming Soon!
1
[removed]
2025-05-03T01:13:48
[deleted]
1970-01-01T00:00:00
0
{}
1kdh3ub
false
null
t3_1kdh3ub
/r/LocalLLaMA/comments/1kdh3ub/qwen_3_30b_pruned_to_16b_by_removing_dead_experts/
false
false
default
1
null
Qwen 3 30B Pruned to 16B by Removing “Dead” Experts, 235B Pruned to 150B Coming Soon!
1
2025-05-03T01:14:40
https://huggingface.co/kalomaze/Qwen3-16B-A3B
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1kdh4fn
false
null
t3_1kdh4fn
/r/LocalLLaMA/comments/1kdh4fn/qwen_3_30b_pruned_to_16b_by_removing_dead_experts/
false
false
https://b.thumbs.redditm…j6BDNvaIyB3Q.jpg
1
{'enabled': False, 'images': [{'id': 'PLdAqga4Ibf2MXVtUkSz8cn7jAspOjB4qaMBTep7H-I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=108&crop=smart&auto=webp&s=a2e234728c54b4f662abcc84bb4a5477fab245cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=216&crop=smart&auto=webp&s=9287125d47c29fe747a331a7ea8882238e7282f7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=320&crop=smart&auto=webp&s=db09a7f4ff9549f5a4dcedd3f49a48cb316f99be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=640&crop=smart&auto=webp&s=20bab6ab4e35ba7f6eda9be1e741fcf3e281b406', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=960&crop=smart&auto=webp&s=1ab508af274dcb70500ffe63f64ebfcee168e42d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=1080&crop=smart&auto=webp&s=620207c371111512fe88408708e1835fc0bbe28c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?auto=webp&s=d3b5d7a23a7f5cbe63d723616da78538347ecef9', 'width': 1200}, 'variants': {}}]}
Qwen 3 30B Pruned to 16B by Leveraging Biased Router Distributions, 235B Pruned to 150B Coming Soon!
438
2025-05-03T01:18:05
https://huggingface.co/kalomaze/Qwen3-16B-A3B
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1kdh6rl
false
null
t3_1kdh6rl
/r/LocalLLaMA/comments/1kdh6rl/qwen_3_30b_pruned_to_16b_by_leveraging_biased/
false
false
https://b.thumbs.redditm…j6BDNvaIyB3Q.jpg
438
{'enabled': False, 'images': [{'id': 'PLdAqga4Ibf2MXVtUkSz8cn7jAspOjB4qaMBTep7H-I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=108&crop=smart&auto=webp&s=a2e234728c54b4f662abcc84bb4a5477fab245cb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=216&crop=smart&auto=webp&s=9287125d47c29fe747a331a7ea8882238e7282f7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=320&crop=smart&auto=webp&s=db09a7f4ff9549f5a4dcedd3f49a48cb316f99be', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=640&crop=smart&auto=webp&s=20bab6ab4e35ba7f6eda9be1e741fcf3e281b406', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=960&crop=smart&auto=webp&s=1ab508af274dcb70500ffe63f64ebfcee168e42d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?width=1080&crop=smart&auto=webp&s=620207c371111512fe88408708e1835fc0bbe28c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ySpO_7RYQ0SVsPSOMMWIe_bDz8SWExrfxn-1JYoQHZ8.jpg?auto=webp&s=d3b5d7a23a7f5cbe63d723616da78538347ecef9', 'width': 1200}, 'variants': {}}]}
Qwen/Qwen3-32B-AWQ just dropped for vLLM users!
1
[removed]
2025-05-03T01:48:01
https://www.reddit.com/r/LocalLLaMA/comments/1kdhqjk/qwenqwen332bawq_just_dropped_for_vllm_users/
Training-Village1450
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdhqjk
false
null
t3_1kdhqjk
/r/LocalLLaMA/comments/1kdhqjk/qwenqwen332bawq_just_dropped_for_vllm_users/
false
false
self
1
null
Advice on Quant Size for GPU / CPU split for for Qwen3 235B-A22B (and in general?)
5
Hey locallamas! I've been running models exclusively in VRAM to this point. My rubric for selecting a quant has always been: "What's the largest quant I can run that will fit within my VRAM given 32k context?" Looking for advice on what quant size to try with [Qwen3 235B-A22B](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF) knowing that I will need to load some of the model into RAM. I'd like to avoid downloading multiple 100-200 GB files. [Unsloth Qwen3-235B-A22B Quants](https://preview.redd.it/19g42ijd7hye1.png?width=628&format=png&auto=webp&s=197657578ffaf3c7b08616766582d32a0529ac8c) I have a reasonably powerful local rig: Single socket AMD EPYC 7402P with 512 GB of 2400 MT/s RAM and 6 RTX A4000s. I assume my specific setup is relevant but that there is probably a rule of thumb or at least some intuition that you all can share. I was thinking of going with one of the Q4s initially because that's typically the lowest I'm willing to go with GGUF. Then I stopped myself and thought I should ask some professionals.
2025-05-03T02:17:40
https://www.reddit.com/r/LocalLLaMA/comments/1kdia1k/advice_on_quant_size_for_gpu_cpu_split_for_for/
x0xxin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdia1k
false
null
t3_1kdia1k
/r/LocalLLaMA/comments/1kdia1k/advice_on_quant_size_for_gpu_cpu_split_for_for/
false
false
https://b.thumbs.redditm…Afi2gSMyOLTg.jpg
5
{'enabled': False, 'images': [{'id': 'rx3n0qdAkHK6iirdeW-jkcWJMyS3AefZQJIArfhCVr0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=108&crop=smart&auto=webp&s=6c9699e848c6744d3541b58d5088430c8f383c39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=216&crop=smart&auto=webp&s=a5c829e91fd83f00c99ca445eabd68bc7f4f53c9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=320&crop=smart&auto=webp&s=26c54064d3772f765b4fc33cc0a988816e2d834b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=640&crop=smart&auto=webp&s=7e5ca681836e6d44c749694b55962298027a4b34', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=960&crop=smart&auto=webp&s=5ac8b2ba7f716bb32e608e5ac9247afbd01f8314', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?width=1080&crop=smart&auto=webp&s=9bfceef4bfafaa2b50be907d375f3ff3ec1e9200', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ru6A7MPZmszNmMavBIJXZ-Qf6-aHWBhAoaKIh7tO0Ks.jpg?auto=webp&s=a4b0f149dbb55e174dcd3c9079bf234ac779a566', 'width': 1200}, 'variants': {}}]}
3x3060, 1x3090, 1x4080 SUPER
35
Qwen 32b q8 64k context - 20 tok/s Llama 3.3 70b 16k context - 12 tok/s Using Ollama because my board has too little RAM for vLLM. Upgrading the board this weekend:)
2025-05-03T02:22:10
https://www.reddit.com/gallery/1kdiczn
kevin_1994
reddit.com
1970-01-01T00:00:00
0
{}
1kdiczn
false
null
t3_1kdiczn
/r/LocalLLaMA/comments/1kdiczn/3x3060_1x3090_1x4080_super/
false
false
https://b.thumbs.redditm…5Hx5vvdjsfKg.jpg
35
null
whats the best ai for coding
1
[removed]
2025-05-03T02:27:23
https://www.reddit.com/r/LocalLLaMA/comments/1kdigh6/whats_the_best_ai_for_coding/
Minute_Window_9258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdigh6
false
null
t3_1kdigh6
/r/LocalLLaMA/comments/1kdigh6/whats_the_best_ai_for_coding/
false
false
self
1
null
Are instruct or text models better for coding?
12
Curious to hear what folks have found. There’s so many models to choose from, I’m not sure how to evaluate the general options when a new one becomes available
2025-05-03T02:28:08
https://www.reddit.com/r/LocalLLaMA/comments/1kdigx8/are_instruct_or_text_models_better_for_coding/
SugarSafe1881
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdigx8
false
null
t3_1kdigx8
/r/LocalLLaMA/comments/1kdigx8/are_instruct_or_text_models_better_for_coding/
false
false
self
12
null
Getting the same inference speed (13 t/s) with Qwen 30B-A3B on both CPU and GPU; Is this abnormal?
1
[removed]
2025-05-03T02:42:39
[deleted]
1970-01-01T00:00:00
0
{}
1kdiq94
false
null
t3_1kdiq94
/r/LocalLLaMA/comments/1kdiq94/getting_the_same_inference_speed_13_ts_with_qwen/
false
false
default
1
null
Voice recreation for Mother’s Day card
1
[removed]
2025-05-03T03:02:32
https://www.reddit.com/r/LocalLLaMA/comments/1kdj36v/voice_recreation_for_mothers_day_card/
TraditionalLoan2951
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdj36v
false
null
t3_1kdj36v
/r/LocalLLaMA/comments/1kdj36v/voice_recreation_for_mothers_day_card/
false
false
self
1
null
GMKtek Evo-x2 LLM Performance
29
GMKTek claims Evo-X2 is 2.2 times faster than a 4090 in LM Studio. How so? Genuine question. I’m trying to learn more. Other than total Ram, raw specs on the 5090 blow the Mini PC away…
2025-05-03T03:06:08
https://i.redd.it/czx9oz7qghye1.jpeg
SimplestKen
i.redd.it
1970-01-01T00:00:00
0
{}
1kdj5gr
false
null
t3_1kdj5gr
/r/LocalLLaMA/comments/1kdj5gr/gmktek_evox2_llm_performance/
false
false
https://b.thumbs.redditm…NkLxh8cj-X2U.jpg
29
{'enabled': True, 'images': [{'id': 'P1vFZohI4Xhqt6W5AyNOizw6jPZ1KURLz17lW-XfbXA', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=108&crop=smart&auto=webp&s=f834413fb716b12dc05d6c5152d34f52b67e0182', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=216&crop=smart&auto=webp&s=d72f6f03ac04214b46f153436e442d8e298d0b28', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=320&crop=smart&auto=webp&s=8ab664c323c4942b205a2c867429803e4151518a', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=640&crop=smart&auto=webp&s=0f72072a2106fdc1c0a30f2f47a0fc1e43b238f0', 'width': 640}, {'height': 656, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=960&crop=smart&auto=webp&s=97d10a9acbcddd2bf6aa089565da48f91fac1bc7', 'width': 960}, {'height': 738, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?width=1080&crop=smart&auto=webp&s=57e7a27aac3cdbd0dbd1a2d21942dcd8968b5d1e', 'width': 1080}], 'source': {'height': 825, 'url': 'https://preview.redd.it/czx9oz7qghye1.jpeg?auto=webp&s=dad968509a0739eb86d418b48423569671f594f8', 'width': 1206}, 'variants': {}}]}
Good LM Studio Models for day to day tasks
1
[removed]
2025-05-03T05:26:49
https://www.reddit.com/r/LocalLLaMA/comments/1kdlif2/good_lm_studio_models_for_day_to_day_tasks/
SharathSHebbar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdlif2
false
null
t3_1kdlif2
/r/LocalLLaMA/comments/1kdlif2/good_lm_studio_models_for_day_to_day_tasks/
false
false
https://b.thumbs.redditm…EjOhH36yr50Q.jpg
1
null
Qwen 3 32B + 8B have less censorship under RAG than other Qwen 3 models.
7
Did some testing last night with all the Qwen 3 models 32B and under and noticed something really interesting. Specifically, the 32B and 8B would comply with toxic requests in the presence of RAG. For example, it would give me methods to cook meth while the models of other sizes would refuse the request. If you do a cold request, all models will refuse. It seems like RAG is the answer if you really want to get the model to comply. So far, the 8B model is a monster for its size in a RAG setup. It performs very well if it has information in the context you are looking for.
2025-05-03T05:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1kdlpwc/qwen_3_32b_8b_have_less_censorship_under_rag_than/
My_Unbiased_Opinion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdlpwc
false
null
t3_1kdlpwc
/r/LocalLLaMA/comments/1kdlpwc/qwen_3_32b_8b_have_less_censorship_under_rag_than/
false
false
self
7
null
Need help in learning how you use local models for assistance/help in coding
1
[removed]
2025-05-03T05:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1kdlq14/need_help_in_learning_how_you_use_local_models/
_HarshMallow_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdlq14
false
null
t3_1kdlq14
/r/LocalLLaMA/comments/1kdlq14/need_help_in_learning_how_you_use_local_models/
false
false
self
1
null
The little girl and her bodyguard
1
[removed]
2025-05-03T05:45:10
https://v.redd.it/j3y4rlpx8iye1
Ok-Maize-4629
v.redd.it
1970-01-01T00:00:00
0
{}
1kdlshj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j3y4rlpx8iye1/DASHPlaylist.mpd?a=1748846310%2CZTMxMzkwMTE1ZGMxNWYwYTkwZDE4ZDg1MmZlOWYwMWViYzc5ZmZhZTI3MWIxZjJjZDg1MmQ4YTA5NGM0Yjk4Nw%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/j3y4rlpx8iye1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/j3y4rlpx8iye1/HLSPlaylist.m3u8?a=1748846310%2CZDk3MDQ2Mjk1MzM3OGNiOThhMTYwM2ZhOWJmYzAzMTgzYTg1MzAyYjA2N2JmOWZmMmJjYmIyNmRmYjRhY2I0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j3y4rlpx8iye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1kdlshj
/r/LocalLLaMA/comments/1kdlshj/the_little_girl_and_her_bodyguard/
false
false
https://external-preview…a38aa3a1a7fa4268
1
{'enabled': False, 'images': [{'id': 'MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=108&crop=smart&format=pjpg&auto=webp&s=760453462a52346673e6fbedade5828004178c25', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=216&crop=smart&format=pjpg&auto=webp&s=b0996d1e0d6d6aeb1b8b0052c7e7a7874cf1be95', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=320&crop=smart&format=pjpg&auto=webp&s=37092b69695caa0925d5f24ba8d9666c82e1289b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=640&crop=smart&format=pjpg&auto=webp&s=5e5401f161e1be1a67491a7b78bc6086aaf9785f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=960&crop=smart&format=pjpg&auto=webp&s=10dcfed8738bebc9e1d6d5fe7812d992bde3b3dc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=40af2dce75fe77b905d26fd91b52d74c1de1e743', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MTB3dzJucHg4aXllMZqeS2Y6t5fyEUJco1yEZp2hJ1_gi7c4jSaaqQHP2FXK.png?format=pjpg&auto=webp&s=13c8ac651fd10f494e9561e105ff2397002e8692', 'width': 1280}, 'variants': {}}]}
Plz help in understanding ai assisted coding
1
[removed]
2025-05-03T05:45:37
https://www.reddit.com/r/LocalLLaMA/comments/1kdlsqk/plz_help_in_understanding_ai_assisted_coding/
_HarshMallow_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdlsqk
false
null
t3_1kdlsqk
/r/LocalLLaMA/comments/1kdlsqk/plz_help_in_understanding_ai_assisted_coding/
false
false
self
1
null
Multimodal RAG with Cohere + Gemini 2.5 Flash
0
**Hi everyone! 👋** I recently built a **Multimodal RAG (Retrieval-Augmented Generation)** system that can extract insights from **both text and images inside PDFs** — using **Cohere’s multimodal embeddings** and **Gemini 2.5 Flash**. 💡 **Why this matters:** Traditional RAG systems completely miss visual data — like **pie charts, tables, or infographics** — that are critical in financial or research PDFs. 📽️ **Demo Video:** https://reddit.com/link/1kdlwhp/video/07k4cb7y9iye1/player 📊 **Multimodal RAG in Action:** ✅ Upload a financial PDF ✅ Embed both text and images ✅ Ask any question — e.g., "How much % is Apple in S&P 500?" ✅ Gemini gives image-grounded answers like reading from a chart https://preview.redd.it/d9mg38r4aiye1.png?width=1989&format=png&auto=webp&s=281f36c18a3780faf2fe62bda2e67db96603d88e 🧠 **Key Highlights:** * Mixed FAISS index (text + image embeddings) * Visual grounding via Gemini 2.5 Flash * Handles questions from tables, charts, and even timelines * Fully local setup using Streamlit + FAISS 🛠️ **Tech Stack:** * **Cohere embed-v4.0** (text + image embeddings) * **Gemini 2.5 Flash** (visual question answering) * **FAISS** (for retrieval) * **pdf2image** \+ **PIL** (image conversion) * **Streamlit UI** 📌 **Full blog + source code + side-by-side demo:** 🔗 [sridhartech.hashnode.dev/beyond-text-building-multimodal-rag-systems-with-cohere-and-gemini](https://sridhartech.hashnode.dev/beyond-text-building-multimodal-rag-systems-with-cohere-and-gemini) Would love to hear your thoughts or any feedback! 😊
2025-05-03T05:52:32
https://www.reddit.com/r/LocalLLaMA/comments/1kdlwhp/multimodal_rag_with_cohere_gemini_25_flash/
srireddit2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdlwhp
false
null
t3_1kdlwhp
/r/LocalLLaMA/comments/1kdlwhp/multimodal_rag_with_cohere_gemini_25_flash/
false
false
self
0
null
Are people here aware how good a deal AMD APUs are for LLMs, price/performance-wise?
0
I just found out that Ryzen APUs have something close to Apple’s unified memory. Sure, it's slower, maybe half the speed, but it costs WAY less. This exact mini PC (Ryzen 7735HS) is around $400 on Amazon. It runs Qwen3 30B A3B Q3 at \~25 tokens/sec. So for $400 total, you get solid performance, no VRAM swapping hell like with discrete GPUs, and enough shared memory to load 20+GB models. How many people here are even aware of this? Is something like this the future of inference? :D
2025-05-03T06:33:07
https://www.reddit.com/r/LocalLLaMA/comments/1kdmi9m/are_people_here_aware_how_good_a_deal_amd_apus/
Sidran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdmi9m
false
null
t3_1kdmi9m
/r/LocalLLaMA/comments/1kdmi9m/are_people_here_aware_how_good_a_deal_amd_apus/
false
false
self
0
null
Local search business API for Agents
1
[removed]
2025-05-03T06:36:41
https://www.reddit.com/r/LocalLLaMA/comments/1kdmk37/local_search_business_api_for_agents/
EndComfortable2089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdmk37
false
null
t3_1kdmk37
/r/LocalLLaMA/comments/1kdmk37/local_search_business_api_for_agents/
false
false
self
1
null
Tools on local models
1
[removed]
2025-05-03T06:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1kdmukv/tools_on_local_models/
Bradymodion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdmukv
false
null
t3_1kdmukv
/r/LocalLLaMA/comments/1kdmukv/tools_on_local_models/
false
false
self
1
null
The little girl and her bodyguard youtub@
1
[removed]
2025-05-03T07:22:11
https://v.redd.it/mowx519ypiye1
Ok-Maize-4629
v.redd.it
1970-01-01T00:00:00
0
{}
1kdn8ev
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/mowx519ypiye1/DASHPlaylist.mpd?a=1748848956%2CYTlkOWM5MmNmZjEyNzVkYWNmNGViZTUzOTkyNmNjMzNhY2RjYWJjNmU1ZTg0OWJjMGViZDZlOWI3MjU2Y2E4ZQ%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/mowx519ypiye1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/mowx519ypiye1/HLSPlaylist.m3u8?a=1748848956%2CMTVhOWUxMTkwNzhlMWUxYmZmMTlmMWUzNjhiNjVkZjUxMDIyMGQ4YTRkZTBlOWQyYWE2YmUxYTk5NDYwODhkYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mowx519ypiye1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1kdn8ev
/r/LocalLLaMA/comments/1kdn8ev/the_little_girl_and_her_bodyguard_youtub/
false
false
https://external-preview…e7909d26b2812f72
1
{'enabled': False, 'images': [{'id': 'b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=108&crop=smart&format=pjpg&auto=webp&s=54c66eaa769c7e47a637c1a2418483489fb31100', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=216&crop=smart&format=pjpg&auto=webp&s=f0c4f6a8638218b5ef75ce7407d42f4391176e86', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=320&crop=smart&format=pjpg&auto=webp&s=a622b1e3bbc1e567cffa0a89fd5d4a3678456bf9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=640&crop=smart&format=pjpg&auto=webp&s=52e53a3bfff67dfd8fff7d09e5f2e92e5dc4e20f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=960&crop=smart&format=pjpg&auto=webp&s=c998c3fe8b3fcb95be62e99566564020b185b995', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3de10022e019f8eeb5b22be87b6e5a3c368f68ac', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/b2k2Nmk2OXlwaXllMa1aiSopAvBha7bcPa64k2Wx2XOZhFR0KEIV9_BzknD3.png?format=pjpg&auto=webp&s=b4e644ec39b28b67844c49b1867133d12e795575', 'width': 1280}, 'variants': {}}]}
Help! Qwen 3 14B running too slow(1-2tk/s) on my laptop.
1
[removed]
2025-05-03T07:22:24
https://www.reddit.com/r/LocalLLaMA/comments/1kdn8is/help_qwen_3_14b_running_too_slow12tks_on_my_laptop/
twistywackiness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdn8is
false
null
t3_1kdn8is
/r/LocalLLaMA/comments/1kdn8is/help_qwen_3_14b_running_too_slow12tks_on_my_laptop/
false
false
self
1
null
How can i create a 24/7 coding agent?
1
[removed]
2025-05-03T07:22:38
https://www.reddit.com/r/LocalLLaMA/comments/1kdn8n7/how_can_i_create_a_247_coding_agent/
gdox200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdn8n7
false
null
t3_1kdn8n7
/r/LocalLLaMA/comments/1kdn8n7/how_can_i_create_a_247_coding_agent/
false
false
self
1
null
LLM with large context
0
What are some of your favorite LLMs to run locally with big context figures? Do we think its ever possible to hit 1M context locally in the next year or so?
2025-05-03T07:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1kdnbhj/llm_with_large_context/
CookieInstance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdnbhj
false
null
t3_1kdnbhj
/r/LocalLLaMA/comments/1kdnbhj/llm_with_large_context/
false
false
self
0
null
32B Q4 or 14B Q8 for coding
1
[removed]
2025-05-03T07:29:25
https://www.reddit.com/r/LocalLLaMA/comments/1kdnc2z/32b_q4_or_14b_q8_for_coding/
Over_Personality8171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdnc2z
false
null
t3_1kdnc2z
/r/LocalLLaMA/comments/1kdnc2z/32b_q4_or_14b_q8_for_coding/
false
false
self
1
null
Terminal agentic coders is not so useful
1
There are a lot of IDE based agentic coders like cursor, windsurf, (vscode+roocode/cline), which gives better interface. What is the use of terminal coder like codex from openai, claude code from anthropic ?
2025-05-03T07:37:34
https://www.reddit.com/r/LocalLLaMA/comments/1kdngak/terminal_agentic_coders_is_not_so_useful/
NovelNo2600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdngak
false
null
t3_1kdngak
/r/LocalLLaMA/comments/1kdngak/terminal_agentic_coders_is_not_so_useful/
false
false
self
1
null
OpenAI charged on my credit card without my permission. I hate them.
0
I know it is not quite related to LocalLLaMA, but upset about it & want to tell a warning to who use OpenAI API. I was using OpenAI API with prepaid balance. I never allowed automatic recharge, but they just charged unwanted amount $68 on my credit card without my consent. My colleague used batch API without cost estimation. It was stopped in the middle due to low balance (which is ok). But, it resulted in -$68 (which is not ok). I was surprised - how it is possible?. I never agreed to pay beyond my prepaid amount. I assumed it's their fault, so I ignored the negative balance & forgot. Two months later, today, they suddenly charged the minus balance on my credit card, without any notice or permission. I don't know how it is possible. I feel how bad they are. This isn’t the first time OpenAI made me upset. I was using OpenAI API a lot until last year. They suddenly expired my balance to $0. Since then, I only put small amount like few tens. Sigh, topping small amount is not safe too, they charge on the saved credit card without permission. Perhaps I will never pay OpenAI again. I don't expect them to be nice, but they shouldn't be bad as a business. I feel they are greedy. Already, not using OpenAI at all. I tried DeepSeek API, costed $2 for the same job. Also, using local DeepSeek, and other good open models. Wish we get even better true-open models.
2025-05-03T07:52:13
https://www.reddit.com/r/LocalLLaMA/comments/1kdnnms/openai_charged_on_my_credit_card_without_my/
smflx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdnnms
false
null
t3_1kdnnms
/r/LocalLLaMA/comments/1kdnnms/openai_charged_on_my_credit_card_without_my/
false
false
self
0
null
LLM running on VR headset
1
[removed]
2025-05-03T08:17:35
https://i.redd.it/q5fzh3x10jye1.jpeg
Extension_Plastic669
i.redd.it
1970-01-01T00:00:00
0
{}
1kdo0jw
false
null
t3_1kdo0jw
/r/LocalLLaMA/comments/1kdo0jw/llm_running_on_vr_headset/
false
false
https://a.thumbs.redditm…gWHrbnQ4X9d4.jpg
1
{'enabled': True, 'images': [{'id': 'C_ZnUAcwt7gYYZBAs_47STm1BNsP5ntYshNjCThtaW4', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=108&crop=smart&auto=webp&s=7fd99803516745e45a3a48504dfbb32ee59b28ae', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=216&crop=smart&auto=webp&s=b66823d6176a6720e3e89766eb2458c08bae95c9', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=320&crop=smart&auto=webp&s=75ae46d021bc636aac8cccc8899473b5f0fc57bc', 'width': 320}, {'height': 485, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=640&crop=smart&auto=webp&s=9f2fa3647f15f485748268ebc2a7a461bd5e5e3f', 'width': 640}, {'height': 727, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=960&crop=smart&auto=webp&s=554083f98ba4f7523314f5ca129b5bf67cf8d910', 'width': 960}, {'height': 818, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?width=1080&crop=smart&auto=webp&s=7d24e4081b6b484d4dc615916484bfecf3928567', 'width': 1080}], 'source': {'height': 848, 'url': 'https://preview.redd.it/q5fzh3x10jye1.jpeg?auto=webp&s=59e343914427bcb8f974a5263184fa10bd634943', 'width': 1119}, 'variants': {}}]}
Hardware requirements for qwen3-30b-a3b? (At different quantizations)
6
Looking into a Local LLM for LLM related dev work (mostly RAG and MCP related). Anyone has any benchmarks for inference speed of qwen3-30b-a3b at Q4, Q8 and BF16 on different hardware? Currently have a single Nvidia RTX 4090, but am open to buying more 3090s or 4090s to run this at good speeds.
2025-05-03T08:26:05
https://www.reddit.com/r/LocalLLaMA/comments/1kdo4tf/hardware_requirements_for_qwen330ba3b_at/
AnEsportsFan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdo4tf
false
null
t3_1kdo4tf
/r/LocalLLaMA/comments/1kdo4tf/hardware_requirements_for_qwen330ba3b_at/
false
false
self
6
null
Launching qomplement: the first OS native AI agent
1
[removed]
2025-05-03T08:33:43
https://www.reddit.com/r/LocalLLaMA/comments/1kdo8sb/launching_qomplement_the_first_os_native_ai_agent/
kerimtaray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdo8sb
false
null
t3_1kdo8sb
/r/LocalLLaMA/comments/1kdo8sb/launching_qomplement_the_first_os_native_ai_agent/
false
false
self
1
null
Launching qomplement: the first OS native AI agent
1
[removed]
2025-05-03T08:38:03
https://www.reddit.com/r/LocalLLaMA/comments/1kdoazj/launching_qomplement_the_first_os_native_ai_agent/
Sad_Ad4916
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdoazj
false
null
t3_1kdoazj
/r/LocalLLaMA/comments/1kdoazj/launching_qomplement_the_first_os_native_ai_agent/
false
false
self
1
null
Qwen3 PAD token for IFT
1
[removed]
2025-05-03T08:39:49
https://www.reddit.com/r/LocalLLaMA/comments/1kdobvg/qwen3_pad_token_for_ift/
AdInevitable3609
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdobvg
false
null
t3_1kdobvg
/r/LocalLLaMA/comments/1kdobvg/qwen3_pad_token_for_ift/
false
false
self
1
null
Launching qomplement: the first OS native AI agent
0
qomplement ships today. It’s a native agent that learns complete GUI workflows from demonstration data, so you can ask for something open-ended—“Plan a weekend trip to SF, grab the cheapest round-trip and some cool tours”—and it handles vision, long-horizon reasoning, memory and UI control in one shot. There’s no prompt-tuning grind and no brittle script chain; each execution refines the model, so it keeps working even when the interface changes. Instead of relying on predefined rules or manual orchestration, qomplement is trained end-to-end on full interaction traces that pair what the user sees with what the agent does, letting it generalise across apps. That removes the maintenance overhead and fragility that plague classic RPA stacks and most current “agent frameworks.” One model books flights, edits slides, reconciles spreadsheets, then gets smarter after every run. [qomplement.com](http://qomplement.com)
2025-05-03T08:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1kdoc32/launching_qomplement_the_first_os_native_ai_agent/
kerimtaray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdoc32
false
null
t3_1kdoc32
/r/LocalLLaMA/comments/1kdoc32/launching_qomplement_the_first_os_native_ai_agent/
false
false
self
0
null
UI-Tars-1.5 reasoning never fails to entertain me.
1
2025-05-03T08:49:02
https://i.redd.it/dzf9ejur5jye1.png
Successful_Bowl2564
i.redd.it
1970-01-01T00:00:00
0
{}
1kdogd2
false
null
t3_1kdogd2
/r/LocalLLaMA/comments/1kdogd2/uitars15_reasoning_never_fails_to_entertain_me/
false
false
https://external-preview…15e2d59e2e5d251b
1
{'enabled': True, 'images': [{'id': 'n0VJIBD7SMNgIPSjeUyHXUF8Hhz6aa8V8SIwlGAvAxo', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/dzf9ejur5jye1.png?width=108&crop=smart&auto=webp&s=faebe52d0df13225122ad95866d982b51b0cd685', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/dzf9ejur5jye1.png?width=216&crop=smart&auto=webp&s=e93f0aa115b913ab8446a6e3a596a02218af46bb', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/dzf9ejur5jye1.png?width=320&crop=smart&auto=webp&s=af3fe1139375101f64e2c8e3ffa4d9fcde0ce0a6', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/dzf9ejur5jye1.png?width=640&crop=smart&auto=webp&s=3817772ca8637cc531c15742f5053287092e6a82', 'width': 640}], 'source': {'height': 466, 'url': 'https://preview.redd.it/dzf9ejur5jye1.png?auto=webp&s=d5e71b92342544b8eb06c6884d95f45e24462bc8', 'width': 729}, 'variants': {}}]}
Looking for a Fast and Emotionally Expressive Open-Source TTS Model
1
[removed]
2025-05-03T09:04:52
https://www.reddit.com/r/LocalLLaMA/comments/1kdoob5/looking_for_a_fast_and_emotionally_expressive/
ConnectPea8944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdoob5
false
null
t3_1kdoob5
/r/LocalLLaMA/comments/1kdoob5/looking_for_a_fast_and_emotionally_expressive/
false
false
self
1
null
How to offload all moe layers to cpu in oogabooga?
1
[removed]
2025-05-03T09:05:53
https://www.reddit.com/r/LocalLLaMA/comments/1kdoou0/how_to_offload_all_moe_layers_to_cpu_in_oogabooga/
Slow_Comfort_2510
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdoou0
false
null
t3_1kdoou0
/r/LocalLLaMA/comments/1kdoou0/how_to_offload_all_moe_layers_to_cpu_in_oogabooga/
false
false
self
1
null
Looking for a Fast and Emotionally Expressive Open-Source TTS Model
1
[removed]
2025-05-03T09:06:49
https://www.reddit.com/r/LocalLLaMA/comments/1kdopay/looking_for_a_fast_and_emotionally_expressive/
ConnectPea8944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdopay
false
null
t3_1kdopay
/r/LocalLLaMA/comments/1kdopay/looking_for_a_fast_and_emotionally_expressive/
false
false
self
1
null
A new Study Claims LMArena Allows Big AI Labs to Game the Leaderboard
1
[removed]
2025-05-03T09:10:18
[deleted]
1970-01-01T00:00:00
0
{}
1kdor01
false
null
t3_1kdor01
/r/LocalLLaMA/comments/1kdor01/a_new_study_claims_lmarena_allows_big_ai_labs_to/
false
false
default
1
null
New Study Claims Chatbot Arena Allows Big AI Labs to Game the Benchmark
1
[removed]
2025-05-03T09:14:20
[deleted]
1970-01-01T00:00:00
0
{}
1kdosy1
false
null
t3_1kdosy1
/r/LocalLLaMA/comments/1kdosy1/new_study_claims_chatbot_arena_allows_big_ai_labs/
false
false
default
1
null
New Study Claims Leaderboard Allows the Big AI Labs to Game the Benchmark
1
[removed]
2025-05-03T09:15:07
[deleted]
1970-01-01T00:00:00
0
{}
1kdotbg
false
null
t3_1kdotbg
/r/LocalLLaMA/comments/1kdotbg/new_study_claims_leaderboard_allows_the_big_ai/
false
false
default
1
null
New Study Claims Chatbot Arena Allows Big AI Labs to Game the Leaderboard!
1
[removed]
2025-05-03T09:17:35
https://www.reddit.com/r/LocalLLaMA/comments/1kdoujw/new_study_claims_chatbot_arena_allows_big_ai_labs/
king_malebolgia_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdoujw
false
null
t3_1kdoujw
/r/LocalLLaMA/comments/1kdoujw/new_study_claims_chatbot_arena_allows_big_ai_labs/
false
false
self
1
null
New Study Shows The Popular Benchmark Allows Big AI Labs to Game the Rankings!
1
[removed]
2025-05-03T09:20:56
https://www.reddit.com/r/LocalLLaMA/comments/1kdow6i/new_study_shows_the_popular_benchmark_allows_big/
seytandiablo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdow6i
false
null
t3_1kdow6i
/r/LocalLLaMA/comments/1kdow6i/new_study_shows_the_popular_benchmark_allows_big/
false
false
self
1
null
AI Fossil Record
1
2025-05-03T09:20:58
https://i.redd.it/buvmkktjbjye1.png
CommodoreCarbonate
i.redd.it
1970-01-01T00:00:00
0
{}
1kdow6y
false
null
t3_1kdow6y
/r/LocalLLaMA/comments/1kdow6y/ai_fossil_record/
false
false
https://a.thumbs.redditm…NHHpw1rIvUz4.jpg
1
{'enabled': True, 'images': [{'id': 'oIwNzf6TvULeVmp2TH1tBy23Q21NzNKWzh9Bv5cnNw0', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/buvmkktjbjye1.png?width=108&crop=smart&auto=webp&s=74a724dd7d52e79d11cb5b22d10d7d04b6807e7f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/buvmkktjbjye1.png?width=216&crop=smart&auto=webp&s=5ed186faf638f07cc525151a5cf27adef62caee2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/buvmkktjbjye1.png?width=320&crop=smart&auto=webp&s=0a7daeb0c3a49be08aefb150b3f1110995fd9bd7', 'width': 320}], 'source': {'height': 1192, 'url': 'https://preview.redd.it/buvmkktjbjye1.png?auto=webp&s=a206facdb04d6dc9bb805481e684af7c64096373', 'width': 500}, 'variants': {}}]}
Recommended models for focus on dialogue?
2
I'm looking for a model that focus on dialogue, and not so much on creating stories. It is going to be used to feed bots inside a WoW private server, so generating thoughts, meta-comments, etc... is not needed. If the training model used data or models that contain information about WoW, even better. They know in which area they are, which class, level... and have their character cards generated that can be modified, so the models needs to also understand context and prompts properly.
2025-05-03T09:48:19
https://www.reddit.com/r/LocalLLaMA/comments/1kdp9mp/recommended_models_for_focus_on_dialogue/
Chimpampin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdp9mp
false
null
t3_1kdp9mp
/r/LocalLLaMA/comments/1kdp9mp/recommended_models_for_focus_on_dialogue/
false
false
self
2
null
Mistral-Small-3.1-24B-Instruct-2503 <32b UGI scores
86
It's been there for some time and I wonder why is nobody talking about it. I mean, from the handful of models that have a higher UGI score, all of them have lower natint and coding scores. Looks to me like an ideal choice for uncensored single-gpu inference? Plus, it supports tool usage. Am I missing something? :)
2025-05-03T10:08:00
https://i.redd.it/brtajevzjjye1.jpeg
Hujkis9
i.redd.it
1970-01-01T00:00:00
0
{}
1kdpjuz
false
null
t3_1kdpjuz
/r/LocalLLaMA/comments/1kdpjuz/mistralsmall3124binstruct2503_32b_ugi_scores/
false
false
https://external-preview…cdca6baf4604dc79
86
{'enabled': True, 'images': [{'id': 'nC9RDlm7-UWHYo-E8-SegylazGfP3n9SGg-q9QxgMFQ', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=108&crop=smart&auto=webp&s=6ff4b54dcb6381a303185a634d4e32fd0b3101a4', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=216&crop=smart&auto=webp&s=40286fdc9fed6c39b3bb728d517cbd6b3b97980e', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=320&crop=smart&auto=webp&s=25fbf7185b36a8ae7a9910aff7c06f96c67d6a48', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=640&crop=smart&auto=webp&s=dc88e9798928b0e6c47dd1e936f8e2ecad4c4d28', 'width': 640}, {'height': 646, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=960&crop=smart&auto=webp&s=52808984180cacc8ab21ff20dfdc028541b9b927', 'width': 960}, {'height': 727, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?width=1080&crop=smart&auto=webp&s=1279287f8e01f3916ce542af7d1db05cdbecc0d2', 'width': 1080}], 'source': {'height': 1487, 'url': 'https://preview.redd.it/brtajevzjjye1.jpeg?auto=webp&s=39f24627781b09dd66b47e1f73a268539fe70eb1', 'width': 2208}, 'variants': {}}]}
aider polyglot - individual language results
8
the polyglot benchmarks give a combined result over different languages. is there published anywhere a breakdown of these by language. the reason is if i'm looking for a model to work on a particular language, i want to see which is the best for that specific language.
2025-05-03T10:10:53
https://www.reddit.com/r/LocalLLaMA/comments/1kdplei/aider_polyglot_individual_language_results/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdplei
false
null
t3_1kdplei
/r/LocalLLaMA/comments/1kdplei/aider_polyglot_individual_language_results/
false
false
self
8
null
I trained a Language Model to schedule events with GRPO! (full project inside)
71
I experimented with GRPO lately. I am fascinated by models learning from prompts and rewards - no example answers needed like in Supervised Fine-Tuning. After the DeepSeek boom, everyone is trying GRPO with GSM8K or the Countdown Game... I wanted a different challenge, like ***teaching a model to create a schedule from a list of events and priorities***. Choosing an original problem forced me to: 🤔 Think about the problem setting 🧬 Generate data 🤏 Choose the right base model 🏆 Design reward functions 🔄 Run multiple rounds of training, hoping that my model would learn something. A fun and rewarding 😄 experience. I learned a lot of things, that I want to share with you. 👇 ✍️ Blog post: [https://huggingface.co/blog/anakin87/qwen-scheduler-grpo](https://huggingface.co/blog/anakin87/qwen-scheduler-grpo) 💻 Code: [https://github.com/anakin87/qwen-scheduler-grpo](https://github.com/anakin87/qwen-scheduler-grpo) 🤗 Hugging Face collection (dataset and model): [https://huggingface.co/collections/anakin87/qwen-scheduler-grpo-680bcc583e817390525a8837](https://huggingface.co/collections/anakin87/qwen-scheduler-grpo-680bcc583e817390525a8837) 🔥 Some hot takes from my experiment: * GRPO is cool for verifiable tasks, but is more about eliciting desired behaviors from the trained model than teaching completely new stuff to it. * Choosing the right base model (and size) matters. * "Aha moment" might be over-hyped. * Reward functions design is crucial. If your rewards are not robust, you might experience reward hacking (as it happened to me). * Unsloth is great for saving GPU, but beware of bugs.
2025-05-03T10:34:41
https://i.redd.it/zjcglrbjnjye1.gif
anakin_87
i.redd.it
1970-01-01T00:00:00
0
{}
1kdpy20
false
null
t3_1kdpy20
/r/LocalLLaMA/comments/1kdpy20/i_trained_a_language_model_to_schedule_events/
false
false
https://b.thumbs.redditm…merVqmUilqyw.jpg
71
{'enabled': True, 'images': [{'id': 'IF2FGv6IR4iIlZKdXfFeFDEDJApHROZ3mEcgXzau3DQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=108&crop=smart&format=png8&s=1466693cc132b37797d64e43f0eee1f94d898213', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=216&crop=smart&format=png8&s=4d90bf92aaaefa032e22a9f7ae81f0e665255956', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=320&crop=smart&format=png8&s=db8a45c9da249466d73de5e1c9311243e23b2696', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=640&crop=smart&format=png8&s=512669c529077fc02be65ca96159bd580228d920', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=960&crop=smart&format=png8&s=23b1bf3b5c709860d1daf5f19c0be8caf90bda81', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?format=png8&s=97cee5c128f304cf7778c55a22420d248b8f0d88', 'width': 1024}, 'variants': {'gif': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=108&crop=smart&s=98bbcd6f3c284a18fad60dd93d81ba163aaa9490', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=216&crop=smart&s=e1b5cb324399d532178e6cbd251a7b6a9275ff07', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=320&crop=smart&s=3f94c53f4feddaae20a4ca04d6052fc6e51bba42', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=640&crop=smart&s=365257273a6c8ebb24c3d9d5ca79c52df9d4a9e2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=960&crop=smart&s=564faf2370664873470c90f89eac5ab652f29b59', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?s=1a1b8bbbbcd5d6cd03443514ace0d54c19438413', 'width': 1024}}, 'mp4': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=108&format=mp4&s=253e7e3669586544d9f12098e8bceedacf26dbec', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=216&format=mp4&s=c1f84b338a9940c1d299b992906372ca708227c2', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=320&format=mp4&s=0355be4b74515ff9ca9298409260ae076459ccb6', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=640&format=mp4&s=1121de1c885b23257308d60651a82ae2f0ddf47e', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?width=960&format=mp4&s=4a51c08f8f9e3ae619a2c86422cbe2e28e86f267', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/zjcglrbjnjye1.gif?format=mp4&s=4fc260a43886692aea9684cf81b41bbdc8b1e1b1', 'width': 1024}}}}]}
Curious about the JOSIEFIED versions of models on Ollama—are they safe?
1
[removed]
2025-05-03T10:39:11
https://www.reddit.com/r/LocalLLaMA/comments/1kdq0k5/curious_about_the_josiefied_versions_of_models_on/
KrazyHomosapien
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdq0k5
false
null
t3_1kdq0k5
/r/LocalLLaMA/comments/1kdq0k5/curious_about_the_josiefied_versions_of_models_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Why Llama3.2 asking somebody for help?
1
[removed]
2025-05-03T10:43:05
https://www.reddit.com/r/LocalLLaMA/comments/1kdq2rh/why_llama32_asking_somebody_for_help/
Loud_Importance_8023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdq2rh
false
null
t3_1kdq2rh
/r/LocalLLaMA/comments/1kdq2rh/why_llama32_asking_somebody_for_help/
false
false
self
1
null
Website for LLM Pricing
1
[removed]
2025-05-03T10:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1kdq5ex/website_for_llm_pricing/
Reception_Super
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdq5ex
false
null
t3_1kdq5ex
/r/LocalLLaMA/comments/1kdq5ex/website_for_llm_pricing/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HUR4ZjSsMcPldBF8PlxclI3gg-mjZXBfe4bNavSwrFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=108&crop=smart&auto=webp&s=4c05659da71aabefa650df1fddb91bdf8888031d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=216&crop=smart&auto=webp&s=490f434fbbbf0f74a171e943297e61758633f730', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=320&crop=smart&auto=webp&s=6f57ef706f7fd8fd0484669113c189fba8da9198', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=640&crop=smart&auto=webp&s=5d72bc65c67e8fa81fbd23e548bba69e1a0bb3e8', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=960&crop=smart&auto=webp&s=d13c9867058e25865b57356a8f76e4c2df202a84', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=1080&crop=smart&auto=webp&s=84f7f12718fed77976904df46b50b7aeb1a2af03', 'width': 1080}], 'source': {'height': 629, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?auto=webp&s=0e18f26214a09b566dc3bc4bcdd70b1cf41d959a', 'width': 1200}, 'variants': {}}]}
What is the best local LLM that can produce NSFW stories? (including incest)
1
[removed]
2025-05-03T10:51:26
https://www.reddit.com/r/LocalLLaMA/comments/1kdq79w/what_is_the_best_local_llm_that_can_produce_nsfw/
Majormanager2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdq79w
false
null
t3_1kdq79w
/r/LocalLLaMA/comments/1kdq79w/what_is_the_best_local_llm_that_can_produce_nsfw/
false
false
nsfw
1
null
Teaching LLMs to use tools with RL! Successfully trained 0.5B/3B Qwen models to use a calculator tool 🔨
130
**👋 I recently had great fun training small language models (Qwen2.5 0.5B & 3B) to use a slightly complex calculator syntax through multi-turn reinforcement learning. Results were pretty cool: the 3B model went from 27% to 89% accuracy!** **What I did:** * Built a custom environment where model's output can be parsed & calculated * Used Claude-3.5-Haiku as a reward model judge + software verifier * Applied GRPO for training * Total cost: \~$40 (\~£30) on rented GPUs **Key results:** * Qwen 0.5B: 0.6% → 34% accuracy (+33 points) * Qwen 3B: 27% → 89% accuracy (+62 points) **Technical details:** * The model parses nested operations like: "What's the sum of 987 times 654, and 987 divided by the total of 321 and 11?" * Uses XML/YAML format to structure calculator calls * Rewards combine LLM judging + code verification * 1 epoch training with 8 samples per prompt My [Github repo](https://github.com/Danau5tin/calculator_agent_rl) has way more technical details if you're interested! **Models are now on HuggingFace:** * [Qwen 2.5 0.5B Calculator Agent](https://huggingface.co/Dan-AiTuning/calculator_agent_qwen2.5_0.5b) * [Qwen 2.5 3B Calculator Agent](https://huggingface.co/Dan-AiTuning/calculator_agent_qwen2.5_3b) Thought I'd share because I believe the future may tend toward multi-turn RL with tool use agentic LLMs at the center. (Built using the [Verifiers](https://github.com/willccbb/verifiers) RL framework - It is a fantastic repo! Although not quite ready for prime time, it was extremely valuable)
2025-05-03T11:00:49
https://www.reddit.com/gallery/1kdqcjk
DanAiTuning
reddit.com
1970-01-01T00:00:00
0
{}
1kdqcjk
false
null
t3_1kdqcjk
/r/LocalLLaMA/comments/1kdqcjk/teaching_llms_to_use_tools_with_rl_successfully/
false
false
https://external-preview…6fddee3137a435f3
130
{'enabled': True, 'images': [{'id': 'p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=108&crop=smart&auto=webp&s=5dcfbc4addf367b21f6037c959c0992a64baf2a4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=216&crop=smart&auto=webp&s=8fe36212c2f5ceee0e40cd687b393b84c310ddde', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=320&crop=smart&auto=webp&s=f66573791ec5020010dd47bd32e8976263d5961e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=640&crop=smart&auto=webp&s=f79f79565267c758b305dee548b6ca5439f9e39a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=960&crop=smart&auto=webp&s=269e3251eb6c6f3cefce551085e6aed8eeba5c29', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?width=1080&crop=smart&auto=webp&s=8e1be6124219b4d743a700969b2f6e8ba4bdd53e', 'width': 1080}], 'source': {'height': 2656, 'url': 'https://external-preview.redd.it/p1q0NLBPvDKer4wLl6MMJh8XzyT5jyGdKYucra-ZJAU.png?auto=webp&s=e5dbe7d24cc207fe3419917f5896cf4a03072299', 'width': 5056}, 'variants': {}}]}
Are TTS models with voice cloning features inherently slower?
1
[removed]
2025-05-03T11:05:26
https://www.reddit.com/r/LocalLLaMA/comments/1kdqfbs/are_tts_models_with_voice_cloning_features/
ConnectPea8944
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdqfbs
false
null
t3_1kdqfbs
/r/LocalLLaMA/comments/1kdqfbs/are_tts_models_with_voice_cloning_features/
false
false
self
1
null
What is the best local LLM that can produce NSFW stories? (including incest)
1
[removed]
2025-05-03T11:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1kdqfi1/what_is_the_best_local_llm_that_can_produce_nsfw/
Majormanager2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdqfi1
false
null
t3_1kdqfi1
/r/LocalLLaMA/comments/1kdqfi1/what_is_the_best_local_llm_that_can_produce_nsfw/
false
false
nsfw
1
null
What are you guys currently using as a general use LLM?
1
[removed]
2025-05-03T11:06:05
https://www.reddit.com/r/LocalLLaMA/comments/1kdqfox/what_are_you_guys_currently_using_as_a_general/
1234filip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdqfox
false
null
t3_1kdqfox
/r/LocalLLaMA/comments/1kdqfox/what_are_you_guys_currently_using_as_a_general/
false
false
self
1
null
Qwen3 8b on android (it's not half bad)
105
A while ago, I decided to buy a phone with a Snapdragon 8 Gen 3 SoC. Naturally, I wanted to push it beyond basic tasks and see how well it could handle local LLMs. I set up [ChatterUI](https://github.com/Vali-98/ChatterUI), imported a model, and asked it a question. It took 101 seconds to respond— which is not bad at all, considering the model is typically designed for use on desktop GPUs. --- And that brings me to the following question: what other models around this size (11B or lower) would you guys recommend?, did anybody else try this ? The one I tested seems decent for general Q&A, but it's pretty bad at roleplay. I'd really appreciate any suggestions for roleplay/translation/coding models that can work as efficiently. Thank you!
2025-05-03T11:10:50
https://i.redd.it/54hu64e7vjye1.jpeg
SofeyKujo
i.redd.it
1970-01-01T00:00:00
0
{}
1kdqibi
false
null
t3_1kdqibi
/r/LocalLLaMA/comments/1kdqibi/qwen3_8b_on_android_its_not_half_bad/
false
false
https://external-preview…6919cb94976fd2fb
105
{'enabled': True, 'images': [{'id': 'UFodhob2hQDciMevuWed7YHSf7ljnXFNd4RO86MdovE', 'resolutions': [{'height': 203, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=108&crop=smart&auto=webp&s=29160579dbc1d625a490bd86a4f69354d471c9e1', 'width': 108}, {'height': 407, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=216&crop=smart&auto=webp&s=811bed4e01bd98f048522ff3df732d35a9e1d93d', 'width': 216}, {'height': 603, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=320&crop=smart&auto=webp&s=2beb6e054c2a439c2ba00d85c309c2b2fe0be421', 'width': 320}, {'height': 1207, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=640&crop=smart&auto=webp&s=0c65d6be61f794ea572b3b96b13c76ebaf908d18', 'width': 640}, {'height': 1811, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=960&crop=smart&auto=webp&s=14cd7d1d79a9067856950a3bff1852c5726ed0d6', 'width': 960}, {'height': 2038, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?width=1080&crop=smart&auto=webp&s=af9354ecabcd305337ab650e78f9b018f2e266cf', 'width': 1080}], 'source': {'height': 2378, 'url': 'https://preview.redd.it/54hu64e7vjye1.jpeg?auto=webp&s=2c414b32a89ddd93dc884874850cd22533fd733e', 'width': 1260}, 'variants': {}}]}
Qwen3-235B-A22B (no thinking) Seemingly Outperforms Claude 3.7 with 32k Thinking Tokens in Coding (Aider)
410
Came across this benchmark PR on Aider I did my own benchmarks with aider and had consistent results This is just impressive... PR: [https://github.com/Aider-AI/aider/pull/3908/commits/015384218f9c87d68660079b70c30e0b59ffacf3](https://github.com/Aider-AI/aider/pull/3908/commits/015384218f9c87d68660079b70c30e0b59ffacf3) Comment: [https://github.com/Aider-AI/aider/pull/3908#issuecomment-2841120815](https://github.com/Aider-AI/aider/pull/3908#issuecomment-2841120815)
2025-05-03T11:25:40
https://www.reddit.com/r/LocalLLaMA/comments/1kdqqkp/qwen3235ba22b_no_thinking_seemingly_outperforms/
Greedy_Letterhead155
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdqqkp
false
null
t3_1kdqqkp
/r/LocalLLaMA/comments/1kdqqkp/qwen3235ba22b_no_thinking_seemingly_outperforms/
false
false
self
410
{'enabled': False, 'images': [{'id': 'GfbfVD7-Yoyq_ZANh5Du9YN8bpOMaQiy3IQRRns6u2E', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/KJtSf9dWVwVmDA-yet3p3vtGffURoe5XXJYK41bF5p0.jpg?width=108&crop=smart&auto=webp&s=d64992e34734e9ee15594bbd76fbac785389f51f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/KJtSf9dWVwVmDA-yet3p3vtGffURoe5XXJYK41bF5p0.jpg?width=216&crop=smart&auto=webp&s=0d27d9129e019fd29b4d71723df77478b6151c81', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/KJtSf9dWVwVmDA-yet3p3vtGffURoe5XXJYK41bF5p0.jpg?width=320&crop=smart&auto=webp&s=bbbaee4e79132434d5d3aa422c8712b82cad48fd', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/KJtSf9dWVwVmDA-yet3p3vtGffURoe5XXJYK41bF5p0.jpg?auto=webp&s=4490eccd8004b6a92b9cef4f723cfdb077a6bd17', 'width': 400}, 'variants': {}}]}
What is the best LOCAL, OFFLINE and FREE AI generation tools right now?
0
For images and videos. Thank you.
2025-05-03T11:46:24
https://www.reddit.com/r/LocalLLaMA/comments/1kdr2uw/what_is_the_best_local_offline_and_free_ai/
Anto444_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdr2uw
false
null
t3_1kdr2uw
/r/LocalLLaMA/comments/1kdr2uw/what_is_the_best_local_offline_and_free_ai/
false
false
self
0
null
360GB of VRAM. What model(s) would you serve and why?
1
FP8/Q8 quantization. Open discussion. What models do you choose? Context size? Use case? Number of people using it? What are you using to serve the model?
2025-05-03T11:47:22
https://www.reddit.com/r/LocalLLaMA/comments/1kdr3eu/360gb_of_vram_what_models_would_you_serve_and_why/
ICanSeeYou7867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdr3eu
false
null
t3_1kdr3eu
/r/LocalLLaMA/comments/1kdr3eu/360gb_of_vram_what_models_would_you_serve_and_why/
false
false
self
1
null
Keep dancing #beauti #lifeisbutadream #whowillbemylifepartnerta #fashion...
1
2025-05-03T11:56:43
https://youtube.com/watch?v=Dpv7lga8yNs&si=pLk8xUJ7NfEHhTr8
Ok-Maize-4629
youtube.com
1970-01-01T00:00:00
0
{}
1kdr8zy
false
{'oembed': {'author_name': 'Ada', 'author_url': 'https://www.youtube.com/@Ada-y5f6o', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Dpv7lga8yNs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Keep dancing #beauti #lifeisbutadream #whowillbemylifepartnerta #fashion #dance"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Dpv7lga8yNs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Keep dancing #beauti #lifeisbutadream #whowillbemylifepartnerta #fashion #dance', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kdr8zy
/r/LocalLLaMA/comments/1kdr8zy/keep_dancing_beauti_lifeisbutadream/
false
false
https://b.thumbs.redditm…sswvR973k24g.jpg
1
{'enabled': False, 'images': [{'id': 'r1Pig8lyWC3IOhg_Hph2FI3tPkw0tHqKGmf9iRpiH2U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tEeUmoq4Eu6-SV7nq6XhuKMFvCqLrAqzleCR73cnRS0.jpg?width=108&crop=smart&auto=webp&s=1009c704171c81e0a112b452ab7f0977004e17fd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tEeUmoq4Eu6-SV7nq6XhuKMFvCqLrAqzleCR73cnRS0.jpg?width=216&crop=smart&auto=webp&s=8d289840046f3bd66ff2d01207b3ee7adb5827d6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tEeUmoq4Eu6-SV7nq6XhuKMFvCqLrAqzleCR73cnRS0.jpg?width=320&crop=smart&auto=webp&s=e1237f9808a8a8d1707b83763b3eb97a764b6c41', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tEeUmoq4Eu6-SV7nq6XhuKMFvCqLrAqzleCR73cnRS0.jpg?auto=webp&s=5ca061f37f4bc2bba665a415b7e522bbda1804f4', 'width': 480}, 'variants': {}}]}
Dia-JAX – Run a 1.6B Text-to-Speech Model on TPU with JAX
22
[JAX port of the Dia TTS model](https://github.com/jaco-bro/diajax) from Nari Labs for inference on any machine. ``` pip install diajax==0.0.7 dia --text "Hey, I'm really sorry for getting back to you so late. (cough) But voice cloning is just super easy, it's barely an inconvenience at all. I will show you how." --audio "assets/example_prompt.mp3" ```
2025-05-03T12:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1kdre6g/diajax_run_a_16b_texttospeech_model_on_tpu_with/
Due-Yoghurt2093
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kdre6g
false
null
t3_1kdre6g
/r/LocalLLaMA/comments/1kdre6g/diajax_run_a_16b_texttospeech_model_on_tpu_with/
false
false
self
22
{'enabled': False, 'images': [{'id': 'GWi686FbGem5Ask2G6IvP3QnGq5qQIib9AMBZRELlmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=108&crop=smart&auto=webp&s=0a4486e57719afa6e40dd2078070e94af26c3330', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=216&crop=smart&auto=webp&s=24fa271d5f7c18e13b28312798a74209f109daf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=320&crop=smart&auto=webp&s=210a31d950a0ed3b81e6bf7bddd5258fd37b011b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=640&crop=smart&auto=webp&s=5657a798278a960e06e24cce16d59a76c1cc3eec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=960&crop=smart&auto=webp&s=2d43bbf2c3b3d7b687e972df1b3a960e5bb06cd1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?width=1080&crop=smart&auto=webp&s=22482d82083f3704fef357fe0e1f18cbf0f498d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dnaQdJTlE8agIdU5SVo4tOWOeByImyWRL3CZXimcLHg.jpg?auto=webp&s=575d979066605500bac14ae2711c6cfceb7ad1d5', 'width': 1200}, 'variants': {}}]}