title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-40k) | created (timestamp[ns], 2023-04-01 04:30:41 - 2025-06-30 03:16:29, ⌀) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 - 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646-1.8k, ⌀) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4-213) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to build a Scalable System to Scrape & Enrich 300M LinkedIn Profiles with LLMs
| 1 |
[removed]
| 2025-04-19T18:34:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k331jx/how_to_build_a_scalable_system_to_scrape_enrich/
|
Dreamer_made
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k331jx
| false | null |
t3_1k331jx
|
/r/LocalLLaMA/comments/1k331jx/how_to_build_a_scalable_system_to_scrape_enrich/
| false | false |
self
| 1 | null |
No gradients for projection layer?
| 3 |
I am currently trying to make a custom MLLM with llama 3.2 1B and a BEATs audio encoder.
I'm using Hugging Face and the AutoModelForCausalLM class. I have confirmed that my embeds are set to require grads and are of type torch.float32. I am forced to pass both input_ids and inputs_embeds (this is a requirement of AutoModel, for some reason), and my loss is calculated directly by the model by passing the labels in.
When I check the grads of my projection layer, they are None. The projection layer is arguably the most important part, though! I have searched for many hours and discussed it with Gemini for hours, but to no avail.
My suspicion is that the model does not actually use the inputs_embeds parameter when calculating its internal loss and is relying only on the input IDs, but I'm not sure that makes sense if the embeds are part of the graph and are *actually* used in the model.
I did find a project posted on here with Mistral and Whisper, but I can't copy their code, and I would still like to understand specifically why my architecture cannot pass gradient updates to my projection layer.
Anyone have any tips on this?
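For reference, a minimal sketch (assuming a Hugging Face causal LM and a plain nn.Linear projection; the model name, dimensions, and dummy audio features are placeholders, not the poster's setup) of routing projected audio embeddings through `inputs_embeds` so the projection layer stays in the autograd graph:
```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model name and audio-feature dimension, for illustration only.
base = "meta-llama/Llama-3.2-1B"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float32)

proj = nn.Linear(768, model.config.hidden_size)   # audio-encoder dim -> LLM hidden dim
audio_feats = torch.randn(1, 50, 768)             # stand-in for BEATs output
audio_embeds = proj(audio_feats)                  # must stay in the autograd graph

text = tok("Describe the sound:", return_tensors="pt")
text_embeds = model.get_input_embeddings()(text["input_ids"])

# Pass ONLY inputs_embeds (plus labels); passing input_ids as well makes most
# HF models ignore or reject the embeddings.
inputs_embeds = torch.cat([audio_embeds, text_embeds], dim=1)
labels = torch.full(inputs_embeds.shape[:2], -100)   # -100 = ignored positions
labels[0, -1] = tok.eos_token_id                     # toy target so the loss is nonzero

out = model(inputs_embeds=inputs_embeds, labels=labels)
out.loss.backward()
print(proj.weight.grad is not None)                  # True if gradients reach the projection
```
If `proj.weight.grad` is still `None` in a setup like this, the usual culprits are the embeddings being detached somewhere (`.detach()`, `torch.no_grad()`, or rebuilding the tensor) or the projection's parameters never being handed to the optimizer.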
| 2025-04-19T18:55:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k33il7/no_gradients_for_projection_layer/
|
IsGoIdMoney
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k33il7
| false | null |
t3_1k33il7
|
/r/LocalLLaMA/comments/1k33il7/no_gradients_for_projection_layer/
| false | false |
self
| 3 | null |
Other Ways To Quickly Finetune?
| 18 |
Hello, I want to train Llama 3.2 3B on my dataset with 19k rows. It has already been cleaned; it originally had 2xk rows. But finetuning on the Unsloth free tier takes 9 to 11 hours! My free tier can't last that long, since it only offers 3 hours or so. I'm considering buying compute units, or using Vast or RunPod, but I might as well ask you guys if there's any other way to finetune this faster before I spend money.
I am using Unsloth already.
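For reference, a hedged Unsloth-style LoRA sketch (the model name, data path, and hyperparameters are illustrative, and the exact trainer arguments vary by trl/Unsloth version) showing the knobs that dominate wall-clock time: sequence length, effective batch size, sequence packing, LoRA rank, and number of epochs:
```python
# Hedged sketch of an Unsloth LoRA run; not a recommendation of specific values.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",  # 4-bit base cuts memory and step time
    max_seq_length=1024,        # shorter sequences = faster steps, if your rows fit
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=8, lora_alpha=16,  # smaller rank = fewer trainable parameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder path

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    packing=True,               # packs short rows together -> far fewer optimizer steps
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,     # one epoch over 19k rows is often enough for a first pass
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```
Packing plus a shorter max_seq_length is usually what turns a 10-hour run into something that fits a free Colab window, assuming the rows are short.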
| 2025-04-19T19:01:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k33niu/other_ways_to_quickly_finetune/
|
AccomplishedAir769
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k33niu
| false | null |
t3_1k33niu
|
/r/LocalLLaMA/comments/1k33niu/other_ways_to_quickly_finetune/
| false | false |
self
| 18 | null |
TinyLlama model is engaging in conversations with itself.
| 1 |
[removed]
| 2025-04-19T19:49:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k34p5h/tinyllama_model_is_engaging_in_conversations_with/
|
Aggressive_Toe_3117
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k34p5h
| false | null |
t3_1k34p5h
|
/r/LocalLLaMA/comments/1k34p5h/tinyllama_model_is_engaging_in_conversations_with/
| false | false |
self
| 1 | null |
Which open source Manus like system???
| 3 |
Ok,
So, like: OpenManus versus Pocket Madness versus ANUS versus Computer Use versus AutoMATE?
Thoughts, feelings, ease of use?
| 2025-04-19T20:24:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k35gfi/which_open_source_manus_like_system/
|
AdLongjumping192
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k35gfi
| false | null |
t3_1k35gfi
|
/r/LocalLLaMA/comments/1k35gfi/which_open_source_manus_like_system/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': '9QL6w9tpSwnoa12bXkBVLulOKLu8jKdMrzP8WOVjJZE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=108&crop=smart&auto=webp&s=95fc81c070e98965d686c6703c1c0c4f394b606c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=216&crop=smart&auto=webp&s=7f926df080938261e977d5f374025ff8f2743eb9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=320&crop=smart&auto=webp&s=5262dc873160b687f8473344230599095879730b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=640&crop=smart&auto=webp&s=16b21b0a00e45cde6826a86840f9a13cea2795d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=960&crop=smart&auto=webp&s=3b1a58909f524fa0646ee14fbf369ac4f6a74546', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?width=1080&crop=smart&auto=webp&s=3048b06c306a06c6f71d761c118cf4ba2fbe82b7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VD-P7EUqrzLwXb3_gU865NBrUhqtJmYeXtcB8vki1iU.jpg?auto=webp&s=2e69ca8d59da8734b680aa65b182ded6b34e873a', 'width': 1200}, 'variants': {}}]}
|
Fine-tuning LLMs to 1.58bit: extreme quantization experiment
| 75 |
[https://github.com/huggingface/blog/blob/main/1_58_llm_extreme_quantization.md](https://github.com/huggingface/blog/blob/main/1_58_llm_extreme_quantization.md)
[https://huggingface.co/blog/1_58_llm_extreme_quantization](https://huggingface.co/blog/1_58_llm_extreme_quantization)
| 2025-04-19T20:29:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k35kh5/finetuning_llms_to_158bit_extreme_quantization/
|
shing3232
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k35kh5
| false | null |
t3_1k35kh5
|
/r/LocalLLaMA/comments/1k35kh5/finetuning_llms_to_158bit_extreme_quantization/
| false | false |
self
| 75 |
{'enabled': False, 'images': [{'id': 'q2yadIqlBw8OXSeb-v6LRgaj5wDggjPrzfBeQr6V7jQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=108&crop=smart&auto=webp&s=13bdff9943b1f17c52ed7fd4dbac5e63f614c8a9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=216&crop=smart&auto=webp&s=fceee3bbaf08ad9eac66872d62e17730f3a40011', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=320&crop=smart&auto=webp&s=6450de659945b3c20b879479dca833bf2e70994d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=640&crop=smart&auto=webp&s=a27af32598445bc9ea897f8b81b1d8396a91f3f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=960&crop=smart&auto=webp&s=1843046c121c5ac4b56beb43a85b0478ec57fbe1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?width=1080&crop=smart&auto=webp&s=dccfc9b462e0152b7b49d853e56acf38e4eb639e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bo1Gddc2yF-R5c_0gSlNRp5MCEYLqEXQ19saTA2zPr4.jpg?auto=webp&s=da3aa5ab1fc4a926ae639288b378c5840768c00a', 'width': 1200}, 'variants': {}}]}
|
FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively. (Local video gen model)
| 164 | 2025-04-19T20:35:17 |
https://lllyasviel.github.io/frame_pack_gitpage/
|
InsideYork
|
lllyasviel.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1k35orj
| false | null |
t3_1k35orj
|
/r/LocalLLaMA/comments/1k35orj/framepack_is_a_nextframe_nextframesection/
| false | false |
default
| 164 | null |
|
Lite weight No limit LLM
| 0 |
So I have 16 GB of RAM on my PC. What would be the best lightweight, unrestricted LLM?
| 2025-04-19T20:57:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k365r9/lite_weight_no_limit_llm/
|
Grouchy-Tailor-2556
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k365r9
| false | null |
t3_1k365r9
|
/r/LocalLLaMA/comments/1k365r9/lite_weight_no_limit_llm/
| false | false |
self
| 0 | null |
gemma3:4b performance on 5900HX (no discrete GPU) 16gb RAM vs rpi 4b 8gb RAM vs 3070ti.
| 6 |
Hello,
I am trying to set up gemma3:4b on a Ryzen 5900HX VM (the VM is set up with all 16 threads/cores) and 16 GB of RAM. Without a GPU it performs OCR on an image in around 9 minutes. I was surprised to see that it took around 11 minutes on an RPi 4B. I know CPUs are really slow compared to GPUs for LLMs (my RTX 3070 Ti laptop responds in 3-4 seconds), but a 5900HX is no slouch compared to an RPi. I am wondering why they both take almost the same time. Do you think I am missing any configuration?
btop on the VM host shows 100% CPU usage on all 16 threads. It's the same for the RPi.
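One thing worth ruling out before blaming the CPU: which quant was actually pulled, and how many threads Ollama is using. A quick timing check through the Ollama API (the image path is a placeholder; `num_thread` and `num_ctx` are standard Ollama request options, with 8 matching the 5900HX's physical cores):
```python
import base64
import time

import requests

# Placeholder test image; adjust to your own document.
img_b64 = base64.b64encode(open("page.png", "rb").read()).decode()

payload = {
    "model": "gemma3:4b",
    "prompt": "Transcribe all text in this image.",
    "images": [img_b64],
    "stream": False,
    # Try matching num_thread to the 8 physical cores rather than all 16 SMT
    # threads, and keep the context modest for a single-page OCR job.
    "options": {"num_thread": 8, "num_ctx": 4096},
}

t0 = time.time()
r = requests.post("http://localhost:11434/api/generate", json=payload, timeout=3600)
print(f"elapsed: {time.time() - t0:.1f}s")
print(r.json()["response"][:500])
```
It is also worth checking what quantization `ollama show gemma3:4b` reports; a heavier quant that spills into swap would crawl on both machines.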
| 2025-04-19T21:10:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k36g4c/gemma34b_performance_on_5900hx_no_discrete_gpu/
|
fynadvyce
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k36g4c
| false | null |
t3_1k36g4c
|
/r/LocalLLaMA/comments/1k36g4c/gemma34b_performance_on_5900hx_no_discrete_gpu/
| false | false |
self
| 6 | null |
What's the current state of federated learning for large language models?
| 12 |
Hi everyone,
I'm curious about the current progress in using federated learning with large language models (LLMs). The idea of training or fine-tuning these models across multiple devices or users, without sharing raw data, sounds really promising — especially for privacy and personalization.
But I haven’t seen much recent discussion about this. Is this approach actually being used in practice? Are there any real-world examples or open-source projects doing this effectively?
| 2025-04-19T21:11:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k36gi8/whats_the_current_state_of_federated_learning_for/
|
dai_app
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k36gi8
| false | null |
t3_1k36gi8
|
/r/LocalLLaMA/comments/1k36gi8/whats_the_current_state_of_federated_learning_for/
| false | false |
self
| 12 | null |
The easiest way to perform accurate, multimodal RAG in Python
| 1 |
[removed]
| 2025-04-19T22:00:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k37hqy/the_easiest_way_to_perform_accurate_multimodal/
|
Advanced_Army4706
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k37hqy
| false | null |
t3_1k37hqy
|
/r/LocalLLaMA/comments/1k37hqy/the_easiest_way_to_perform_accurate_multimodal/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '7jsFyjjrAxz9Rx41rItRhRNWHf7ea7Ld9R_sOH6oSU4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=108&crop=smart&auto=webp&s=449bb939ababe26ac6127579bffb185322f18383', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=216&crop=smart&auto=webp&s=c994e564eaa05e024e0bb5a7bfaaef965a6a5775', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=320&crop=smart&auto=webp&s=a577e303f7f3be79bd548ed5c707aaba2a083af3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=640&crop=smart&auto=webp&s=2f0cc97bd18de6f09b0eb97ba658e26187ddeb81', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=960&crop=smart&auto=webp&s=788f785c523c2b9fd4149a02fc01036bbcd86d5c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?width=1080&crop=smart&auto=webp&s=59f801c45105d2487754907d860597693fe57106', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nFDTJYaiHkqPyeW92AMrFe_BKnbWWtuPSdblGM9x1S8.jpg?auto=webp&s=00be55a0ea4c5904ca45f456c96c9ba00dbe6bb3', 'width': 1200}, 'variants': {}}]}
|
I built a Local MCP Server to enable Computer-Use Agent to run through Claude Desktop, Cursor, and other MCP clients.
| 36 |
Example using Claude Desktop and Tableau
| 2025-04-20T00:12:05 |
https://v.redd.it/brx0wxmnlvve1
|
sandropuppo
|
/r/LocalLLaMA/comments/1k3a3kl/i_built_a_local_mcp_server_to_enable_computeruse/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3a3kl
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/brx0wxmnlvve1/DASHPlaylist.mpd?a=1747829540%2CYTFhNzYwMmQ1ZmFlYzQ1MWE2YWMwMGZlYjA4ZWFjN2VhYTU2OWZhMDcwNzU0ZDgwYTYwZTk3Mjg3NzE5NGEzMw%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/brx0wxmnlvve1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 986, 'hls_url': 'https://v.redd.it/brx0wxmnlvve1/HLSPlaylist.m3u8?a=1747829540%2CMTU1ZGQ1MDQ2OWY5NmQxODQxNTg2ZGQ4NTBkZDU2NmNhODJkNTIxMzJlODVlZDU3NDFkZWNmYjg2MjQ5YjBiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/brx0wxmnlvve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k3a3kl
|
/r/LocalLLaMA/comments/1k3a3kl/i_built_a_local_mcp_server_to_enable_computeruse/
| false | false | 36 |
{'enabled': False, 'images': [{'id': 'M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=108&crop=smart&format=pjpg&auto=webp&s=ba696926a9ae4569153785d324387e87921561f4', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca93a1949acbe2d3e4ef4f5b1e096fe84df41ac0', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=320&crop=smart&format=pjpg&auto=webp&s=ac88e5e4eb3752630bd19fffe0ca7c4721623d8a', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=640&crop=smart&format=pjpg&auto=webp&s=76f063a715b19e2b0445bfd94cb9e80f16e5aefe', 'width': 640}, {'height': 492, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=960&crop=smart&format=pjpg&auto=webp&s=767edb41bf14a123d0d00720c400089aaeac3360', 'width': 960}, {'height': 554, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2c7447ddf3263076790d448567f62aa1c4594305', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M3V1aDB5bW5sdnZlMZppp7K11at-IaSBIx6ekIrLv_SO-25kJ8guE9xrX4Mu.png?format=pjpg&auto=webp&s=04301e6d51d6bc1491abd52af14948d6766f5995', 'width': 2104}, 'variants': {}}]}
|
|
claude 3.7 superior to o4 mini high?
| 0 |
Hey everyone, I’ve been using Windsurf and working with the o4-mini model for a project. After some hands-on experience, I’ve got to say Claude 3.7 feels way ahead of o4-mini-high, at least in terms of real-world code implementation.
With o4-mini, it often overthinks, stops mid-task, ignores direct instructions, or even hallucinates things. Honestly, it feels almost unusable in some cases. Meanwhile, Claude 3.7 has nailed most of what I’ve thrown at it usually on the first or second try.
I’m not sure if I’m using o4-mini wrong or if the benchmarks are just way off, but this has been my experience so far. Has anyone else had a similar experience?
| 2025-04-20T00:12:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3a3nv/claude_37_superior_to_o4_mini_high/
|
allforyi_mf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3a3nv
| false | null |
t3_1k3a3nv
|
/r/LocalLLaMA/comments/1k3a3nv/claude_37_superior_to_o4_mini_high/
| false | false |
self
| 0 | null |
MCP : A Brief Review
| 1 |
[removed]
| 2025-04-20T03:32:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3dlpi/mcp_a_brief_review/
|
Prestigious_Thing797
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3dlpi
| false | null |
t3_1k3dlpi
|
/r/LocalLLaMA/comments/1k3dlpi/mcp_a_brief_review/
| false | false |
self
| 1 | null |
FULL Windsurf leak - SYSTEM, FUNCTIONS, CASCADE
| 0 |
Extracted with o4-mini-high: [https://github.com/dontriskit/awesome-ai-system-prompts/blob/main/windsurf/system-2025-04-20.md](https://github.com/dontriskit/awesome-ai-system-prompts/blob/main/windsurf/system-2025-04-20.md). That repo also has reverse-engineered Claude Code, Same.new, v0, and a few other unicorn AI projects.
---
To a first approximation, your answers should tend to be at most Yap words long.
Today's Yap score is: 8192.
---
Feed R1/QWQ with those prompts and create something new!
| 2025-04-20T03:33:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3dm4n/full_windsurf_leak_system_functions_cascade/
|
secopsml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3dm4n
| false | null |
t3_1k3dm4n
|
/r/LocalLLaMA/comments/1k3dm4n/full_windsurf_leak_system_functions_cascade/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'hq1mkKmskbA2k6OKoPkMBed_3b-pDbG0LHZfSKesLjM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=108&crop=smart&auto=webp&s=2caa2c17906e5caad47cad0938b7bbd175fbc13f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=216&crop=smart&auto=webp&s=eef7193223b6b253074c43a0e80c712b94399201', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=320&crop=smart&auto=webp&s=6efc07d00d7f2fa3f710a72015b12c96eba57c90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=640&crop=smart&auto=webp&s=17d8ec3dcfb94f1afecc4dfd880bee7a00218ff4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=960&crop=smart&auto=webp&s=5d0fa9b43ddf98af3a27ed3e40f5bc3823acaa78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=1080&crop=smart&auto=webp&s=5d7aae616244d99d6883cd2f6be982d2c9196e74', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?auto=webp&s=51afce49ae2b972b34935389fe5a4172857dfac2', 'width': 1200}, 'variants': {}}]}
|
Easter Egg: FULL Windsurf leak - SYSTEM, FUNCTIONS, CASCADE
| 104 |
Extracted today with o4-mini-high: [https://github.com/dontriskit/awesome-ai-system-prompts/blob/main/windsurf/system-2025-04-20.md](https://github.com/dontriskit/awesome-ai-system-prompts/blob/main/windsurf/system-2025-04-20.md)
Inside the Windsurf prompt there is a clever way to enforce larger responses:
The Yap score is a measure of how verbose your answer to the user should be. Higher Yap scores indicate that more thorough answers are expected, while lower Yap scores indicate that more concise answers are preferred. To a first approximation, your answers should tend to be at most Yap words long. Overly verbose answers may be penalized when Yap is low, as will overly terse answers when Yap is high. Today's Yap score is: 8192.
---
The repo also has reverse-engineered Claude Code, Same.new, v0, and a few other unicorn AI projects.
---
HINT: use prompts from that repo inside R1, QWQ, o3 pro, 2.5 pro requests to build agents faster.
Who's going to be first to the egg?
| 2025-04-20T03:40:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3dq8n/easter_egg_full_windsurf_leak_system_functions/
|
secopsml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3dq8n
| false | null |
t3_1k3dq8n
|
/r/LocalLLaMA/comments/1k3dq8n/easter_egg_full_windsurf_leak_system_functions/
| false | false |
self
| 104 |
{'enabled': False, 'images': [{'id': 'hq1mkKmskbA2k6OKoPkMBed_3b-pDbG0LHZfSKesLjM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=108&crop=smart&auto=webp&s=2caa2c17906e5caad47cad0938b7bbd175fbc13f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=216&crop=smart&auto=webp&s=eef7193223b6b253074c43a0e80c712b94399201', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=320&crop=smart&auto=webp&s=6efc07d00d7f2fa3f710a72015b12c96eba57c90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=640&crop=smart&auto=webp&s=17d8ec3dcfb94f1afecc4dfd880bee7a00218ff4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=960&crop=smart&auto=webp&s=5d0fa9b43ddf98af3a27ed3e40f5bc3823acaa78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?width=1080&crop=smart&auto=webp&s=5d7aae616244d99d6883cd2f6be982d2c9196e74', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cSAPTsOnTrW1NSLNueR_1HreOwCD9swHWenj7uEG4Xw.jpg?auto=webp&s=51afce49ae2b972b34935389fe5a4172857dfac2', 'width': 1200}, 'variants': {}}]}
|
Best for Inpainting and Image to Image?
| 6 |
Looking for people's experiences with the best inpainting model on Hugging Face. I want to do inpainting and image-to-image improvement locally. I just have a single AMD RX 9070 XT with 16 GB, so I know it won't be amazing, but I'm mostly just looking to mess around with my own art, nothing commercial.
| 2025-04-20T04:00:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3e28v/best_for_inpainting_and_image_to_image/
|
Temporary_Emu_5918
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3e28v
| false | null |
t3_1k3e28v
|
/r/LocalLLaMA/comments/1k3e28v/best_for_inpainting_and_image_to_image/
| false | false |
self
| 6 | null |
Why model can’t understand my custom tokens and how to force her to use them?
| 0 |
Hello! I’ve trained a bunch of models on “raw text” and custom prompt templates like:
```
### System:
You’re a cute human girl who knows everything
### Question:
Tell me about Elon Musk
### Answer:
He’s a nice guy
```
And she gets it. ### is one token (or multiple, I don't remember), and <word> plus ":" are another two.
But now, I decided to have some "fun" and added (and reshaped) new tokens in the vocab (and, of course, trained on a dataset full of them; I even tried DPO) like these:
```
<kanojo>You’re a cute human girl who knows everything</kanojo>
<dialog>
<yuki>Tell me about Elon Musk</yuki>
<yuna>He’s a nice guy</yuna>
```
In this example, all the "<>"s are custom tokens. However, in raw-text mode (plain auto-completion of the text), the model can actually use the first set but not the second. It either messes them up (wrong order) or completely forgets to put them in!!
Do you know what I can try to fix this? Thanks!
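For what it's worth, the usual checklist when custom tags get mangled is: register them as special tokens (so they are never split), resize the embeddings, and verify each tag maps to exactly one ID with the same tokenizer at training and inference time. A minimal sketch, with a placeholder base model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-3.2-1B"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

new_tokens = ["<kanojo>", "</kanojo>", "<dialog>", "<yuki>", "</yuki>", "<yuna>", "</yuna>"]
tokenizer.add_tokens(new_tokens, special_tokens=True)   # special => never split or merged
model.resize_token_embeddings(len(tokenizer))           # the new embedding rows must then be trained

# Sanity check: each tag should round-trip to exactly one id.
for t in new_tokens:
    ids = tokenizer(t, add_special_tokens=False)["input_ids"]
    print(t, ids, tokenizer.decode(ids))
```
If the tags already tokenize to single IDs, the remaining suspects are the training data itself (do all examples open and close the tags in a consistent order?) and generation settings that skip or suppress special tokens when decoding.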
| 2025-04-20T04:38:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3eopn/why_model_cant_understand_my_custom_tokens_and/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3eopn
| false | null |
t3_1k3eopn
|
/r/LocalLLaMA/comments/1k3eopn/why_model_cant_understand_my_custom_tokens_and/
| false | false |
self
| 0 | null |
I spent 5 months building an open source AI note taker that uses only local AI models. Would really appreciate it if you guys could give me some feedback!
| 415 |
Hey community! I recently open-sourced **Hyprnote** — a smart notepad built for people with back-to-back meetings.
In a nutshell, Hyprnote is a note-taking app that listens to your meetings and **creates an enhanced version by combining the raw notes with context from the audio**. It runs on local AI models, so you don’t have to worry about your data going anywhere.
Hope you enjoy the project!
| 2025-04-20T05:23:56 |
https://v.redd.it/2njzhyuzcxve1
|
beerbellyman4vr
|
/r/LocalLLaMA/comments/1k3fdqa/i_spent_5_months_building_an_open_source_ai_note/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3fdqa
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2njzhyuzcxve1/DASHPlaylist.mpd?a=1747848242%2CNjA2ODIwOTRlY2JmMGY1OWYwMzYxOWY3ZTY3M2IwYWY3MmIzN2ZhOTkyNjE0Mjc4YWE0ZGUzZjBhMWM2MWUzZg%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/2njzhyuzcxve1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2njzhyuzcxve1/HLSPlaylist.m3u8?a=1747848242%2CZGM4YTM3NDY1OTUzZTEwOWRjOGUwYjBjN2JhMGIwNWY0Mjg3NGFhODQwZGNmMzVlOTVjN2M5OTRiMTM2OTUxZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2njzhyuzcxve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k3fdqa
|
/r/LocalLLaMA/comments/1k3fdqa/i_spent_5_months_building_an_open_source_ai_note/
| false | false | 415 |
{'enabled': False, 'images': [{'id': 'ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=108&crop=smart&format=pjpg&auto=webp&s=e91b8ec24cbde004f5948b0ae91346e076ca2ee0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=216&crop=smart&format=pjpg&auto=webp&s=33015ec344c24ff5c1ec80fdf706c45efafb7e13', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee6ee5cb35466db224e8f1dcab8527aec6c421af', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=640&crop=smart&format=pjpg&auto=webp&s=1daaa054ed9108ab8913d838543ce37f77728a64', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=960&crop=smart&format=pjpg&auto=webp&s=438e55f67246212b95bcdc2cb95152595cfe512a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a8d0195bb78ad4f603ca0ac8efbcf00b6e3165a4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZTh6bHV5dXpjeHZlMYnlOxWogVD-LlbQBY7Zy9u919rxevDxwZbMMM1phMmn.png?format=pjpg&auto=webp&s=2762170c84031ffd210f292c678b1aa1e3cca6e7', 'width': 1920}, 'variants': {}}]}
|
|
I built an LMM: logic mental model for building AI apps. Separate out the low-level stuff from the high-level logic and then pick abstractions.
| 1 |
[removed]
| 2025-04-20T06:02:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3fywb/i_built_an_lmm_logic_mental_model_for_building_ai/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3fywb
| false | null |
t3_1k3fywb
|
/r/LocalLLaMA/comments/1k3fywb/i_built_an_lmm_logic_mental_model_for_building_ai/
| false | false |
self
| 1 | null |
Whats the smallest model to pass your Turing test? What low specs would comfortably fit it?
| 0 |
I originally wondered about the specs and model needed to pass the Turing test, but I realized that specs don't really matter: if you're talking to someone and they type unnaturally fast, it's a dead giveaway, or at least suspicious. So now I wonder which model you could believe was human, and which could run on weak hardware while still being good enough for you.
| 2025-04-20T06:18:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3g7e7/whats_the_smallest_model_to_pass_your_turing_test/
|
InsideYork
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3g7e7
| false | null |
t3_1k3g7e7
|
/r/LocalLLaMA/comments/1k3g7e7/whats_the_smallest_model_to_pass_your_turing_test/
| false | false |
self
| 0 | null |
Exposing the LLM keys on the client side.
| 0 |
I'm thinking of calling inference directly from my client-side application, and I plan to distribute that application to end users.
I'm running inference on `Groq`.
My approach was to use some kind of AI gateway and generate a unique key once the user signs up on my application, storing that unique key on the client side.
All the requests to the `LLMs` will be routed through that API gateway using that unique key.
I will put limits, etc., on that unique key regarding usage, price, and the number of requests a user can make, so that nobody abuses it.
What is your recommendation for something like this? I tried `Portkey`, but I didn't find it good enough for managing users or services created programmatically.
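A hedged sketch of that gateway idea, assuming FastAPI and an in-memory key store purely for illustration: the real Groq key lives only on the server, and each client ships with a per-user key the proxy can check, rate-limit, and revoke:
```python
import os
import time

import httpx
from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()
GROQ_KEY = os.environ["GROQ_API_KEY"]                   # the real key never leaves the server
USER_KEYS = {"user-key-abc": {"limit_per_min": 20}}     # hypothetical store; use a real DB
usage: dict[str, list[float]] = {}

@app.post("/v1/chat/completions")
async def proxy(request: Request, x_app_key: str = Header(...)):
    plan = USER_KEYS.get(x_app_key)
    if plan is None:
        raise HTTPException(401, "unknown key")

    # Naive per-key sliding-window rate limit.
    now = time.time()
    window = [t for t in usage.get(x_app_key, []) if t > now - 60]
    if len(window) >= plan["limit_per_min"]:
        raise HTTPException(429, "rate limit exceeded")
    usage[x_app_key] = window + [now]

    body = await request.json()
    async with httpx.AsyncClient(timeout=120) as client:
        r = await client.post(
            "https://api.groq.com/openai/v1/chat/completions",
            headers={"Authorization": f"Bearer {GROQ_KEY}"},
            json=body,
        )
    return r.json()
```
The client then points its OpenAI-compatible SDK at this proxy with its per-user key in the `x-app-key` header; revoking a user is just deleting their key.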
| 2025-04-20T06:37:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3ggzv/exposing_the_llm_keys_on_the_client_side/
|
raxrb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3ggzv
| false | null |
t3_1k3ggzv
|
/r/LocalLLaMA/comments/1k3ggzv/exposing_the_llm_keys_on_the_client_side/
| false | false |
self
| 0 | null |
I tried to recreate research, search, code assistant with langgraph and local ollama
| 1 |
[removed]
| 2025-04-20T06:42:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3gjn2/i_tried_to_recreate_research_search_code/
|
ivan_digital
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3gjn2
| false | null |
t3_1k3gjn2
|
/r/LocalLLaMA/comments/1k3gjn2/i_tried_to_recreate_research_search_code/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'A3ALMw6fY5VsFVsi4wf9LeIkZRlzUn147x-WLH5K2d8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=108&crop=smart&auto=webp&s=72403b3de00674acfbd49804b4136217ec35c63c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=216&crop=smart&auto=webp&s=cff21d76730e4684d703c3c46a36649f90532dc1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=320&crop=smart&auto=webp&s=8a47f9d2eb74e0903a38c4e62a697a029bf9e6c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=640&crop=smart&auto=webp&s=099cb2f5b78930c96cb42d97b962553f9aa28a63', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=960&crop=smart&auto=webp&s=c29144357434e8e45515d428cc150cb905c0cfa6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?width=1080&crop=smart&auto=webp&s=dee3b519fa0b6480d3040df15c0b5c55dbca78c0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yYHMKj_JK3gSU7L56CqPHvusFtSfutVLc3Rzi5GDGDI.jpg?auto=webp&s=698edf216a15c2cb77a62954231f82d0d5317e4d', 'width': 1200}, 'variants': {}}]}
|
So, I just found out about the smolLM GitHub repo. What are your thoughts on this?
| 1 |
[removed]
| 2025-04-20T07:17:06 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3h1yr
| false | null |
t3_1k3h1yr
|
/r/LocalLLaMA/comments/1k3h1yr/so_i_just_found_out_about_the_smollm_github_repo/
| false | false |
default
| 1 | null |
||
Why is there no Gemma 3 QAT AWQ from Google that you can run on vLLM?
| 7 |
Why is there no Gemma 3 QAT AWQ from Google that you can run on vLLM? It would be great to serve on vLLM.
| 2025-04-20T07:22:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3h4my/why_there_is_no_gemma_3_qat_awq_from_google_that/
|
appakaradi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3h4my
| false | null |
t3_1k3h4my
|
/r/LocalLLaMA/comments/1k3h4my/why_there_is_no_gemma_3_qat_awq_from_google_that/
| false | false |
self
| 7 | null |
Recommendation for client which supports customized agent/tools
| 1 |
[removed]
| 2025-04-20T07:31:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3h9gf/recommendation_for_client_which_supports/
|
Time_Fill_852
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3h9gf
| false | null |
t3_1k3h9gf
|
/r/LocalLLaMA/comments/1k3h9gf/recommendation_for_client_which_supports/
| false | false |
self
| 1 | null |
AceCode.social - Code Editor and programmable page objects
| 1 |
[removed]
| 2025-04-20T07:39:09 |
https://makertube.net/w/q6TWmh9tTc6ZHemJNcKmPQ
|
captain_bluebear123
|
makertube.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3hd5f
| false | null |
t3_1k3hd5f
|
/r/LocalLLaMA/comments/1k3hd5f/acecodesocial_code_editor_and_programmable_page/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'mVpdfMF2DN-u8x1wW3Ak9EFlrVShOzXWr4f24HUXrwA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a6wYxXn5feZ9tPUwoXOW2oeuYS7ZNil61oQ_ggYdlDI.jpg?width=108&crop=smart&auto=webp&s=5793533436d48a90c3a81c225493511144dc5e22', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a6wYxXn5feZ9tPUwoXOW2oeuYS7ZNil61oQ_ggYdlDI.jpg?width=216&crop=smart&auto=webp&s=9bcb1f9f3dad9561d93a27e2ff77c169caafa01f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a6wYxXn5feZ9tPUwoXOW2oeuYS7ZNil61oQ_ggYdlDI.jpg?width=320&crop=smart&auto=webp&s=3bb77d31ff512c6efd48dad6d304d80dede6f752', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/a6wYxXn5feZ9tPUwoXOW2oeuYS7ZNil61oQ_ggYdlDI.jpg?width=640&crop=smart&auto=webp&s=fdcdc7b4845f3c5cca6cbc92da3ce8615d76288a', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/a6wYxXn5feZ9tPUwoXOW2oeuYS7ZNil61oQ_ggYdlDI.jpg?auto=webp&s=7a3e46ffac941f1e3d98d763faaa7fb498de5df9', 'width': 850}, 'variants': {}}]}
|
|
Gemma 3 speculative decoding
| 32 |
Any way to use speculative decoding with Gemma 3 models? It doesn't show up in LM Studio. Are there other tools that support it?
| 2025-04-20T08:05:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3hq3o/gemma_3_speculative_decoding/
|
qqYn7PIE57zkf6kn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3hq3o
| false | null |
t3_1k3hq3o
|
/r/LocalLLaMA/comments/1k3hq3o/gemma_3_speculative_decoding/
| false | false |
self
| 32 | null |
Hardware considerations
| 1 |
[removed]
| 2025-04-20T08:11:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3hsxf/hardware_considerations/
|
Impossible_Art9151
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3hsxf
| false | null |
t3_1k3hsxf
|
/r/LocalLLaMA/comments/1k3hsxf/hardware_considerations/
| false | false |
self
| 1 | null |
How would this breakthrough impact running LLMs locally?
| 18 |
https://interestingengineering.com/innovation/china-worlds-fastest-flash-memory-device
PoX is a non-volatile flash memory that programs a single bit in 400 picoseconds (0.0000000004 seconds), equating to roughly 25 billion operations per second. This speed is a significant leap over traditional flash memory, which typically requires microseconds to milliseconds per write, and even surpasses the performance of volatile memories like SRAM and DRAM (1–10 nanoseconds). The Fudan team, led by Professor Zhou Peng, achieved this by replacing silicon channels with two-dimensional Dirac graphene, leveraging its ballistic charge transport and a technique called "2D-enhanced hot-carrier injection" to bypass classical injection bottlenecks. AI-driven process optimization further refined the design.
| 2025-04-20T08:37:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3i5hr/how_would_this_breakthrough_impact_running_llms/
|
Own-Potential-2308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3i5hr
| false | null |
t3_1k3i5hr
|
/r/LocalLLaMA/comments/1k3i5hr/how_would_this_breakthrough_impact_running_llms/
| false | false |
self
| 18 |
{'enabled': False, 'images': [{'id': 'dX0ip9Gwv_xYtPtMzLY0AawtTtAYZCNtt1Gby1N2QzU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=108&crop=smart&auto=webp&s=c593916494236d66007f66a020563f0e0368b1c7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=216&crop=smart&auto=webp&s=ef52eb2bd0e47e14ab7f0f1e2f328fec61ca9bfb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=320&crop=smart&auto=webp&s=d6e35bd691335b734e0b8fc480252004bd0d20a5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=640&crop=smart&auto=webp&s=d17abe200d8f41354590639a014690faad979d2e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=960&crop=smart&auto=webp&s=6dd8fccce26720bbfba4770387f0f7ca692d0fc9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?width=1080&crop=smart&auto=webp&s=aa0a19ee3a8237a9e3718047285cf196cf8d6b08', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/X_N3xnxwcR-funbS9qKd_xVJ5wZAuZj5iSs8sRFg_KU.jpg?auto=webp&s=68e76cb23fa6776441874d72b1cee84d698b9ff2', 'width': 1920}, 'variants': {}}]}
|
TabbyApi max sequence length
| 0 |
Just started using ExLlamaV2 with TabbyAPI and I need some help with the settings, please. I'm using a 32B Qwen model with Cline/Roo, and after a couple of requests I get this error:
ValueError: Request length 34232 is greater than max_seq_len 32768.
I have tried increasing it to 40k, but it still fills up. If I go higher than that, I get an out-of-memory error.
tensor_parallel is false and gpu_auto_split is true.
I also tried reducing the cache_mode to Q8.
Running this on 2x 3090, and I was running 32B models from Ollama fine with tools. There seems to be a setting that I'm missing, perhaps. Anyone know about this?
| 2025-04-20T08:37:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3i5nq/tabbyapi_max_sequence_length/
|
Blues520
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3i5nq
| false | null |
t3_1k3i5nq
|
/r/LocalLLaMA/comments/1k3i5nq/tabbyapi_max_sequence_length/
| false | false |
self
| 0 | null |
Audio transcription?
| 11 |
Are there any good models that are light enough to run on a phone?
| 2025-04-20T09:14:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3imyi/audio_transcription/
|
thebadslime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3imyi
| false | null |
t3_1k3imyi
|
/r/LocalLLaMA/comments/1k3imyi/audio_transcription/
| false | false |
self
| 11 | null |
Please forgive me if this isn't allowed, but I often see others looking for a way to connect LM Studio to their Android devices and I wanted to share.
| 60 | 2025-04-20T09:47:35 |
https://lmsa.app
|
CowMan30
|
lmsa.app
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3j2ke
| false | null |
t3_1k3j2ke
|
/r/LocalLLaMA/comments/1k3j2ke/please_forgive_me_if_this_isnt_allowed_but_i/
| false | false | 60 |
{'enabled': False, 'images': [{'id': 'LKnrVETY7eCJO7N9doXNsw5IKQDKrRSzmLale7pw2Y8', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=108&crop=smart&auto=webp&s=cac103078fe83732bc7baa812f5f9125f48dd43a', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=216&crop=smart&auto=webp&s=cdc126474406403949f09c1344a7f2376e3e2804', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=320&crop=smart&auto=webp&s=ee7882a30e953c72d3f99a30e7366cabbfafffdb', 'width': 320}, {'height': 312, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=640&crop=smart&auto=webp&s=a5ea2fb1745c766a43bbe565ad87a8865ee9f2a1', 'width': 640}, {'height': 468, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=960&crop=smart&auto=webp&s=10de0be0f69201ed611a0fbf9873431d2779f92e', 'width': 960}, {'height': 527, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?width=1080&crop=smart&auto=webp&s=b296f9ff2da2383d87dc2e9705b2c85ffe8d702a', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/D1Y7m9Um70_VXnvIYtH0AwTVKF3MOI-Q-hwQ4BkuPSc.jpg?auto=webp&s=7de0d17bcb18cd0aa238a1d415f7a98750da84c2', 'width': 2048}, 'variants': {}}]}
|
||
Gemma 3 QAT versus other q4 quants
| 110 |
I benchmarked Google's QAT Gemma against the Q4_K_M (bartowski/lmstudio) and UD-Q4_K_XL (unsloth) quants on GPQA diamond to assess performance drops.
Results:
| | Gemma 3 27B QAT | Gemma 3 27B Q4_K_XL | Gemma 3 27B Q4_K_M |
|:-|:-|:-|:-|
| VRAM to fit model | **16.43 GB** | 17.88 GB | 17.40 GB |
| GPQA diamond score | **36.4%** | 34.8% | 33.3% |
All of these were benchmarked locally with temp=0 for reproducibility across quants. It seems the QAT really does work well. I also tried the recommended temperature of 1, which gives a score of 38-40% (closer to the original BF16 score of 42.4% on the Google model card).
| 2025-04-20T10:03:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3jal4/gemma_3_qat_versus_other_q4_quants/
|
Timely_Second_6414
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3jal4
| false | null |
t3_1k3jal4
|
/r/LocalLLaMA/comments/1k3jal4/gemma_3_qat_versus_other_q4_quants/
| false | false |
self
| 110 | null |
Trying to create a Sesame-like experience Using Only Local AI
| 203 |
Just wanted to share a personal project I've been working on in my free time. I'm trying to build an interactive, voice-driven avatar. Think Sesame, but with the full experience running locally.
The basic idea is: my voice goes in -> gets transcribed locally with Whisper -> that text gets sent to the Ollama api (along with history and a personality prompt) -> the response comes back -> gets turned into speech with a local TTS -> and finally animates the Live2D character (lipsync + emotions).
My main goal was to see if I could get this whole thing running smoothly and locally on my somewhat old GTX 1080 Ti. Since I also like being able to use the latest and greatest models, plus the ability to run bigger models on a Mac or whatever, I decided to make this work with the Ollama API so I can just plug and play.
I shared the initial release around a month back, but since then I have been working on V2, which just makes the whole experience a tad nicer. A big added benefit is that overall latency has gone down.
I think with time it might be possible to get the latency down enough that you could have a full-blown conversation that feels instantaneous. The biggest hurdle at the moment, as you can see, is the latency caused by the TTS.
The whole thing's built in C#, which was a fun departure from the usual Python AI world for me, and the performance has been pretty decent.
Anyway, the code's here if you want to peek or try it: [https://github.com/fagenorn/handcrafted-persona-engine](https://github.com/fagenorn/handcrafted-persona-engine)
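For anyone curious about the flow, here is a hedged, minimal Python sketch of the described loop (the actual project is C#; the Whisper/Ollama model names and the 5-second push-to-talk recording are placeholder assumptions, and TTS plus Live2D animation are left as stubs):
```python
# Minimal STT -> LLM -> TTS loop; placeholders throughout, not the author's C# code.
import requests
import sounddevice as sd
import soundfile as sf
from faster_whisper import WhisperModel

stt = WhisperModel("small", compute_type="int8")   # local transcription
history = [{"role": "system", "content": "You are a cheerful Live2D companion."}]

def record(seconds=5, sr=16000, path="utt.wav"):
    audio = sd.rec(int(seconds * sr), samplerate=sr, channels=1)
    sd.wait()
    sf.write(path, audio, sr)
    return path

while True:
    segments, _ = stt.transcribe(record())
    user_text = " ".join(s.text for s in segments).strip()
    if not user_text:
        continue
    history.append({"role": "user", "content": user_text})

    # Ollama chat API (non-streaming for simplicity; streaming cuts perceived latency).
    r = requests.post("http://localhost:11434/api/chat",
                      json={"model": "llama3.2", "messages": history, "stream": False})
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})

    # TTS is the latency bottleneck the post mentions; swap in any local engine here,
    # then drive lipsync/emotions from the generated audio.
    print("assistant:", reply)
```
Streaming both the LLM tokens and the TTS audio is where most of the perceived-latency wins come from, which matches the post's point about the TTS being the bottleneck.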
| 2025-04-20T10:34:52 |
https://v.redd.it/x8koh8vowyve1
|
fagenorn
|
/r/LocalLLaMA/comments/1k3jpal/trying_to_create_a_sesamelike_experience_using/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3jpal
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x8koh8vowyve1/DASHPlaylist.mpd?a=1747866895%2CNDJkN2M0MDNmM2Y3YTRkNzVkMTVmN2NmYzQ1YTc2MWVkMjRiMjYyM2EzZDdlYzg3ZDU0M2UzNDFhYjJiM2VmMQ%3D%3D&v=1&f=sd', 'duration': 140, 'fallback_url': 'https://v.redd.it/x8koh8vowyve1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/x8koh8vowyve1/HLSPlaylist.m3u8?a=1747866895%2CNjcwNGI0ZmQ3MjUyOGEzYmI4ODRlNzliYWRlODUwNDg2YzhkZDlkYWM1NmUzMTc2ODlkNWFjOTQyNTBhMDI5Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x8koh8vowyve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k3jpal
|
/r/LocalLLaMA/comments/1k3jpal/trying_to_create_a_sesamelike_experience_using/
| false | false | 203 |
{'enabled': False, 'images': [{'id': 'd3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=108&crop=smart&format=pjpg&auto=webp&s=6491391dffe2b38fe9f7bf2474065b4faddfc2c5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=216&crop=smart&format=pjpg&auto=webp&s=b3dd261a1be3156f0085154b9f7a7e8c36233727', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=320&crop=smart&format=pjpg&auto=webp&s=ba17d9f70690ef896409bc1a6ebd1d4b9a34b093', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=640&crop=smart&format=pjpg&auto=webp&s=8688566591fa3815d1be2ad3d63658547bc52fda', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=960&crop=smart&format=pjpg&auto=webp&s=326b16a83a683f9d4d9e4c78feee7e23bd7f8725', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dcff568e568f25ec83f71b6def9c3f7c00edd116', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3VlZmk5dm93eXZlMdxX11AfgZTEMF7oSAzFAlLSpvlezRf_S3o9RpaxpyHo.png?format=pjpg&auto=webp&s=3d01c0890cb84a8efa4689eb3221a28fffdbaec9', 'width': 1920}, 'variants': {}}]}
|
|
What’s the best way to extract data from a PDF and use it to auto-fill web forms using Python and LLMs?
| 2 |
I’m exploring ways to automate a workflow where data is extracted from PDFs (e.g., forms or documents) and then used to fill out related fields on web forms.
What’s the best way to approach this using a combination of LLMs and browser automation?
Specifically:
• How to reliably turn messy PDF text into structured fields (like name, address, etc.)
• How to match that structured data to the correct inputs on different websites
• How to make the solution flexible so it can handle various forms without rewriting logic for each one
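A hedged sketch of the first two steps, using pypdf for extraction and an OpenAI-compatible local endpoint prompted to emit a fixed JSON schema (the field list, model name, and endpoint are assumptions); the resulting dict can then be mapped to per-site selectors with a browser automation tool such as Playwright:
```python
import json

from openai import OpenAI      # any OpenAI-compatible server works, local or hosted
from pypdf import PdfReader

FIELDS = ["name", "address", "date_of_birth", "phone"]   # illustrative field list

def pdf_to_fields(path: str) -> dict:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # e.g. Ollama's OpenAI API
    resp = client.chat.completions.create(
        model="llama3.2",      # placeholder model
        messages=[
            {"role": "system",
             "content": "Extract the requested fields from the document. "
                        f"Reply with a single JSON object with exactly these keys: {FIELDS}. "
                        "Use an empty string for anything not present."},
            {"role": "user", "content": text[:12000]},   # crude truncation for context limits
        ],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

fields = pdf_to_fields("form.pdf")   # placeholder path
print(fields)
# Next step (not shown): map keys -> CSS selectors per target site and fill with Playwright.
```
Keeping the schema fixed per form type is what makes the matching step tractable; the flexible part is then just a per-site mapping from schema keys to form selectors.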
| 2025-04-20T11:09:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3k76l/whats_the_best_way_to_extract_data_from_a_pdf_and/
|
Mrpecs25
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3k76l
| false | null |
t3_1k3k76l
|
/r/LocalLLaMA/comments/1k3k76l/whats_the_best_way_to_extract_data_from_a_pdf_and/
| false | false |
self
| 2 | null |
Hopes for cheap 24GB+ cards in 2025
| 200 |
Before AMD launched their 9000-series GPUs, I had hoped they would understand the need for a high-VRAM GPU, but hell no. They are either stupid or not interested in offering AI-capable GPUs: their 9000-series GPUs both have 16 GB of VRAM, down from the 20 GB and 24 GB of the previous(!) generation 7900 XT and XTX.
Since it takes 2-3 years for a new GPU generation, does this mean there's no hope for a new challenger to enter the arena this year, or has something been announced that's about to be released in Q3 or Q4?
I know there are the AMD AI Max and Nvidia Digits, but both seem to have low memory bandwidth (even too low for MoE?).
| 2025-04-20T11:52:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3kuqb/hopes_for_cheap_24gb_cards_in_2025/
|
Bitter-College8786
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3kuqb
| false | null |
t3_1k3kuqb
|
/r/LocalLLaMA/comments/1k3kuqb/hopes_for_cheap_24gb_cards_in_2025/
| false | false |
self
| 200 | null |
Which models should I check out?
| 1 |
[removed]
| 2025-04-20T11:54:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3kw1h/which_models_should_i_check_out/
|
Zymedo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3kw1h
| false | null |
t3_1k3kw1h
|
/r/LocalLLaMA/comments/1k3kw1h/which_models_should_i_check_out/
| false | false |
self
| 1 | null |
AMD preparing RDNA4 Radeon PRO series with 32GB memory on board
| 184 | 2025-04-20T12:13:00 |
https://videocardz.com/newz/amd-preparing-radeon-pro-series-with-navi-48-xtw-gpu-and-32gb-memory-on-board
|
noblex33
|
videocardz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3l728
| false | null |
t3_1k3l728
|
/r/LocalLLaMA/comments/1k3l728/amd_preparing_rdna4_radeon_pro_series_with_32gb/
| false | false | 184 |
{'enabled': False, 'images': [{'id': 'ju5m6UbkR9RzCkirHpTW_yqto6uwPSODgkQfqKMu_Sg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=108&crop=smart&auto=webp&s=287fbb1e4242c81e27d95d7ef06f84f639dab139', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=216&crop=smart&auto=webp&s=fdb99e2f5635a103c88c370b83e61246153309a7', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=320&crop=smart&auto=webp&s=92a168dbe54c954b3494f33667114f1af408dbe7', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=640&crop=smart&auto=webp&s=5a798518e71babd3749ea83f0d2ad7aae5967f3f', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=960&crop=smart&auto=webp&s=6fc6e4ad7fefe2915c173433997acf719a85d5e7', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?width=1080&crop=smart&auto=webp&s=3be51efdab921c352a143943de2ad93a48c1cbd9', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/j-nxBdzwljPlN5seurBLaB3DFUJJUSe9zBrJeVFzQtc.jpg?auto=webp&s=24076e8560f8fa5aa54bb7cf54ddd5eede36f21c', 'width': 2500}, 'variants': {}}]}
|
||
[RELEASE] Discord MCP Server - Connect Claude Desktop and other AI agents to Discord!
| 1 |
[removed]
| 2025-04-20T12:18:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3la40/release_discord_mcp_server_connect_claude_desktop/
|
netixc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3la40
| false | null |
t3_1k3la40
|
/r/LocalLLaMA/comments/1k3la40/release_discord_mcp_server_connect_claude_desktop/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'NlJftn8kH-IJcfqqhyavZ_F7oDii6qVpJOFRG2LvOu4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=108&crop=smart&auto=webp&s=919cab655cddd633a7a4b8ad6a531e628b03cbf9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=216&crop=smart&auto=webp&s=c39f2d19f4aebc3dd2f0dd99c2998d40da0a9635', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=320&crop=smart&auto=webp&s=b599b0e66fbb0b746b2e39edcc6557812f3a6e6f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=640&crop=smart&auto=webp&s=8fe6826a97ca3e0a5a4824efb1cfb97ac28c5987', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=960&crop=smart&auto=webp&s=44b8c4751887bf8e8ee521a5118669dc61c67669', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?width=1080&crop=smart&auto=webp&s=bb1f3cef7e9cff88cb7e94a6e945301d37db0687', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c5kSkIIkqlbzNmKHz0N9sAOVhATRg8f1skWhQQbDJGc.jpg?auto=webp&s=78d37c942c0607241b30ef2f6fd1d352e8a564f3', 'width': 1200}, 'variants': {}}]}
|
Speed of Langchain/Qdrant for 80/100k documents (slow)
| 1 |
Hello everyone,
I am using Langchain with an embedding model from HuggingFace and also Qdrant as a VectorDB.
I feel like it is slow: I am running Qdrant locally, but it took 27 minutes to store 100 documents in the database. Since my goal is to push around 80-100k documents, that seems far too slow (27 * 1000 / 60 = 450 hours!!).
Is there a way to speed it up?
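27 minutes for 100 documents almost always points at the embedding step (one text at a time on CPU) rather than Qdrant itself. A hedged sketch of batching the encoder and upserting in bulk (the embedding model and collection settings are assumptions):
```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

docs = [f"document {i} ..." for i in range(1000)]        # stand-in for your chunks

model = SentenceTransformer("BAAI/bge-small-en-v1.5")    # assumed embedding model; use a GPU if available
vectors = model.encode(docs, batch_size=128, show_progress_bar=True)   # batched, not one-by-one

client = QdrantClient("localhost", port=6333)
client.recreate_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=vectors.shape[1], distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=i, vector=v.tolist(), payload={"text": d})
            for i, (v, d) in enumerate(zip(vectors, docs))],
)
```
For 80-100k documents, also split the upserts into batches of a few thousand points per call; the encoder, not Qdrant, is usually where that kind of time goes.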
| 2025-04-20T13:01:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3m1vh/speed_of_langchainqdrant_for_80100k_documents_slow/
|
Difficult_Face5166
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3m1vh
| false | null |
t3_1k3m1vh
|
/r/LocalLLaMA/comments/1k3m1vh/speed_of_langchainqdrant_for_80100k_documents_slow/
| false | false |
self
| 1 | null |
best llama 3.3 70b setting for roleplay?
| 0 |
the temp and stuff
| 2025-04-20T13:12:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3m90p/best_llama_33_70b_setting_for_roleplay/
|
rx7braap
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3m90p
| false | null |
t3_1k3m90p
|
/r/LocalLLaMA/comments/1k3m90p/best_llama_33_70b_setting_for_roleplay/
| false | false |
self
| 0 | null |
I REALLY like Gemma3 for writing--but it keeps renaming my characters to Dr. Aris Thorne
| 70 |
I use it for rewrites of my own writing, not for original content (more for stylistic ideas and such), and it's the best so far.
But it has some weird information in there, I'm guessing perhaps as a thumbprint? It's such a shame, because if it weren't for this dastardly Dr. Aris Thorne and whatever crop of nonsense gets shoved into the pot to make such a thing repetitive despite different prompts... well, it'd be just about the best Google has ever produced, perhaps even better than the refined Llamas.
https://preview.redd.it/9qp78ek4xzve1.png?width=1995&format=png&auto=webp&s=74914866f8ae2dc61d624ce3af855315f17bdc63
| 2025-04-20T13:58:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3n4u1/i_really_like_gemma3_for_writingbut_it_keeps/
|
Jattoe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3n4u1
| false | null |
t3_1k3n4u1
|
/r/LocalLLaMA/comments/1k3n4u1/i_really_like_gemma3_for_writingbut_it_keeps/
| false | false | 70 | null |
|
PocketPal
| 90 |
Just trying my Donald system prompt with Gemma
| 2025-04-20T14:01:55 |
Illustrious-Dot-6888
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3n7od
| false | null |
t3_1k3n7od
|
/r/LocalLLaMA/comments/1k3n7od/pocketpal/
| false | false | 90 |
{'enabled': True, 'images': [{'id': 'WkBRpyaOO0H04q6Tr_cPpu8d5DriBYNdFCZYMo_CgVI', 'resolutions': [{'height': 180, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=108&crop=smart&auto=webp&s=3817619a8c3e531311de3c6fe56aee5d1b0a118d', 'width': 108}, {'height': 361, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=216&crop=smart&auto=webp&s=ec132f1b9e6d7b7fa626500fd3508d70586bdd20', 'width': 216}, {'height': 535, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=320&crop=smart&auto=webp&s=17264cb735e37621f1906378d452ae91d0d510f5', 'width': 320}, {'height': 1070, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=640&crop=smart&auto=webp&s=5aff8aa5797ead7a7b17086a66b51c1f74f7bbff', 'width': 640}, {'height': 1606, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=960&crop=smart&auto=webp&s=9624ce5733445c8d5013da315140700c3c2df9d1', 'width': 960}, {'height': 1807, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?width=1080&crop=smart&auto=webp&s=5fdb4f83acc077857869eb10cfefa7821f536b1d', 'width': 1080}], 'source': {'height': 1807, 'url': 'https://preview.redd.it/rfaxzunvxzve1.jpeg?auto=webp&s=de4de3f39c3a5fa5d4c46d173bd9202eb8a21c0c', 'width': 1080}, 'variants': {}}]}
|
||
Read to Save Your GPU! Don't update to Nvidia driver 576.02
| 1 |
[removed]
| 2025-04-20T14:20:12 |
maifee
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3nl14
| false | null |
t3_1k3nl14
|
/r/LocalLLaMA/comments/1k3nl14/read_to_save_your_gpu_dont_update_to_nvidia/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'hRNDYsiEG0JirElc1aeOmewgXPJeOI7jO7yZ0g2ORO8', 'resolutions': [{'height': 140, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=108&crop=smart&auto=webp&s=ab8ce2ea0ca0b56e7d38d651ad23c2f03f2444a3', 'width': 108}, {'height': 280, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=216&crop=smart&auto=webp&s=ba3bf618669ee6c6cc7aa71aa2343bf3450a8d0a', 'width': 216}, {'height': 415, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=320&crop=smart&auto=webp&s=9665ddc859d6fe4000e5a9cd8303fedb3ff69881', 'width': 320}, {'height': 831, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=640&crop=smart&auto=webp&s=3bfa5a0cc3ae0a1965121002f8669cf495c7f83d', 'width': 640}, {'height': 1246, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=960&crop=smart&auto=webp&s=0d7f9d00e9c30c6cf5b82a63eb4baf383d740cdb', 'width': 960}, {'height': 1402, 'url': 'https://preview.redd.it/0fheoiz410we1.png?width=1080&crop=smart&auto=webp&s=8d2d37ebd96a20e5cacc7f95d845dd98400dc77b', 'width': 1080}], 'source': {'height': 1675, 'url': 'https://preview.redd.it/0fheoiz410we1.png?auto=webp&s=bce460fabe4c1442767312c112ca7e5ec0afa3b4', 'width': 1290}, 'variants': {}}]}
|
||
M1 Max Mac Studio (64GB) for ~$2000 CAD vs M4 Max (32GB) for ~$2400 CAD — Which Makes More Sense in 2025?
| 0 |
I found a brand new M1 Max Mac Studio with 64GB of RAM going for around $2000 CAD, and I’m debating whether it’s still worth it in 2025.
There’s also the new M4 Max Mac Studio (32GB) available for about $2400 CAD. I’m mainly planning to run local LLM inference (30B parameter range) using tools like Ollama or MLX — nothing super intensive, just for testing and experimentation.
Would the newer M4 Max with less RAM offer significantly better performance for this kind of use case? Or would the extra memory on the M1 Max still hold up better with larger models?
| 2025-04-20T14:42:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3o27g/m1_max_mac_studio_64gb_for_2000_cad_vs_m4_max/
|
iijei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3o27g
| false | null |
t3_1k3o27g
|
/r/LocalLLaMA/comments/1k3o27g/m1_max_mac_studio_64gb_for_2000_cad_vs_m4_max/
| false | false |
self
| 0 | null |
Google's Agent2Agent Protocol Explained
| 25 |
Wrote a short write-up explaining Google's Agent2Agent (A2A) protocol; link below.
| 2025-04-20T14:46:14 |
https://open.substack.com/pub/devshorts/p/agent2agent-a2a-protocol-explained?r=1cg0b&utm_medium=ios
|
aravindputrevu
|
open.substack.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3o50u
| false | null |
t3_1k3o50u
|
/r/LocalLLaMA/comments/1k3o50u/googles_agent2agent_protocol_explained/
| false | false |
default
| 25 |
{'enabled': False, 'images': [{'id': '-lAs0NZjfM7YDJoAh8dLjuw4i2z9rzTP0f1O8Vr2XC8', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?width=108&crop=smart&auto=webp&s=a777082cd307f967e08617b159e1cb606655212e', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?width=216&crop=smart&auto=webp&s=6995df47f66821337eabc123ccd90b85f95c5abd', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?width=320&crop=smart&auto=webp&s=e013ceb897b5779d8fe132b9a579c3cc6c397f69', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?width=640&crop=smart&auto=webp&s=3e4d43c53fba25be4cbc0534abb32f5802530b07', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?width=960&crop=smart&auto=webp&s=abc1c9498690fafd129b69a9e757950455f5a03f', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Tf5VQdhFACHFr9kuFzXJMOWjLO-fGpy5FF5hRUHxBPg.jpg?auto=webp&s=472d3ed56d16ff5f566c5dc511e4fd589dcaa90c', 'width': 1060}, 'variants': {}}]}
|
Should I buy the Acemagic HX 370 barebones now and upgrade to 128GB RAM, or wait for Framework AMD 395 / Blackwell?
| 1 |
[removed]
| 2025-04-20T15:00:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3ofrf/should_i_buy_the_acemagic_hx_370_barebones_now/
|
Regular_Reference735
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3ofrf
| false | null |
t3_1k3ofrf
|
/r/LocalLLaMA/comments/1k3ofrf/should_i_buy_the_acemagic_hx_370_barebones_now/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'EVonV-gKQvOBEJ3pN8V4rMw9bumGa88P7zqAv7obENI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=108&crop=smart&auto=webp&s=d23df1731223c0793bd342629efa773654d10a75', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=216&crop=smart&auto=webp&s=cfb5cf12332a651b04bd863877ecdc93671b5255', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=320&crop=smart&auto=webp&s=edcd4604bc675e035898682bc2c924c31a73d8dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=640&crop=smart&auto=webp&s=378c20266e713bd03e07579d64396a76c016a12f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=960&crop=smart&auto=webp&s=1cf050b860dc2285547b205f92b96be37f2a9bd4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=1080&crop=smart&auto=webp&s=db87e60e558994b6fda965ea703de2473604da61', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?auto=webp&s=750aeb5103af43e0fbe3245ffc7d829da1949a14', 'width': 1200}, 'variants': {}}]}
|
Should I buy the Acemagic HX 370 barebones now and upgrade to 128GB RAM, or wait for Framework AMD 395 / Blackwell?
| 1 |
[removed]
| 2025-04-20T15:04:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3oj9k/should_i_buy_the_acemagic_hx_370_barebones_now/
|
ellykunz1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3oj9k
| false | null |
t3_1k3oj9k
|
/r/LocalLLaMA/comments/1k3oj9k/should_i_buy_the_acemagic_hx_370_barebones_now/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'EVonV-gKQvOBEJ3pN8V4rMw9bumGa88P7zqAv7obENI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=108&crop=smart&auto=webp&s=d23df1731223c0793bd342629efa773654d10a75', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=216&crop=smart&auto=webp&s=cfb5cf12332a651b04bd863877ecdc93671b5255', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=320&crop=smart&auto=webp&s=edcd4604bc675e035898682bc2c924c31a73d8dd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=640&crop=smart&auto=webp&s=378c20266e713bd03e07579d64396a76c016a12f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=960&crop=smart&auto=webp&s=1cf050b860dc2285547b205f92b96be37f2a9bd4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?width=1080&crop=smart&auto=webp&s=db87e60e558994b6fda965ea703de2473604da61', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Gq4M64Vy4cCN2PW331B8OFrb2MzqhOW4t_C_EH2N4p0.jpg?auto=webp&s=750aeb5103af43e0fbe3245ffc7d829da1949a14', 'width': 1200}, 'variants': {}}]}
|
How to succeed with AI Agents — it starts with your data
| 0 | 2025-04-20T15:07:37 |
https://medium.com/neuml/ai-agents-how-to-be-successful-e8087b35f90d
|
davidmezzetti
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3olzg
| false | null |
t3_1k3olzg
|
/r/LocalLLaMA/comments/1k3olzg/how_to_succeed_with_ai_agents_it_starts_with_your/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'cK7_3SUs9oOuk9OP32ZhYqOD-k70LNudyIHFnVRIguA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=108&crop=smart&auto=webp&s=75e5e6954d70e90795b88d6fcee47d37eca0cb97', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=216&crop=smart&auto=webp&s=248a77ecebaef6af990105031e88d5b8246ac6e7', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=320&crop=smart&auto=webp&s=9ebb5d927a4172712db8de1df45bca5abb07faba', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=640&crop=smart&auto=webp&s=7ac4830c5e7abb91d5d100b2b1a2d5e706b839ba', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=960&crop=smart&auto=webp&s=5a43654c2d2819a8e67b76901934121f90064cee', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?width=1080&crop=smart&auto=webp&s=e88e55771772e0331a91324a1e87d31b70f082e9', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/_7QNMW5wmF_hoq1bmmRIErzvQ_G1OB4-qyx1Zl816Ao.jpg?auto=webp&s=8e49c7375040856425ab6124c244fab833f3febf', 'width': 1200}, 'variants': {}}]}
|
||
OOM while using Qlora LLAma3
| 1 |
[removed]
| 2025-04-20T15:13:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3oqy4/oom_while_using_qlora_llama3/
|
ChimSau19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3oqy4
| false | null |
t3_1k3oqy4
|
/r/LocalLLaMA/comments/1k3oqy4/oom_while_using_qlora_llama3/
| false | false |
self
| 1 | null |
Is there anything like an AI assistant for a Linux operating system?
| 7 |
Not just for programming-related tasks, but also something that can recommend packages/software to install or use, offer troubleshooting tips, etc. Basically a model with good technical knowledge (not just programming), or am I asking for too much?
| 2025-04-20T15:14:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3or5z/is_there_anything_like_an_ai_assistant_for_a/
|
prusswan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3or5z
| false | null |
t3_1k3or5z
|
/r/LocalLLaMA/comments/1k3or5z/is_there_anything_like_an_ai_assistant_for_a/
| false | false |
self
| 7 | null |
LightRAG Chunking Strategies
| 7 |
Hi everyone,
I’m using LightRAG and I’m trying to figure out the best way to chunk my data before indexing. My sources include:
1. XML data (\~300 MB)
2. Source code (200+ files)
What chunking strategies do you recommend for these types of data? Should I use fixed-size chunks, split by structure (like tags or functions), or something else?
Any tips or examples would be really helpful.
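Below is a minimal sketch of the structure-aware option (one chunk per top-level XML element, one chunk per function/class for source files). The `MAX_CHARS` budget, the Python-style function regex, and the helper names are my own assumptions, not anything from LightRAG; adapt the boundaries per language, and for a ~300 MB XML file consider `iterparse` instead of loading it all at once.

```python
import re
import xml.etree.ElementTree as ET

MAX_CHARS = 4000  # rough stand-in for a token budget; tune for your embedder

def chunk_xml(path: str) -> list[str]:
    """One chunk per top-level element; oversized elements fall back to fixed-size slices."""
    # Note: ET.parse loads the whole file; for ~300 MB, ET.iterparse streams instead.
    root = ET.parse(path).getroot()
    chunks = []
    for child in root:
        text = ET.tostring(child, encoding="unicode")
        if len(text) <= MAX_CHARS:
            chunks.append(text)
        else:
            chunks.extend(text[i:i + MAX_CHARS] for i in range(0, len(text), MAX_CHARS))
    return chunks

def chunk_source(path: str) -> list[str]:
    """One chunk per def/class block so definitions stay intact (regex assumes Python-style code)."""
    with open(path, encoding="utf-8") as f:
        code = f.read()
    parts = re.split(r"(?m)^(?=def |class )", code)  # split at function/class boundaries
    chunks = []
    for part in parts:
        if not part.strip():
            continue
        if len(part) <= MAX_CHARS:
            chunks.append(part)
        else:
            chunks.extend(part[i:i + MAX_CHARS] for i in range(0, len(part), MAX_CHARS))
    return chunks
```

The idea is simply to keep semantic units (an element, a function) intact so retrieval returns coherent context, with fixed-size slicing only as a fallback for oversized units.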
| 2025-04-20T15:36:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3p96f/lightrag_chunking_strategies/
|
umen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3p96f
| false | null |
t3_1k3p96f
|
/r/LocalLLaMA/comments/1k3p96f/lightrag_chunking_strategies/
| false | false |
self
| 7 | null |
Is anyone using llama swap with a 24GB video card? If so, can I have your config.yaml?
| 1 |
[removed]
| 2025-04-20T15:40:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3pc4h/is_anyone_using_llama_swap_with_a_24gb_video_card/
|
randomsolutions1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pc4h
| false | null |
t3_1k3pc4h
|
/r/LocalLLaMA/comments/1k3pc4h/is_anyone_using_llama_swap_with_a_24gb_video_card/
| false | false |
self
| 1 | null |
Math ??
| 1 |
[removed]
| 2025-04-20T15:48:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3pivq/math/
|
Specific_Condition80
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pivq
| false | null |
t3_1k3pivq
|
/r/LocalLLaMA/comments/1k3pivq/math/
| false | false |
self
| 1 | null |
Is this build worth investing?
| 0 |
Dear community, I'm trying to get hold of refurbished systems to run the new Llama 4 models, specifically Maverick. Currently I have a system with a 12th-gen i9 NUC, 64GB DDR4-3200, and 2x A4000, one in the PCIe x16 slot and the other on PCIe x4 via an SSD slot using OCuLink. If I load the Unsloth Q2K_XXL GGUF using KoboldCpp with mmap, the prompt processing times are really, really bad: for 6K context it takes about 30 minutes. Generation speed is about 1.5 t/s.
So, in hopes of fitting the model in RAM to get better speeds, and maybe trying bigger MoEs like DeepSeek in the future, I wanted to get a system like the one in the picture. I'm a student, so budget is extremely important. I will get in touch with the seller to check whether I can connect GPUs to this server, but if we're only talking about CPU and RAM, what kind of performance can I expect? Would it be possible to get, say, ~5 t/s generation and decent prompt processing speeds once I max out the RAM, which can go to 1.5TB? Thank you.
| 2025-04-20T15:51:43 |
https://www.reddit.com/gallery/1k3pl36
|
lacerating_aura
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pl36
| false | null |
t3_1k3pl36
|
/r/LocalLLaMA/comments/1k3pl36/is_this_build_worth_investing/
| false | false | 0 | null |
|
Llama 4 - Slow Prompt Processing on Llama.cpp with partial offload
| 21 |
Playing with Maverick with the following command:
./llama-server -m maverick.gguf -c 16384 -ngl 99 -ot ".*ffn_.*_exps.*=CPU"
In theory this loads the \~14B worth of shared tensors onto the gpu,
And leaves the \~384B worth of MoE experts on the CPU.
At inference time all 14B on the GPU is active + 3B worth of experts from the CPU.
Generation speed is great at 25T/s
However, prompt processing speed is only 18 T/s.
I've never seen prefill slower than generation, so it feels like I'm doing something wrong...
Messing around a little, I realized I could double my prefill speed by switching from PCIe Gen3 to Gen4; the CPU also appears mostly idle during prefill.
Is there a command that will tell Llama.cpp to do the prefill for the CPU layers on CPU?
Any other tweaks to get faster prefill?
This is Llama.cpp, 1 RTX3090, and a 16 core 7F52 Epyc (DDR4)
KTransformers already does something like this and gets over 100 T/s prefill on this model and hardware,
but I'm running into a bug where it loses its mind at longer context lengths.
| 2025-04-20T15:52:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3plzq/llama_4_slow_prompt_processing_on_llamacpp_with/
|
Conscious_Cut_6144
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3plzq
| false | null |
t3_1k3plzq
|
/r/LocalLLaMA/comments/1k3plzq/llama_4_slow_prompt_processing_on_llamacpp_with/
| false | false |
self
| 21 | null |
Math ??
| 1 |
[removed]
| 2025-04-20T15:52:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3pm0y/math/
|
Specific_Condition80
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pm0y
| false | null |
t3_1k3pm0y
|
/r/LocalLLaMA/comments/1k3pm0y/math/
| false | false |
self
| 1 | null |
Hey guys nice to meet you all! I'm new here but wanted some assistance!
| 0 |
I have a 7950xt and a 6900 XT Red Devil with 128 GB of RAM. I'm on Ubuntu, running a ROCm Docker image that lets me run Ollama with support for my GPU.
The docker command i will share below:
sudo docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
I use VS code as my IDE and installed Continue along with a number of models.
Here is the issue: I see videos of people showing Continue and everything is always... fast? Like, smooth and fast? Like you were using Cursor with Claude.
Mine is insanely slow. It's slow to edit things, slow to produce answers, and it gets even slower if I prompt something big.
I see this behavior with pretty much all the coding models I've tried. For consistency, I'm going to use this model as the reference:
Yi-Coder:Latest
Are there any tips I could use to get the most out of my models? Maybe a solution without Ollama? I have 128 GB of RAM and I think I could be using it to gain some speed somehow.
Thank you in advance!
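A quick way to narrow this down is to confirm the model is actually resident in VRAM and to measure raw generation speed outside the editor: if Ollama itself is fast, the bottleneck is likely Continue's settings rather than the GPU. This is only a diagnostic sketch against Ollama's HTTP API; the host/port and the `yi-coder:latest` tag come from the setup described above, and the field names assume the current `/api/ps` and `/api/generate` responses.

```python
import time
import requests

OLLAMA = "http://localhost:11434"

# 1) Is the loaded model actually in VRAM? If size_vram is far below size,
#    layers spilled to system RAM and everything will crawl.
ps = requests.get(f"{OLLAMA}/api/ps", timeout=10).json()
for m in ps.get("models", []):
    print(m["name"], "size:", m.get("size"), "size_vram:", m.get("size_vram"))

# 2) Rough tokens/sec for a short completion, measured outside Continue.
t0 = time.time()
resp = requests.post(
    f"{OLLAMA}/api/generate",
    json={"model": "yi-coder:latest", "prompt": "Write a Python hello world.", "stream": False},
    timeout=600,
).json()
elapsed = time.time() - t0
tokens = resp.get("eval_count", 0)
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```

If the raw numbers look fine here, the usual suspects are the editor re-sending a large context on every request or an oversized autocomplete model; if they're bad, the model probably isn't fully offloaded to the 6900 XT.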
| 2025-04-20T15:55:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3pohq/hey_guys_nice_to_meet_you_all_im_new_here_but/
|
charlescleivin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pohq
| false | null |
t3_1k3pohq
|
/r/LocalLLaMA/comments/1k3pohq/hey_guys_nice_to_meet_you_all_im_new_here_but/
| false | false |
self
| 0 | null |
OOM on T4 and A4000 while fine-tuning LLaMA 3.2-1B
| 1 |
[removed]
| 2025-04-20T16:05:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3pw3t/oom_on_t4_and_a4000_while_finetuning_llama_321b/
|
ChimSau19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pw3t
| false | null |
t3_1k3pw3t
|
/r/LocalLLaMA/comments/1k3pw3t/oom_on_t4_and_a4000_while_finetuning_llama_321b/
| false | false |
self
| 1 | null |
Intel releases AI Playground software for generative AI as open source
| 199 |
Announcement video: https://www.youtube.com/watch?v=dlNvZu-vzxU
**Description**
AI Playground open source project and AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. AI Playground leverages libraries from GitHub and Huggingface which may not be available in all countries world-wide. AI Playground supports many Gen AI libraries and models including:
- Image Diffusion: Stable Diffusion 1.5, SDXL, Flux.1-Schnell, LTX-Video
- LLM (Safetensor PyTorch): DeepSeek R1 models, Phi3, Qwen2, Mistral
- LLM (GGUF): Llama 3.1, Llama 3.2
- LLM (OpenVINO): TinyLlama, Mistral 7B, Phi3 mini, Phi3.5 mini
| 2025-04-20T16:05:18 |
https://github.com/intel/AI-Playground
|
Balance-
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3pw8n
| false | null |
t3_1k3pw8n
|
/r/LocalLLaMA/comments/1k3pw8n/intel_releases_ai_playground_software_for/
| false | false | 199 |
{'enabled': False, 'images': [{'id': 'zQAl9NtfYdPDeTY1skKw3oP9dTrVw_hpGQh3B-j-bJI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=108&crop=smart&auto=webp&s=bddee9e107a56e2c1d2c5854e0a53daee1afa069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=216&crop=smart&auto=webp&s=7bf3813a1b82d811bbe64538482bc1c7862ace6e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=320&crop=smart&auto=webp&s=4bc60ad61ae440ae97d0c7e2c0a31cfd5e5b589e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=640&crop=smart&auto=webp&s=cb487f61c95f7de558ca1ddf5ec1b62010b36b5e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=960&crop=smart&auto=webp&s=678167f1b10247ee5d71a5f32e1ef4905a02149a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?width=1080&crop=smart&auto=webp&s=6f102f08b2621499459a06456cb917f46d667be5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UgxZ6n5LChFWd0HyYltaWavgJZl1YXoXz0YJ03N3rv0.jpg?auto=webp&s=c239296ddadb4e05020f5136d7f7c9b6343c83a4', 'width': 1200}, 'variants': {}}]}
|
|
Show LocalLLaMA: Swarm Debugging with MCP
| 1 |
[removed]
| 2025-04-20T16:09:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3q003/show_localllama_swarm_debugging_with_mcp/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3q003
| false | null |
t3_1k3q003
|
/r/LocalLLaMA/comments/1k3q003/show_localllama_swarm_debugging_with_mcp/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'iKoHZrkUD0aPQK-UqsxowlyA4RK8oLb7OUJ4jYM8mL0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=108&crop=smart&auto=webp&s=b8e6a5e3906ab2de70324de1f89bd412bd70265a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=216&crop=smart&auto=webp&s=b32884c8db0973c5fc9fb6d1ba3e045c647badd3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=320&crop=smart&auto=webp&s=6cd70ec8ec71f61e19b48a3d212eef579db7fab2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=640&crop=smart&auto=webp&s=2898dd5770bd6d68ce974820fa4a859316fab4e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=960&crop=smart&auto=webp&s=6091195e73c5c239460223e33f6733a813dee1e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?width=1080&crop=smart&auto=webp&s=5b10a869080b0f098ca2f93564c77874a565cf85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WZG06Ii_qnjttHCuHKFFzTs5dGJ2cQMe0PG7zGh3Dlk.jpg?auto=webp&s=4ed51fe5c0077a717a37f81dbf0061301c196227', 'width': 1200}, 'variants': {}}]}
|
FULL LEAKED Windsurf Agent System Prompts and Internal Tools
| 6 |
(Latest system prompt: 20/04/2025)
I managed to get the full official Windsurf Agent system prompts, including its internal tools (JSON). Over 200 lines. Definitely worth taking a look.
You can check it out at: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
| 2025-04-20T16:59:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3r3eo/full_leaked_windsurf_agent_system_prompts_and/
|
Independent-Box-898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3r3eo
| false | null |
t3_1k3r3eo
|
/r/LocalLLaMA/comments/1k3r3eo/full_leaked_windsurf_agent_system_prompts_and/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': 'XPj97HeotBMzO0qSpHz-f0g5fzb-EMmJ8flCdg8qOSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=108&crop=smart&auto=webp&s=f33349ce146f4e94da825b2e35835d3fe3a44d7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=216&crop=smart&auto=webp&s=c19f539d3c063899cf616df31fa6e19a5964f49d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=320&crop=smart&auto=webp&s=e8cb265249bf41f87f151374132df373a0d46931', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=640&crop=smart&auto=webp&s=9a980fcdd0e33f7e6e1a15f521496aa4094f20c7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=960&crop=smart&auto=webp&s=155ffb4f53124ce9e49cad567c5392a684644f21', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?width=1080&crop=smart&auto=webp&s=79fdeac9a53ae75e400d025eec1017da3faede6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Juxe_uxLMFaLQj4xaNxDTADlfdS0z2O__5bbgKgqdnY.jpg?auto=webp&s=3202602b92aaea504d823881c6cd50a133e2be14', 'width': 1200}, 'variants': {}}]}
|
What are your favorite models for professional use?
| 9 |
Looking for some decent 8B or 14B models for professional use. I don't do a lot of coding, mostly some accounting and data analytics, but I mainly need it to roleplay as a professional, write emails, and give good advice.
| 2025-04-20T17:40:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3s1cf/what_are_your_favorite_models_for_professional_use/
|
intimate_sniffer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3s1cf
| false | null |
t3_1k3s1cf
|
/r/LocalLLaMA/comments/1k3s1cf/what_are_your_favorite_models_for_professional_use/
| false | false |
self
| 9 | null |
Lm studio model to create spicy prompts to rival Spicy Flux Prompt Creator
| 0 |
Currently I use Spicy Flux Prompt Creator in ChatGPT to create very nice prompts for my image-gen workflow. This tool does a nice job of being creative and outputting some really nice prompts, but it tends to keep things pretty PG-13. I recently started using LM Studio and found some uncensored models, but I'm curious whether anyone has found a model that will let me create prompts as robust as the GPT Spicy Flux ones. Does anyone have any advice or experience with such a model inside LM Studio?
| 2025-04-20T17:59:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3sgxn/lm_studio_model_to_create_spicy_prompts_to_rival/
|
Shyt4brains
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3sgxn
| false | null |
t3_1k3sgxn
|
/r/LocalLLaMA/comments/1k3sgxn/lm_studio_model_to_create_spicy_prompts_to_rival/
| false | false |
self
| 0 | null |
Rate my build
| 1 |
[removed]
| 2025-04-20T18:11:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3sqpu/rate_my_build/
|
Turnipbeater666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3sqpu
| false | null |
t3_1k3sqpu
|
/r/LocalLLaMA/comments/1k3sqpu/rate_my_build/
| false | false |
self
| 1 | null |
What OS are you ladies and gent running?
| 27 |
It seems to me there are a lot of Mac users around here. Let’s do some good old statistics.
[View Poll](https://www.reddit.com/poll/1k3t3wl)
| 2025-04-20T18:27:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3t3wl/what_os_are_you_ladies_and_gent_running/
|
No-Report-1805
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3t3wl
| false | null |
t3_1k3t3wl
|
/r/LocalLLaMA/comments/1k3t3wl/what_os_are_you_ladies_and_gent_running/
| false | false |
self
| 27 | null |
Drop-In main.py for Open WebUI – Adds Memory, Time Awareness, and Personality Support
| 1 |
[removed]
| 2025-04-20T18:35:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3taaj/dropin_mainpy_for_open_webui_adds_memory_time/
|
Affectionate-Yak-308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3taaj
| false | null |
t3_1k3taaj
|
/r/LocalLLaMA/comments/1k3taaj/dropin_mainpy_for_open_webui_adds_memory_time/
| false | false |
self
| 1 | null |
RX 7900 XTX vs RTX 3090 for a AI 'server' PC. What would you do?
| 2 |
Last year I upgraded my main PC, which has a 4090. The old hardware (8700K, 32GB DDR4) landed in a second 'server' PC with no good GPU at all. Now I plan to upgrade this PC with a solid GPU for AI only.
My plan is to run a chatbot on this PC 24/7 with KoboldCpp, a matching LLM, and STT/TTS, maybe even a simple Stable Diffusion install (for anything better I have my main PC with the 4090). Performance is also important to me, to minimise latency.
Of course, I would prefer to have a 5090 or something even more powerful, but as I'm not swimming in money, the plan is to invest a maximum of 1100 euros (which I'm still saving). You can't get a second-hand 4090 for that kind of money at the moment. A 3090 would be a bit cheaper, but only second-hand. An RX 7900 XTX, on the other hand, would be available new with warranty.
That's why I keep going back and forth. The second-hand market is always a bit risky. And AMD is catching up more and more to NVIDIA's CUDA with ROCm 6.x, while software support also seems to be getting better. Even if that's only on Linux, it's not a problem for a 'server' PC.
Oh, and buying a second card alongside my 4090 isn't possible with my current system: not enough case space, and a mainboard that would only give a second card PCIe 4.0 x4. I would need to spend a lot more money to change that. Besides, I've always wanted a little extra AI PC.
The long-term plan is to upgrade the extra AI PC's hardware for this purpose.
So what would you do?
| 2025-04-20T18:36:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3tblk/rx_7900_xtx_vs_rtx_3090_for_a_ai_server_pc_what/
|
Blizado
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3tblk
| false | null |
t3_1k3tblk
|
/r/LocalLLaMA/comments/1k3tblk/rx_7900_xtx_vs_rtx_3090_for_a_ai_server_pc_what/
| false | false |
self
| 2 | null |
Anyone running a 2 x 3060 setup? Thinking through upgrade options
| 3 |
I'm trying to think through the best options to upgrade my current setup in order to move up a "class" of local models and run more 32B and q3-q4 70B models, primarily for my own use. I'm not looking to let the data leave the home network for OpenRouter, etc.
I'm looking for input/suggestions with a budget of around $500-1000 to put in from here, but I don't want to blow the budget unless I need to.
Right now, I have the following setup:
|Main Computer:|Inference and Gaming Computer|
|:-|:-|
|Base M4 Mac (16gb/256)|3060 12G + 32G DDR4 (in SFF case)|
I can resell the base M4 mac mini for what I paid for it (<$450), so it's essentially a "trial" computer.
|Option 1: move up the Mac food chain|Option 2: 2x 3060 12GB|Option 3: get into weird configs and slower t/s|
|:-|:-|:-|
|M4 Pro 48gb (32gb available for inference) or M4 Max 36gb (24gb available for inference).|Existing Pc with one 3060 would need new case, PSU, & motherboard (24gb Vram at 3060 speeds)|M4 (base) 32gb RAM (24 gb available for inference)|
|net cost of +$1200-1250, but it does improve my day-to-day PC|around +$525 net, would then still use the M4 mini for most daily work|Around +$430 net, might end up no more capable than what I already have, though|
**What would you suggest from here?**
Is there anyone out there using a 2 x 3060 setup and happy with it?
| 2025-04-20T18:39:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3tdyt/anyone_running_a_2_x_3060_setup_thinking_through/
|
amusiccale
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3tdyt
| false | null |
t3_1k3tdyt
|
/r/LocalLLaMA/comments/1k3tdyt/anyone_running_a_2_x_3060_setup_thinking_through/
| false | false |
self
| 3 | null |
75,000 tokens per second
| 1 |
[removed]
| 2025-04-20T19:19:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3u94a/75000_tokens_per_second/
|
Professional_Sea_807
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3u94a
| false | null |
t3_1k3u94a
|
/r/LocalLLaMA/comments/1k3u94a/75000_tokens_per_second/
| false | false |
self
| 1 | null |
SOTA Quantitative Spatial Reasoning Performance from 3B VLM
| 29 |
Updated **SpaceThinker** docs to include a live demo, .gguf weights, and evaluation using [Q-Spatial-Bench](https://andrewliao11.github.io/spatial_prompt/)
This 3B VLM scores on par with the closed, frontier model APIs compared in the project.
**Space:** [https://huggingface.co/spaces/remyxai/SpaceThinker-Qwen2.5VL-3B](https://huggingface.co/spaces/remyxai/SpaceThinker-Qwen2.5VL-3B)
**Model:** [https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B](https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B)
**Colab:** [https://colab.research.google.com/drive/1buEe2QC4\_pnrJwQ9XyRAH7RfaIa6pbex?usp=sharing](https://colab.research.google.com/drive/1buEe2QC4_pnrJwQ9XyRAH7RfaIa6pbex?usp=sharing)
| 2025-04-20T19:38:37 |
https://www.reddit.com/gallery/1k3unyo
|
remyxai
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3unyo
| false | null |
t3_1k3unyo
|
/r/LocalLLaMA/comments/1k3unyo/sota_quantitative_spatial_reasoning_performance/
| false | false | 29 | null |
|
What’s Your Go-To Local LLM Setup Right Now?
| 54 |
I’ve been experimenting with a few models for summarizing Reddit/blog posts and some light coding tasks, but I keep getting overwhelmed by the sheer number of options and frameworks out there.
| 2025-04-20T19:40:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3up8v/whats_your_goto_local_llm_setup_right_now/
|
techblooded
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3up8v
| false | null |
t3_1k3up8v
|
/r/LocalLLaMA/comments/1k3up8v/whats_your_goto_local_llm_setup_right_now/
| false | false |
self
| 54 | null |
Is anyone using llama swap with a 24GB video card? If so, can I have your config.yaml?
| 4 |
I have an RTX 3090 and just found llama-swap. There are so many different models I want to try out, but coming up with all of the individual parameters is going to take a while, and I want to get on with building against the latest and greatest models ASAP! I was using gemma3:27b on Ollama and was getting pretty good results. I'd love to have more top-of-the-line options to try.
Thanks!
| 2025-04-20T19:40:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3uph1/is_anyone_using_llama_swap_with_a_24gb_video_card/
|
randomsolutions1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3uph1
| false | null |
t3_1k3uph1
|
/r/LocalLLaMA/comments/1k3uph1/is_anyone_using_llama_swap_with_a_24gb_video_card/
| false | false |
self
| 4 | null |
Llama gaslighting me about its image generation capabilities
| 0 |
My partner and I were having a discussion about the legal rights to AI generated artwork, and I thought it would be interesting to hear an AI's perspective...
| 2025-04-20T20:13:05 |
https://www.reddit.com/gallery/1k3veac
|
Ok_Professi
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3veac
| false | null |
t3_1k3veac
|
/r/LocalLLaMA/comments/1k3veac/llama_gaslighting_me_about_its_image_generation/
| false | false | 0 | null |
|
Why would the tokenizer for encoder-decoder model for machine translation use bos_token_id == eos_token_id? How does the model know when a sequence ends?
| 1 |
[removed]
| 2025-04-20T20:13:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3vedz/why_would_the_tokenizer_for_encoderdecoder_model/
|
Franck_Dernoncourt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3vedz
| false | null |
t3_1k3vedz
|
/r/LocalLLaMA/comments/1k3vedz/why_would_the_tokenizer_for_encoderdecoder_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2mgQzMAfUZI5FNMapSQtO2fPB16hJMuok29TS7JeIo4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=108&crop=smart&auto=webp&s=f03e58db196c3b798c74a55aecb04638da790242', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=216&crop=smart&auto=webp&s=4077db0df0f97d9a4450fa285a40d1eaa3851033', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=320&crop=smart&auto=webp&s=23e4bde72ee596fa04c2e911ca45f997c7741c8a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=640&crop=smart&auto=webp&s=118119732f844afc94521624bb7dcca9b38a12f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=960&crop=smart&auto=webp&s=0e7862ab8748d301640c9d957df4596093c1817b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=1080&crop=smart&auto=webp&s=5e83dff22b9b7093d44a70d21382c8830145705b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?auto=webp&s=1113c9832bfae3543106c3a95b181e76ba6ae198', 'width': 1200}, 'variants': {}}]}
|
Introducing The Advanced Cognitive Inoculation Prompt (ACIP)
| 0 |
I created this prompt and wrote the following article explaining the background and thought process that went into making it:
[https://fixmydocuments.com/blog/08\_protecting\_against\_prompt\_injection](https://fixmydocuments.com/blog/08_protecting_against_prompt_injection)
Let me know what you guys think!
| 2025-04-20T21:07:16 |
https://github.com/Dicklesworthstone/acip
|
dicklesworth
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3wjug
| false | null |
t3_1k3wjug
|
/r/LocalLLaMA/comments/1k3wjug/introducing_the_advanced_cognitive_inoculation/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 's4URX3QoViGIfesWpeea2rr9rGE5wD0J6d0N_9TAQp0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=108&crop=smart&auto=webp&s=2114213587c0019d426519d9c8a150d5e8f8b689', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=216&crop=smart&auto=webp&s=f3e321645a10809b7cdf6ac2d2e388504b08070b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=320&crop=smart&auto=webp&s=df171b412f194770118321a0ad57eed809bc83f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=640&crop=smart&auto=webp&s=612c72baa8383e0be6442e0493767f2368be5f42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=960&crop=smart&auto=webp&s=9ff58fe097990a1a6531e27df1b58668f17ab077', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?width=1080&crop=smart&auto=webp&s=2039e397d089e83d2846f1537844fd771fb9fcc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lqt8O7Z_8qa6mOkhYk9gq8tgkdqFFeMOrAy0dqJOSoI.jpg?auto=webp&s=345c1ddd811627186ca5f738e083e76a8e6bb5d8', 'width': 1200}, 'variants': {}}]}
|
|
Gemma 3 with Donald prompt. I'm starting to get scared to ask anything🫣
| 0 | 2025-04-20T21:09:33 |
Illustrious-Dot-6888
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3wln1
| false | null |
t3_1k3wln1
|
/r/LocalLLaMA/comments/1k3wln1/gemma_3_with_donald_prompt_im_starting_to_get/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '6BL83Cm08uvq8HcFyOyDN_dJvb2tcZAi_2Vq1RpUMPc', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=108&crop=smart&auto=webp&s=658213f66fec437082233f4e939e40391a7029c0', 'width': 108}, {'height': 373, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=216&crop=smart&auto=webp&s=32bd14994c2b48bfbfe8023fa9afdf24bb9c62dc', 'width': 216}, {'height': 553, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=320&crop=smart&auto=webp&s=f1145871e79942d76a360e0da93f3e76d44e7a64', 'width': 320}, {'height': 1106, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=640&crop=smart&auto=webp&s=a9dc4f56232ded4a2aea68748cee5626f45bc10b', 'width': 640}, {'height': 1659, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=960&crop=smart&auto=webp&s=8e607382c567fb08a243c24841739f110fa6390c', 'width': 960}, {'height': 1867, 'url': 'https://preview.redd.it/zkixp37622we1.png?width=1080&crop=smart&auto=webp&s=93cc5ce91b166ce88fdf62d918e963a83e4dcdf0', 'width': 1080}], 'source': {'height': 1867, 'url': 'https://preview.redd.it/zkixp37622we1.png?auto=webp&s=029a56af0f0ec92e0fc10f02a9c0f3c0c1c5cd4d', 'width': 1080}, 'variants': {}}]}
|
|||
nsfw orpheus early v1
| 352 |
[https://huggingface.co/MrDragonFox/mOrpheus\_3B-1Base\_early\_preview](https://huggingface.co/MrDragonFox/mOrpheus_3B-1Base_early_preview)
| 2025-04-20T21:21:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3wuud/nsfw_orpheus_early_v1/
|
MrAlienOverLord
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3wuud
| false | null |
t3_1k3wuud
|
/r/LocalLLaMA/comments/1k3wuud/nsfw_orpheus_early_v1/
| false | false |
nsfw
| 352 |
{'enabled': False, 'images': [{'id': '3t_QovEs8eVQ3XSrD5tXR4TF1nZpXQmhgFFtxZxFAEs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=108&crop=smart&auto=webp&s=fe471d9418916e360027792e176939a1c6e9f580', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=216&crop=smart&auto=webp&s=d146d4befeb957c915f9ba50ab298fa056bd1aa4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=320&crop=smart&auto=webp&s=e5b7ce2d429db6bb371c5946ff29aa7842216a38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=640&crop=smart&auto=webp&s=8c3326e3295d2876fa69d9560483cd485a395c32', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=960&crop=smart&auto=webp&s=cbe1d3004808dab3e982c275cade2cd6d0e8fd56', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=1080&crop=smart&auto=webp&s=458453cb95e834e7317e8db5a465bcf29024f3c7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?auto=webp&s=6cb34b2ce2959b59d58740b681fc927985bb2f6a', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=9b9d1cbc14d30eb51b440e60897fca4e5e9f91ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=358796f10b3b0c3c7b51a9a64485677506b67eae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=cc4bd5cf730116534858c344372cd59c6d035b5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6ca2a3b3a4a195dfe956cd5998c0bc6fb31bfd35', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b8ac43174eb897ba7d8b5effeea056c5ee489aff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=f88f030ab373c4a3994d0c568ab7ea4339b781ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?blur=40&format=pjpg&auto=webp&s=6b105602d9dcea69542b4e6b368ea4055dee43b2', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=9b9d1cbc14d30eb51b440e60897fca4e5e9f91ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=358796f10b3b0c3c7b51a9a64485677506b67eae', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=cc4bd5cf730116534858c344372cd59c6d035b5a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6ca2a3b3a4a195dfe956cd5998c0bc6fb31bfd35', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b8ac43174eb897ba7d8b5effeea056c5ee489aff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=f88f030ab373c4a3994d0c568ab7ea4339b781ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uCCTQJ_fZAjQ0OJCX_gpvzBe23i9I9JluJvpbDiEiec.jpg?blur=40&format=pjpg&auto=webp&s=6b105602d9dcea69542b4e6b368ea4055dee43b2', 'width': 1200}}}}]}
|
Control Your Spotify Playlist with an MCP Server
| 3 |
Do you ever feel like Spotify doesn’t understand your mood or keeps playing the same old songs? What if I told you that you could talk to your Spotify, ask it to play songs based on your mood, and even create a queue of songs that truly resonate with you?
In this tutorial, we will integrate a Spotify MCP server with the Claude Desktop application. This step-by-step guide will teach you how to install the application, set up the Spotify API, clone the Spotify MCP server, and seamlessly integrate it into Claude Desktop for a personalized and dynamic music experience.
| 2025-04-20T22:05:06 |
https://www.kdnuggets.com/control-spotify-playlist-with-mcp-server
|
kingabzpro
|
kdnuggets.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3xs55
| false | null |
t3_1k3xs55
|
/r/LocalLLaMA/comments/1k3xs55/control_your_spotify_playlist_with_an_mcp_server/
| false | false |
default
| 3 | null |
Built a new gaming rig and want turn my old one into an AI "server"
| 3 |
Hey friends! I recently finished building a new gaming rig. Normally I try to sell my old components, but this time I'm thinking of turning them into a little home server to run some LLMs and Stable Diffusion. I'm completely new to this.
I don't want to use my main rig because it's my work/gaming PC and I'd like to keep it separate. It needs to be accessible and ready 24/7, as I'm on call at weird hours, so I don't want to mess with it; I'd rather keep it stable and safe and not under heavy load unless necessary.
I've been lurking around here for a while and have seen a few posts from folks with a similar (but not identical) setup, and I was wondering whether, _realistically_, I'd be able to do anything decent with it. I have low expectations and don't mind if things are slow, but if the outputs aren't going to be good, I'd rather sell and offset some of the expense of the new machine.
Here are the specs:
- ROG Strix B450-F Gaming (AM4) https://rog.asus.com/motherboards/rog-strix/rog-strix-b450-f-gaming-model/
- Ryzen 7 5800X: https://www.amd.com/en/products/processors/desktops/ryzen/5000-series/amd-ryzen-7-5800x.html
- DDR4 32GB (3200mhz) RAM: https://www.teamgroupinc.com/en/product-detail/memory/T-FORCE/vulcan-z-ddr4-gray/vulcan-z-ddr4-gray-TLZGD432G3200HC16CDC01/
- Radeon RX 6950XT (16GB): https://www.amd.com/en/products/graphics/desktops/radeon/6000-series/amd-radeon-rx-6950-xt.html
That being said, I'd be willing to spend _some_ money on it but not too much, maybe upgrade the RAM or something like that but I've already spent quite a bit on the new machine and can't do much more than that.
What do you think?
| 2025-04-20T22:06:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3xsvz/built_a_new_gaming_rig_and_want_turn_my_old_one/
|
phoenixdow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3xsvz
| false | null |
t3_1k3xsvz
|
/r/LocalLLaMA/comments/1k3xsvz/built_a_new_gaming_rig_and_want_turn_my_old_one/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': '1z0mrE1cwFWk1vYrM5Ty4qZZBuL-uV2MakVNlPLr0Ac', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=108&crop=smart&auto=webp&s=22b7068510ab9c03a109bbe439056e9cb929c6d1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=216&crop=smart&auto=webp&s=1bc5e0b2b9b5fcec45d4576097c9d52ec9603d36', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=320&crop=smart&auto=webp&s=56ca9bdcc8356032c94f182d611a795fbec3bf2f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=640&crop=smart&auto=webp&s=93f1262616d70617edefd2dadd608759edcac80c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=960&crop=smart&auto=webp&s=52a548aba8beba50ae11c8880a491dc147355cad', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?width=1080&crop=smart&auto=webp&s=4b91116d37b55859535f6f4c73c4b86b43450e9e', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/ZZ-QFVbO8CVd2kgAxEiH-JhBkzILU-MLgtlVT0i5m04.jpg?auto=webp&s=ddeb81c5487dec2e942fa0f0ce51effa633ec150', 'width': 2000}, 'variants': {}}]}
|
I created a client-side AI research web app (fyi, auto-downloads gemma3:1b when opened)
| 0 |
As a distraction from finishing updates to my "real" research script (I continue to struggle with the mechanics behind correlation of in-text citations), I thought it would be fun to make a bastardized version that [fit in a basic web page](https://i.imgur.com/S2yIqX6.png), used transformers.js to ape the process with a tiny model, and doesn't need to cite its sources because it's generating slop anyway. It takes about a half hour to complete a given research instance - note it will ask for input on its research list 5-10 minutes in, so don't tab away and forget about it until after that - and spends the entire time frozen with misleading status messages. Here's some slop it wrote:
\---
Okay, let's craft a response that thoroughly addresses the query \[EDITOR'S NOTE: "how to democratize generating ai slop"\], incorporating the various research materials and aiming for a well-structured and comprehensive answer.
**Response to the Query: How to Democratize AI Sloping?**
**Introduction:**
The rapid advancement of Artificial Intelligence (AI) is profoundly changing the technological landscape. "Democratizing AI," the idea that more people and organizations have influence over the *safe and ethical* deployment of AI, is increasingly vital. While the rise of AI is undeniably exciting, it also presents significant challenges—from potential biases to the risks of misuse—which necessitate proactive consideration and, increasingly, democratic control. This response delves into the complexities of democratizing AI, exploring the multifaceted approaches, challenges, and potential solutions.
**Understanding the Foundation: Diverse Approaches and Research**
The initial research points to a multi-layered approach to democratizing AI, recognizing that simply "making" it accessible is insufficient.
* **The Role of Fine-Tuning and Retrieval Augmented Generation (RAG):**
* **The Challenge:** Traditional AI models require massive, often proprietary datasets. Democratization seeks to reduce the dependence on these datasets.
***RAG & Fine-Tuning:*** *Research like \[Source 1\](https://www.ibm.com/think/democratizing-ai) highlights RAG. It involves equipping AI models with* external\* knowledge—like the research from the paper, by making them aware of the world and their limitations. This drastically reduces reliance on the models’ own internal knowledge and allows for specialized, context-dependent applications with less data training. This approach is directly linked to increasing capabilities for more informed and accessible AI.
* Further exploration related to large language model techniques would involve utilizing retrieval augmented generation(RAG) techniques, allowing users to build on AI's output.
* **Open Source Accessibility Testing Tools:**
* **Digital A11y (Source 2):** This highlights the critical importance of accessibility. A11yWatch is an example of tooling focused specifically on web accessibility. It actively works to uncover and fix accessibility (including A11y) issues. Asquatasun, a powerful system for web accessibility analysis, showcases the technical development within this space as well.
* **The Impact of Ethical Considerations:**
* **Cross-Referencing Research:** The research from \[Source 3\] "The four kinds of democratization... " provides a foundational framework. This paper, along with studies into ethical AI use, are central to understanding the need for careful consideration.
* **Ethical Concerns and Challenges:**
* **Fairness and Bias:** AI models are trained on historical data, which often contains harmful biases. Over-reliance on these biases can perpetuate inequalities. More research is needed into specific methods to mitigate bias.
**Current Limitations and Ongoing Research** This is where the need for continued research remains vital.
* **Defining "Democracy":** The term itself is broad. True democratic access goes beyond simply providing access. It requires understanding, skill, and an understanding of the ethical implications.
* **The Interplay Between Technical & Ethical Dimensions:** The most effective models achieve a balance.
**Recommendations for Further Action:**
* **Multi-Stakeholder Collaboration:** Democratization requires collaboration between AI developers, ethicists, policymakers, and the public.
* **Education & Training:** It is critical to provide accessible, low-level education on AI basics and accessibility principles.
**Conclusion:**
Democratizing AI is a complex, ongoing process. By embracing techniques like RAG and making tools like Accessibility testing available, and building a robust ethical framework – and recognizing the limitations of our current development - greater community influence and broader accessibility can be achieved. The focus will need to shift toward inclusivity, transparency, and accountability
\---
**Note:** I've organized the response to be presented as a coherent narrative, demonstrating understanding of the core concepts and providing a clear pathway for further exploration.
Do you want me to elaborate on any specific aspect, provide additional citations, or refine the writing style?
\---
*Research conducted on: 4/20/2025*
| 2025-04-20T22:26:58 |
http://slopsear.ch/
|
atineiatte
|
slopsear.ch
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3y8j4
| false | null |
t3_1k3y8j4
|
/r/LocalLLaMA/comments/1k3y8j4/i_created_a_clientside_ai_research_web_app_fyi/
| false | false |
default
| 0 | null |
[Release] GPU Benchmark - Compare your Stable Diffusion performance globally
| 24 |
Hey everyone,
I just released **GPU Benchmark**, a simple open-source tool that measures how many Stable Diffusion images your GPU can generate in 5 minutes and compares your results with others worldwide on our leaderboard.
# What it does:
* Runs Stable Diffusion for exactly 5 minutes
* Counts how many images your GPU can generate
* Tracks GPU temperature (max and average)
* Anonymously submits results to a global leaderboard sorted by country
# Why I made this:
I was selling GPUs on eBay Kleinanzeigen and found the existing GPU health checks to be lacking; in particular, there were no benchmark tools that specifically run AI workloads.
# Installation is super simple:
pip install gpu-benchmark
# And running it is even simpler:
gpu-benchmark
The benchmark takes about 5 minutes after initial model loading. You can view all results on our online [benchmark results](https://www.unitedcompute.ai/gpu-benchmark).
# Compatible with:
* Any CUDA-compatible NVIDIA GPU
* Python
* Requires internet for result submission (but you can run offline too)
I'd love to hear your feedback and see your results! Has anyone else been looking for something like this?
Check out the [project Github website](https://github.com/yachty66/gpu-benchmark) for more info as well.
*Note: This is completely free and open-source - just a tool I built because I thought the community might find it useful.*
| 2025-04-20T22:38:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3ygru/release_gpu_benchmark_compare_your_stable/
|
yachty66
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3ygru
| false | null |
t3_1k3ygru
|
/r/LocalLLaMA/comments/1k3ygru/release_gpu_benchmark_compare_your_stable/
| false | false |
self
| 24 | null |
AI yes or no?
| 0 |
Hey guys if you find this interesting message me for the prompt engineering and ai community :)
| 2025-04-20T22:39:00 |
https://v.redd.it/rbt21llyh2we1
|
MorgancWilliams
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3yh7h
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rbt21llyh2we1/DASHPlaylist.mpd?a=1747780762%2CMjc3MTgxNmZlNzBkNzQxZDdjYjY1YzM1NGNkYTlhZjRlMmUxYjUxMDZkNWUwZGI3YjkzMmRjZWVjYTk2NWQzMg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/rbt21llyh2we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/rbt21llyh2we1/HLSPlaylist.m3u8?a=1747780762%2CYTFiNWMyMWNlOTU4MzhhMDQ2ODhkMTRiMWMyMTBlMzUyOWQ3Y2RlN2IwMzJiMjQ5OThhN2ZjMjU5OGIzZDE1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rbt21llyh2we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k3yh7h
|
/r/LocalLLaMA/comments/1k3yh7h/ai_yes_or_no/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?width=108&crop=smart&format=pjpg&auto=webp&s=cee8558a488c46b51be15acd3cb88e68bf784342', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca42975b965ba6494a1b9619f28fdacb1c1fb129', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?width=320&crop=smart&format=pjpg&auto=webp&s=c7ef0047011bc32100f29f7ee0b17e8bcc9744b5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?width=640&crop=smart&format=pjpg&auto=webp&s=522ea50ee0f869884dc72f69d9c6f51ea6f3f497', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?width=960&crop=smart&format=pjpg&auto=webp&s=8857cd264640639cf83ca52a94f958cb92a62196', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/cDdtbDNvZnloMndlMWvpPhAgz0JgKrZ80G-_I07Uf2OzHrPHyVUvWfqSvFiO.png?format=pjpg&auto=webp&s=f0ea365d6008c2d2b70b71e3ae06a8bc07976cdf', 'width': 1024}, 'variants': {}}]}
|
|
Let me know your thoughts :)
| 0 |
Hey guys, my free Skool community has over 875 members posting about the latest and best ChatGPT prompts and SaaS tools - let me know if you're interested :)
| 2025-04-20T22:53:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3yro2/let_me_know_your_thoughts/
|
MorgancWilliams
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3yro2
| false | null |
t3_1k3yro2
|
/r/LocalLLaMA/comments/1k3yro2/let_me_know_your_thoughts/
| false | false |
self
| 0 | null |
Usefulness of a single 3060 12gb
| 0 |
Is there anything useful I can actually do with 12GB of VRAM? Should I harvest the 1060s from my kids' computers? After staring long and hard and realizing that home LLMs must be the reason why GPU prices are insane, not scalpers, I'm kinda defeated. I started with the idea of downloading DeepSeek R1 since it was open source, and then when I realized I would need $100k worth of hardware to run it, I kinda don't see the point. It seems that for text-based applications, using smaller models might return "dumber" results, for lack of a better term. And even then, what could I gain from talking to an AI assistant anyway? The technology seems cool as hell, and I wrote a screenplay (I don't even write movies, ChatGPT just kept suggesting it) with ChatGPT online, fighting its terrible memory the whole time. How can a local model running on like 1% of the hardware even compete?
The Image generation models seem much better in comparison. I can imagine something and get a picture out of Stable Diffusion with some prodding. I don't know if I really have much need for it though.
I don't code, but that sounds like an interesting application for sure. I hear that the big models even need some corrections and error checking, but if I don't know much about code, I would probably just create more problems for myself on a model that could fit on my card, if such a model exists.
I love the idea, but what do I even do with these things?
| 2025-04-20T23:34:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k3zlji/usefulness_of_a_single_3060_12gb/
|
EsotericAbstractIdea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k3zlji
| false | null |
t3_1k3zlji
|
/r/LocalLLaMA/comments/1k3zlji/usefulness_of_a_single_3060_12gb/
| false | false |
self
| 0 | null |
Why is Ollama butchering my "needle in haystack" tests?
| 10 |
Here is a prompt I'm giving to a bunch of LLMs to test their ability to retrieve a snippet of information from a large portion of text. The text itself is only about 18k-ish tokens.
[https://pastebin.com/32cgYjLZ](https://pastebin.com/32cgYjLZ)
When I put the prompt into Ollama, regardless of the model I use and *even if* the model explicitly supports large context sizes (128k) and I use q8 quantizations, no LLM is ever able to give me the right answer.
However when tested through OpenRouter all the LLMs I test return the right answer: Llama 4 Scout, Phi 4, Gemma 3 12b, Gemma 3 27b, Llama 4 Maverick, Mistral Small, QwQ 32B, Nvidia Llama 3.3 Nemotron
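One thing worth ruling out on the Ollama side is the runtime context window (num_ctx), which defaults to a small value and can silently truncate long prompts regardless of what the model itself supports. A minimal sketch of forcing a larger window through the REST API; the model tag, file name, and 32k value are placeholders, not from my actual runs:

```python
import requests

# Placeholder setup: "haystack_prompt.txt" stands in for the pastebin prompt,
# and "gemma3:12b" is just one of the models from the list above.
with open("haystack_prompt.txt") as f:
    prompt = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:12b",
        "prompt": prompt,
        "stream": False,
        "options": {"num_ctx": 32768},  # without this, the default window can truncate the haystack
    },
)
print(resp.json()["response"])
```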
| 2025-04-20T23:58:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k402rd/why_is_ollama_butchering_my_needle_in_haystack/
|
Jugg3rnaut
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k402rd
| false | null |
t3_1k402rd
|
/r/LocalLLaMA/comments/1k402rd/why_is_ollama_butchering_my_needle_in_haystack/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
A hump in the road
| 0 |
We will start with a bit of context.
Since December I have been experimenting with llms and got some impressive results, leading me to start doing things locally.
My current rig is;
Intel 13700k
Ddr4 3600mhz
Aorus Master 3080 10gb
Alphacool Eiswolf 2 Watercooler AIO for Aorus 3080/3090
BeQuiet! Straight power 11 platinum 1200w
Since bringing my projects local in February I have had impressive performance, mixtral 8x7b instruct q4km running as much as 22-25 tokens per second and mistral small q4_0 even reaching 8-15 tokens per second.
Having moved on to Flux.1 dev, I was rather impressed to be reaching near-photorealism within a day of tweaking, and moving on to image-to-video workflows, wan2.1 14b q3k i2v was doing a great job, needing nothing more than some tweaking.
Running wan i2v I started having OOM errors, which is to be expected with the workloads I am doing. Image generation is 1280x720p and i2v was 720x480p. After a few runs of i2v I decided to rearrange my office. I unplugged my PC and let it sit for an hour, the first hour it had been off in over 48 hours, during which it had probably been at more than 80% full load on the GPU (350W stock BIOS).
When I moved my computer I noticed a burning electronics smell. For those of you who don't know this smell, I envy you. I went to turn my PC back on and it did the telltale half-second (a whole second at most) flash on, then straight shut down.
Thankfully I have a 5-year warranty on the PSU and still have the receipt. Let this be a warning to other gamers who are crossing into the realm of LLMs. I game at 4K ultra and barely ever see 300W, and certainly not a sustained load at that. I can't remember the last game that drew 300W+, it happens that rarely. Even going with a higher-end German component I was not safe.
Moral of the story: I knew this would happen. I thought it would be the GPU first. I'm glad it's not. Understand that for gaming-level hardware this is abuse.
| 2025-04-21T00:16:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k40fvz/a_hump_in_the_road/
|
CybaKilla
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k40fvz
| false | null |
t3_1k40fvz
|
/r/LocalLLaMA/comments/1k40fvz/a_hump_in_the_road/
| false | false |
self
| 0 | null |
Which LLM Model Should I Use for My Tutoring Assistant?
| 5 |
Hi everyone,
I’m a university student looking to create a tutoring assistant using large language models (LLMs). I have an NVIDIA GPU with 8GB of VRAM and want to use it to upload my lecture notes and bibliographies. The goal is to generate summaries, practice questions, and explanations for tough concepts.
Given the constraints of my hardware, which LLM model would you recommend?
Thanks in advance! 🙏
| 2025-04-21T01:42:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k422l0/which_llm_model_should_i_use_for_my_tutoring/
|
Reasonable_Ad3196
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k422l0
| false | null |
t3_1k422l0
|
/r/LocalLLaMA/comments/1k422l0/which_llm_model_should_i_use_for_my_tutoring/
| false | false |
self
| 5 | null |
Which drawing do you think is better? What does your LLM output?
| 57 |
What output do you get when asking an LLM to draw a face with matplotlib? Any tips or techniques you’d recommend for better results?
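For reference, here is a hand-written baseline (not any model's output) of what a passable matplotlib face can look like, so there is something concrete to compare the LLM attempts against:

```python
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(4, 4))

ax.add_patch(plt.Circle((0, 0), 1.0, fill=False, linewidth=2))          # head outline
ax.add_patch(plt.Circle((-0.35, 0.3), 0.08, color="black"))             # left eye
ax.add_patch(plt.Circle((0.35, 0.3), 0.08, color="black"))              # right eye
ax.plot([0, -0.08, 0.08, 0], [0.15, -0.1, -0.1, 0.15], color="black")   # simple nose

# Smile: lower arc of a small circle, offset slightly below the nose
theta = np.linspace(1.15 * np.pi, 1.85 * np.pi, 50)
ax.plot(0.45 * np.cos(theta), 0.45 * np.sin(theta) - 0.15, color="black", linewidth=2)

ax.set_xlim(-1.2, 1.2)
ax.set_ylim(-1.2, 1.2)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```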
| 2025-04-21T01:43:36 |
BlaiseLabs
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k423l6
| false | null |
t3_1k423l6
|
/r/LocalLLaMA/comments/1k423l6/which_drawing_do_you_think_is_better_what_does/
| false | false | 57 |
{'enabled': True, 'images': [{'id': 'K7Ccd8HoqAmj_2kWZm1Yvro6kVpGTF0NkZIsWdEivJc', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=108&crop=smart&auto=webp&s=df6d346f49da878d589587a7170a593539ba038f', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=216&crop=smart&auto=webp&s=5a15eec32daf6f215002037996d10191d12d464b', 'width': 216}, {'height': 315, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=320&crop=smart&auto=webp&s=2cab95e2c843f760c69e94d1dbef17feceaa33dc', 'width': 320}, {'height': 631, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=640&crop=smart&auto=webp&s=a81ff88fa2df5f09153edaca6969a11f10e9caa1', 'width': 640}, {'height': 946, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=960&crop=smart&auto=webp&s=a7256da0f480d267eb1f627e02d44ba1be28b026', 'width': 960}, {'height': 1065, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?width=1080&crop=smart&auto=webp&s=0ccd0c474ba13a5aa3af4b826176f05cc061ed83', 'width': 1080}], 'source': {'height': 2346, 'url': 'https://preview.redd.it/vmsd8uf2f3we1.jpeg?auto=webp&s=a1b5ea6d4e9a3e5d698e25db5c1355e77058c309', 'width': 2379}, 'variants': {}}]}
|
||
is this performance good?
| 1 |
[removed]
| 2025-04-21T02:21:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k42t1k/is_this_performance_good/
|
Equal_Necessary9584
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k42t1k
| false | null |
t3_1k42t1k
|
/r/LocalLLaMA/comments/1k42t1k/is_this_performance_good/
| false | false |
self
| 1 | null |
Best model for a 5090
| 3 |
I managed to get lucky and purchased a 5090. The last time I played with local models was when they first released, and I ran a 7B model on my old 8GB GPU. Since upgrading I want to revisit and use the 32GB of VRAM to its full capacity. What local models do you recommend for things like coding and automation?
| 2025-04-21T02:41:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k436ou/best_model_for_a_5090/
|
Nomski88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k436ou
| false | null |
t3_1k436ou
|
/r/LocalLLaMA/comments/1k436ou/best_model_for_a_5090/
| false | false |
self
| 3 | null |
I have created a discussion group about MCP AI AGENT. If you are interested, you can join it
| 1 |
[removed]
| 2025-04-21T02:52:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k43dj8/i_have_created_a_discussion_group_about_mcp_ai/
|
Mysterious-Yam2657
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43dj8
| false | null |
t3_1k43dj8
|
/r/LocalLLaMA/comments/1k43dj8/i_have_created_a_discussion_group_about_mcp_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8M_uH4QSy-e9QG7CkhHtTnwhDHMHGB_wtv4zTUOonqM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/SyAbJg3SOrCbmn49_34VOmdHiEeflGpL0YzYgFXPPCE.jpg?width=108&crop=smart&auto=webp&s=b48302b2693d3afc357b9959bfc96aaa9cd81ba2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/SyAbJg3SOrCbmn49_34VOmdHiEeflGpL0YzYgFXPPCE.jpg?width=216&crop=smart&auto=webp&s=96034bbf6824b1916f6888e169a572173faaab1a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/SyAbJg3SOrCbmn49_34VOmdHiEeflGpL0YzYgFXPPCE.jpg?width=320&crop=smart&auto=webp&s=9ed541769e49d98507d98bcd2177ef8e3f658496', 'width': 320}], 'source': {'height': 320, 'url': 'https://external-preview.redd.it/SyAbJg3SOrCbmn49_34VOmdHiEeflGpL0YzYgFXPPCE.jpg?auto=webp&s=7bfec681fa1dd3a9cff8250bb7f62f443a1cfe11', 'width': 320}, 'variants': {}}]}
|
Why are so many companies putting so much investment into free open source AI?
| 180 |
I don't understand a lot of the big picture for these companies, but considering how many open-source options we have and how they will continue to get better, how will companies like OpenAI or Google ever make back their investment?
Personally I have never had to stay subscribed to a company because there are so many free alternatives. Not to mention, all these companies have really good free options of their best models.
Unless one starts screaming ahead of the rest in terms of performance, what is their end goal?
Not that I'm complaining, just want to know.
| 2025-04-21T02:56:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k43g7a/why_are_so_many_companies_putting_so_much/
|
Business_Respect_910
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43g7a
| false | null |
t3_1k43g7a
|
/r/LocalLLaMA/comments/1k43g7a/why_are_so_many_companies_putting_so_much/
| false | false |
self
| 180 | null |
Hunyuan open-sourced InstantCharacter - image generator with character-preserving capabilities from input image
| 152 |
InstantCharacter is an innovative, tuning-free method designed to achieve character-preserving generation from a single image
One image + text → custom poses, styles & scenes
1️⃣ First framework to balance character consistency, image quality, & open-domain flexibility/generalization
2️⃣ Compatible with Flux, delivering high-fidelity, text-controllable results
3️⃣ Comparable to industry leaders like GPT-4o in precision & adaptability
Try it yourself on:
🔗Hugging Face Demo: https://huggingface.co/spaces/InstantX/InstantCharacter
Dive Deep into InstantCharacter:
🔗Project Page: https://instantcharacter.github.io/
🔗Code: https://github.com/Tencent/InstantCharacter
🔗Paper: https://arxiv.org/abs/2504.12395
| 2025-04-21T02:58:47 |
https://www.reddit.com/gallery/1k43htm
|
ResearchCrafty1804
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43htm
| false | null |
t3_1k43htm
|
/r/LocalLLaMA/comments/1k43htm/hunyuan_opensourced_instantcharacter_image/
| false | false | 152 | null |
|
128G AMD AI Max, context size?
| 2 |
If I got a 128G AMD AI Max machine, what can I expect for a context window with a 70B model?
Is there a calculator online that gives a rough idea what you can run with different configurations?
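I don't know of a dedicated calculator, but the KV-cache math is easy to script; the numbers below are assumptions for a Llama-3-70B-style model (80 layers, 8 KV heads of dimension 128, fp16 cache, roughly 4-bit weights), not specs from the AI Max itself:

```python
# Back-of-the-envelope context budget; all architecture numbers are assumptions
# for a Llama-3-70B-style model and should be adjusted for whatever you actually run.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2                                    # fp16 KV cache; 1 for an 8-bit cache
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem   # K and V

weights_gb = 40                                       # ~70B params at ~4-bit quantization
budget_gb = 128 - weights_gb - 8                      # reserve ~8 GB for OS / runtime overhead

max_tokens = budget_gb * 1024**3 // kv_bytes_per_token
print(f"{kv_bytes_per_token / 1e6:.2f} MB per token -> roughly {max_tokens:,} tokens of context")
```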
| 2025-04-21T03:13:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k43rr0/128g_amd_ai_max_context_size/
|
MidnightProgrammer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43rr0
| false | null |
t3_1k43rr0
|
/r/LocalLLaMA/comments/1k43rr0/128g_amd_ai_max_context_size/
| false | false |
self
| 2 | null |
Character LLaMA-4
| 0 |
[https://geteai.org/](https://geteai.org/)
This runs on LLaMA-4 and automates a character system prompt.
| 2025-04-21T03:14:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k43sje/character_llama4/
|
Accurate-Biscotti609
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43sje
| false | null |
t3_1k43sje
|
/r/LocalLLaMA/comments/1k43sje/character_llama4/
| false | false |
self
| 0 | null |
Using KoboldCpp like it's 1999 (noscript mode, Internet Explorer 6)
| 175 | 2025-04-21T03:21:32 |
https://v.redd.it/8hsjp4q1w3we1
|
HadesThrowaway
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k43x1h
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8hsjp4q1w3we1/DASHPlaylist.mpd?a=1747797713%2CNDcwZjc3Yzg2ZDk0YWNkYTFkMDQ1NDY2Zjk0ZTRiN2MzOGU5YmJkNDU3YTY3NDY4NDU0ZDY3NmRiYzUyYWQ4ZQ%3D%3D&v=1&f=sd', 'duration': 90, 'fallback_url': 'https://v.redd.it/8hsjp4q1w3we1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/8hsjp4q1w3we1/HLSPlaylist.m3u8?a=1747797713%2CNDBlNmFlZGViYTVkNTg3OGM5NDc3ZGFmNDllOWZmYTkxZTZlMTFkNzMzMzk2MTI4NzE3ODYzZDAxZWE5MTYwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8hsjp4q1w3we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}}
|
t3_1k43x1h
|
/r/LocalLLaMA/comments/1k43x1h/using_koboldcpp_like_its_1999_noscript_mode/
| false | false | 175 |
{'enabled': False, 'images': [{'id': 'ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK.png?width=108&crop=smart&format=pjpg&auto=webp&s=d54ca011787a22db4b35c7cf6fc0cd1faf850bb5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK.png?width=216&crop=smart&format=pjpg&auto=webp&s=3c2533edbfe0fe36bbf05a4e8bf5a3af2f8fdeab', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK.png?width=320&crop=smart&format=pjpg&auto=webp&s=9df30e50082018362bb9565562aa44c9ada84a33', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK.png?width=640&crop=smart&format=pjpg&auto=webp&s=ee0765ec9e115b57c95b9940622bd0afa55eeac9', 'width': 640}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/ZHd1MzQzZGp3M3dlMYSg_wSm3961EYqonF0X5c18rpErhfTomdHPrQd5DrBK.png?format=pjpg&auto=webp&s=8592ca12638accef292a31418270f277bba8d5e6', 'width': 768}, 'variants': {}}]}
|
||
Knowledge graph
| 7 |
I am learning how to build knowledge graphs. My current project is related to building a fishing knowledge graph from YouTube video transcripts. I am using Neo4j to organize the triples and Cypher to query.
I'd like to run everything locally. However, my Qwen 2.5 14B Q6 cannot get the Cypher query just right. ChatGPT can do it right the first time. Obviously ChatGPT will get it right due to its size.
In knowledge graphs, is it common to use an LLM to generate the queries? I feel the 14B model doesn't have enough reasoning to generate the Cypher query.
Or can Python do this dynamically?
Or do you generate like 15 standard question templates and then use a backup method if a question falls outside of the 15?
What is the standard for building the Cypher queries?
Example of schema / relationships:
Each Strategy node connects to a Fish via USES_STRATEGY, and then has other relationships like:
:LOCATION_WHERE_CAUGHT -> (Location)
:TECHNIQUE -> (Technique)
:LURE -> (Lure)
:GEAR -> (Gear)
:SEASON -> (Season)
:BEHAVIOR -> (Behavior)
:TIP -> (Tip)
etc.
I usually want to answer natural questions like:
“How do I catch smallmouth bass?”
“Where can I find walleye?”
“What’s the best lure for white bass in the spring?"
Any advice is appreciated!
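For concreteness, here is roughly what one of those question templates could look like as a parameterized Cypher query run from Python; the relationship direction, property names, and credentials are guesses on my part, not the real schema:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Template for "How do I catch <fish>?" -- assumes (:Strategy)-[:USES_STRATEGY]->(:Fish)
# and a `name` property on each node; adjust to the actual schema.
HOW_TO_CATCH = """
MATCH (s:Strategy)-[:USES_STRATEGY]->(f:Fish {name: $fish})
OPTIONAL MATCH (s)-[:TECHNIQUE]->(t:Technique)
OPTIONAL MATCH (s)-[:LURE]->(l:Lure)
RETURN f.name AS fish,
       collect(DISTINCT t.name) AS techniques,
       collect(DISTINCT l.name) AS lures
"""

def how_to_catch(fish: str):
    # Run the fixed template with the fish name as a parameter,
    # so the LLM only has to pick the template and fill in $fish.
    with driver.session() as session:
        return [record.data() for record in session.run(HOW_TO_CATCH, fish=fish)]

print(how_to_catch("Smallmouth Bass"))
```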
| 2025-04-21T03:28:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k441f6/knowledge_graph/
|
fgoricha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k441f6
| false | null |
t3_1k441f6
|
/r/LocalLLaMA/comments/1k441f6/knowledge_graph/
| false | false |
self
| 7 | null |
best local llm to run locally
| 30 |
Hi, so having gotten myself a top-notch computer (at least for me), I wanted to get into LLMs locally and was kinda disappointed when I compared the answer quality to what I'd gotten from GPT-4.0 on OpenAI. I'm very conscious that their models were trained on hundreds of millions of dollars' worth of hardware, so obviously whatever I can run on my GPU will never match. What are some of the smartest models to run locally according to you guys? I've been messing around with LM Studio but the models seem pretty incompetent. I'd like some suggestions for the better models I can run with my hardware.
Specs:
cpu: amd 9950x3d
ram: 96gb ddr5 6000
gpu: rtx 5090
The rest I don't think is important for this.
Thanks
| 2025-04-21T03:51:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k44g1f/best_local_llm_to_run_locally/
|
Different-Put5878
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k44g1f
| false | null |
t3_1k44g1f
|
/r/LocalLLaMA/comments/1k44g1f/best_local_llm_to_run_locally/
| false | false |
self
| 30 | null |