Dataset columns:

| column | type | range / notes |
|---|---|---|
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2025-06-30 03:16:29, nullable |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k, nullable |
| name | string | length 10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k, nullable |
Is there a custom watermarking tool?
| 0 |
Hello! I'm looking for an open-source watermarking tool that works with various media types, including images, videos, and audio.
I want to create a watermark that is not easily visible, is difficult to remove, and remains intact even after modifications (similar to the one from ElevenLabs). Additionally, only I should be able to detect the watermark using a specific key (or something like it), so it won't trigger detection on typical "AI checker" websites when applied to human-generated content (it would also be nice if it didn't reveal that the content was custom-watermarked by that tool). Thanks!
| 2025-03-29T08:54:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmi56x/is_there_a_custom_watermarking_tool/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmi56x
| false | null |
t3_1jmi56x
|
/r/LocalLLaMA/comments/1jmi56x/is_there_a_custom_watermarking_tool/
| false | false |
self
| 0 | null |
Best UI/frontend for story/creative/general writing?
| 8 |
What I mean is not just prompting the LLM to do one thing and zero-shot it, but creating drafts, editing in place, writing more, expanding text, making it more verbose, paraphrasing, and so on. Basically as if you were writing, but leaving the writing to the model. I may be explaining this poorly, but imagine a code assistant in an IDE, except for creative writing instead of coding. Does something like that, or anything similar, exist?
| 2025-03-29T08:59:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmi7hp/best_uifrontend_for_storycreativegeneral_writing/
|
Tripel_Meow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmi7hp
| false | null |
t3_1jmi7hp
|
/r/LocalLLaMA/comments/1jmi7hp/best_uifrontend_for_storycreativegeneral_writing/
| false | false |
self
| 8 | null |
Looking for advice on a £5000 ($6500) AI R&D PC build
| 1 |
[removed]
| 2025-03-29T09:00:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmi7qb/looking_for_advice_on_a_5000_6500_ai_rd_pc_build/
|
ParkingImpressive168
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmi7qb
| false | null |
t3_1jmi7qb
|
/r/LocalLLaMA/comments/1jmi7qb/looking_for_advice_on_a_5000_6500_ai_rd_pc_build/
| false | false |
self
| 1 | null |
Multiple Hypothesis and possible truth about simulation theory
| 1 | 2025-03-29T09:22:01 |
https://zenodo.org/records/15103485
|
Historical_Effort497
|
zenodo.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmii27
| false | null |
t3_1jmii27
|
/r/LocalLLaMA/comments/1jmii27/multiple_hypothesis_and_possible_truth_about/
| false | false |
default
| 1 | null |
|
Mining GPUs and LLMs, an extensive test, are they any use?
| 1 |
[removed]
| 2025-03-29T09:41:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmiqle/mining_gpus_and_llms_an_extensive_test_are_they/
|
gaspoweredcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmiqle
| false | null |
t3_1jmiqle
|
/r/LocalLLaMA/comments/1jmiqle/mining_gpus_and_llms_an_extensive_test_are_they/
| false | false |
self
| 1 | null |
MMTEB/MTEB: train set for pair classification
| 1 |
[removed]
| 2025-03-29T10:14:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmj6nl/mmtebmteb_train_set_for_pair_classification/
|
S4M22
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmj6nl
| false | null |
t3_1jmj6nl
|
/r/LocalLLaMA/comments/1jmj6nl/mmtebmteb_train_set_for_pair_classification/
| false | false |
self
| 1 | null |
Recommendations for Local ASR & TTS Models For My Native Language (Turkish) for YouTube Dubbing Project?
| 1 |
[removed]
| 2025-03-29T10:34:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmjg5e/recommendations_for_local_asr_tts_models_for_my/
|
bymechul
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmjg5e
| false | null |
t3_1jmjg5e
|
/r/LocalLLaMA/comments/1jmjg5e/recommendations_for_local_asr_tts_models_for_my/
| false | false |
self
| 1 | null |
Resumes and Job Description Dataset
| 1 |
[removed]
| 2025-03-29T10:51:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmjoua/resumes_and_job_description_dataset/
|
Infamous-Witness5409
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmjoua
| false | null |
t3_1jmjoua
|
/r/LocalLLaMA/comments/1jmjoua/resumes_and_job_description_dataset/
| false | false |
self
| 1 | null |
Finally someone's making a GPU with expandable memory!
| 549 |
It's a RISC-V GPU with SO-DIMM slots, but it's *something*!
https://www.servethehome.com/bolt-graphics-zeus-the-new-gpu-architecture-with-up-to-2-25tb-of-memory-and-800gbe/2/
https://bolt.graphics/
| 2025-03-29T10:54:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmjq5h/finally_someones_making_a_gpu_with_expandable/
|
Normal-Ad-7114
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmjq5h
| false | null |
t3_1jmjq5h
|
/r/LocalLLaMA/comments/1jmjq5h/finally_someones_making_a_gpu_with_expandable/
| false | false |
self
| 549 |
{'enabled': False, 'images': [{'id': 'Fv_whrOHQYYVTeKkVVgqXJAOMl_SJbleEILB2CrXIwI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=108&crop=smart&auto=webp&s=797bc5719f04294d7b87b6bc789cb7f772160eeb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=216&crop=smart&auto=webp&s=038fadf45940e659ef157816ac3d2b565a975c09', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=320&crop=smart&auto=webp&s=5ae1d23ad50f9c3701440fe67ea7146708b377f7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=640&crop=smart&auto=webp&s=c8805a91f181be30a12c51ef21d5a507bc86c7da', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=960&crop=smart&auto=webp&s=cbf6a76a3965c9c3a522fd4d9ab66fbd5b036544', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?width=1080&crop=smart&auto=webp&s=7437303b37867f156eac902ae1d192e2ef3dcdfa', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/Nn8bI5HWxvmzCFYd4ZspU-47jJrvm7EF8JnnqFqVwZs.jpg?auto=webp&s=9f918230ba7b78e583919c0f657daefbd29ab3a2', 'width': 1200}, 'variants': {}}]}
|
Alibaba's Qwen Team Releases QVQ-Max, A Visual Reasoning Model
| 1 |
Alibaba's Qwen team released QVQ-Max, a new visual reasoning model that goes beyond basic image recognition to analyze and reason about visual information across images and videos. This is Qwen’s third model release this week! Between Omni, Qwen2.5-VL, and now QVQ-Max, the Chinese powerhouse continues to crank out capable models across the AI spectrum.
The details:
• The model is an evolution of QVQ-72B-Preview, expanding capabilities across mathematical problem-solving, code generation, and creative tasks.
• QVQ-Max features a "thinking” mechanism that can be adjusted in length to improve accuracy, showing scalable gains as thinking time increases.
• Other complex visual capabilities shown include analyzing blueprints, solving geometry problems, and providing feedback on user-submitted sketches.
• Qwen said that future plans include creating a complete visual agent capable of operating devices and playing games.
Project Page: https://qwenlm.github.io/blog/qvq-max-preview/
Demo: https://huggingface.co/spaces/Qwen/QVQ-72B-preview
Model Weights: https://huggingface.co/Qwen/QVQ-72B-Preview
| 2025-03-29T11:03:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmjuz0/alibabas_qwen_team_releases_qvqmax_a_visual/
|
EssayHealthy5075
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmjuz0
| false | null |
t3_1jmjuz0
|
/r/LocalLLaMA/comments/1jmjuz0/alibabas_qwen_team_releases_qvqmax_a_visual/
| false | false |
self
| 1 | null |
Claude Kind of interface for Local coding
| 0 |
For rapid frontend React development I like the Claude chat frontend, as it runs, debugs, and renders the output immediately. Is there a similar interface for local development? I'm assuming some sort of agentic workflow is powering that. Does anything similar exist in open source that I can use?
| 2025-03-29T11:28:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmk8iy/claude_kind_of_interface_for_local_coding/
|
Guna1260
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmk8iy
| false | null |
t3_1jmk8iy
|
/r/LocalLLaMA/comments/1jmk8iy/claude_kind_of_interface_for_local_coding/
| false | false |
self
| 0 | null |
Nemotron-49B uses 70% less KV cache compared to source Llama-70B
| 118 |
While studying how much KV cache major models use, both by formula and by running them empirically with llama.cpp where possible, I found that the Nemotron models are not only 30% smaller in model size, their KV cache is also 70% smaller. Overall, that is a 38% VRAM saving if you run at 128k context.
This is because the non-self-attention layers don't have any KV cache at all. For Nemotron-49B, 31 out of 80 layers are non-self-attention; for the 51B, it's 26 out of 80 layers.
So if you want 128k context and have 48GB VRAM, Nemotron can run at Q5_K_M at 128k with an unquantized KV cache. QwQ, on the other hand, can only run at IQ3_M due to its 32GB KV cache.
[https://www.reddit.com/r/LocalLLaMA/comments/1jl33br/qwq32b_has_the_highest_kv_cachemodel_size_ratio/](https://www.reddit.com/r/LocalLLaMA/comments/1jl33br/qwq32b_has_the_highest_kv_cachemodel_size_ratio/)
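For anyone who wants to sanity-check these numbers, here is a minimal back-of-the-envelope sketch. It assumes the usual Llama-70B attention config (80 layers, GQA with 8 KV heads, head_dim 128) and an fp16 cache, and it only accounts for the layers Nemotron drops, not any per-layer head-count changes, so treat the second number as a rough upper bound rather than the exact 70% figure above.

```python
def kv_cache_gib(n_kv_layers, n_kv_heads, head_dim, context_len, dtype_bytes=2):
    # bytes = layers * 2 (K and V) * kv_heads * head_dim * dtype_bytes * tokens
    return n_kv_layers * 2 * n_kv_heads * head_dim * dtype_bytes * context_len / 2**30

ctx = 128 * 1024
# Llama-70B-style config: all 80 layers carry KV cache -> 40.0 GiB at 128k, fp16
print(kv_cache_gib(80, 8, 128, ctx))
# Nemotron-49B per the post: only 49 of 80 layers are self-attention layers with KV cache
print(kv_cache_gib(49, 8, 128, ctx))  # ~24.5 GiB before any per-layer head reductions
```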
Other things I learned:
1. Gemma 3 is pretty heavy on KV cache when running with llama.cpp, but that is because llama.cpp doesn't implement interleaved sliding-window attention (iSWA), which can reduce the KV cache to one sixth. (HF's transformers is probably the only framework that supports iSWA?)
2. Deepseek should make smaller MLA models that fit in 24GB or 48GB VRAM. This will blow the competition out of the water for local long context use.
| 2025-03-29T12:21:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jml2w8/nemotron49b_uses_70_less_kv_cache_compare_to/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jml2w8
| false | null |
t3_1jml2w8
|
/r/LocalLLaMA/comments/1jml2w8/nemotron49b_uses_70_less_kv_cache_compare_to/
| false | false |
self
| 118 | null |
Help
| 0 |
I need apps like ChatterUI to run LLMs. I'm using it now, but I want to try another one.
Also, how can I make a specific model talk to itself without me messaging it? I mean, give it a specific topic and have it discuss it with itself. Is there an application in which I can do that? And if there isn't, how can I do it using llama.cpp?
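If you end up doing it with llama.cpp directly, one possible approach is a minimal sketch like the one below: start llama.cpp's `llama-server` (which exposes an OpenAI-compatible API, assumed here on `http://localhost:8080`) and alternate which "speaker" the model is asked to continue. The topic string, turn count, and endpoint are all placeholders.

```python
from openai import OpenAI

# Assumes llama.cpp's server is running, e.g.: llama-server -m your-model.gguf --port 8080
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

topic = "whether small local models can replace cloud APIs"
transcript = [f"Speaker A: Let's discuss {topic}."]

for turn in range(6):
    speaker = "Speaker B" if turn % 2 == 0 else "Speaker A"
    reply = client.chat.completions.create(
        model="local",  # llama-server uses whatever model it was started with
        messages=[
            {"role": "system", "content": f"You are {speaker} in a discussion. Reply with one short paragraph."},
            {"role": "user", "content": "\n".join(transcript)},
        ],
    )
    transcript.append(f"{speaker}: {reply.choices[0].message.content.strip()}")

print("\n\n".join(transcript))
```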
| 2025-03-29T12:27:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jml6a3/help/
|
Fun-Property-5964
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jml6a3
| false | null |
t3_1jml6a3
|
/r/LocalLLaMA/comments/1jml6a3/help/
| false | false |
self
| 0 | null |
EduVoice AI powered
| 1 | 2025-03-29T12:43:13 |
https://www.reddit.com/gallery/1jmlg4b
|
MeasurementMinute388
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmlg4b
| false | null |
t3_1jmlg4b
|
/r/LocalLLaMA/comments/1jmlg4b/eduvoice_ai_powered/
| false | false | 1 | null |
||
I really don't know if this is the right forum for this, but on LMSYS the "spider" model writes really well (in my opinion). Anyone know what model it might be?
| 1 |
[removed]
| 2025-03-29T12:49:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmlkaz/i_really_dont_know_if_this_is_the_right_forum_for/
|
Advanced_Royal_3741
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmlkaz
| false | null |
t3_1jmlkaz
|
/r/LocalLLaMA/comments/1jmlkaz/i_really_dont_know_if_this_is_the_right_forum_for/
| false | false |
self
| 1 | null |
Qwen2.5-VL-7B-GGUF cannot be loaded with the mmproj-f16 CLIP, but works standalone without mmproj.
| 1 |
[removed]
| 2025-03-29T12:51:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmll70/qwen25vl7bgguf_cannot_be_loaded_with_mmprojf16/
|
Remarkable-Pea645
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmll70
| false | null |
t3_1jmll70
|
/r/LocalLLaMA/comments/1jmll70/qwen25vl7bgguf_cannot_be_loaded_with_mmprojf16/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '5xLMpS7wdaUAGnA0dMh3zsMGueghvOTU0gc9D12kPqY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=108&crop=smart&auto=webp&s=bedd4e6eb979873f04fba27b40e240bb022e99c9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=216&crop=smart&auto=webp&s=7adbc63e15fc5a58195048e818f116caf6502818', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=320&crop=smart&auto=webp&s=e16a822bbf906c766c1f96134990cfa31a9acbec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=640&crop=smart&auto=webp&s=119ac3601ba520a142d53a6b3f4e932c56d359ba', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=960&crop=smart&auto=webp&s=7305de46a61515c52fadc58c5b1de215a0f20dd9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?width=1080&crop=smart&auto=webp&s=f1b970f3f1ba2a6899c31b6cca009bb99707ce03', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aurn7MwsJHtKHuCf0rQfZPHaKf76PM905om7c7WuIf8.jpg?auto=webp&s=e50f0cf2cbde0cd4e297ef326e27ef094cab4632', 'width': 1200}, 'variants': {}}]}
|
Why is Falcon3-7b so rarely used (or cited) as a model?
| 57 |
It's a model that adheres well to prompting, its knowledge and responses are relevant, and it supports system/user/assistant prompts very well.
As a "small" model, I use it professionally in conjunction with the RAG system for chat.
I'd like your opinion on this model as well as the alternatives you use (<8b), Thank you
| 2025-03-29T12:56:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmlopu/why_is_falcon37b_so_rarely_used_or_cited_as_a/
|
Prudence-0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmlopu
| false | null |
t3_1jmlopu
|
/r/LocalLLaMA/comments/1jmlopu/why_is_falcon37b_so_rarely_used_or_cited_as_a/
| false | false |
self
| 57 | null |
Local model to HELP writing
| 1 |
[removed]
| 2025-03-29T13:12:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmlz9y/local_model_to_help_writing/
|
MrMorgan412
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmlz9y
| false | null |
t3_1jmlz9y
|
/r/LocalLLaMA/comments/1jmlz9y/local_model_to_help_writing/
| false | false |
self
| 1 | null |
Can't install Llama 3.2: 1B & 3B on WSL2
| 1 |
[removed]
| 2025-03-29T13:39:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmmhii/cant_install_llama_32_1b_3b_on_wsl2/
|
nikolikopikoziko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmmhii
| false | null |
t3_1jmmhii
|
/r/LocalLLaMA/comments/1jmmhii/cant_install_llama_32_1b_3b_on_wsl2/
| false | false |
self
| 1 | null |
RTX A6000 48GB for Local LLMs?
| 1 |
[removed]
| 2025-03-29T13:47:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmmnfn/rtx_a6000_48gb_for_local_llms/
|
Ok_Order7940
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmmnfn
| false | null |
t3_1jmmnfn
|
/r/LocalLLaMA/comments/1jmmnfn/rtx_a6000_48gb_for_local_llms/
| false | false |
self
| 1 | null |
Open Router models
| 0 |
I am a huge fan of OpenRouter - love their OpenAI-compatible API and, in general, the huge model library they have.
I've been trying to test vision models, and while the text versions work exactly like they would if I hosted them locally (or with the original API providers, like Mistral etc.), the vision part for some reason does not. Here is an example:
Anyone have any idea what I am doing wrong?
https://preview.redd.it/c6vvxvr6vmre1.png?width=1842&format=png&auto=webp&s=86e02d34fc13390aaff445ce21ef4f472f0f6c07
https://preview.redd.it/2aedxf4avmre1.png?width=639&format=png&auto=webp&s=5ecf59c1325bef8a6e0cc2d2761b838cc66a8efd
Same question sent on hugging face:
https://preview.redd.it/1t5zxyacvmre1.png?width=1509&format=png&auto=webp&s=329c15af377af3b6c63445f1901800738b664210
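For reference, here is a minimal sketch of the request shape that OpenRouter and most OpenAI-compatible providers expect for vision models: the image goes inside a content list next to the text part, either as a plain URL or a base64 data URL. The model slug, file name, and API key below are placeholders, not a claim about what the screenshots above used.

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")  # placeholder key

# Encode a local image as a data URL (a plain https:// image URL also works)
with open("screenshot.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="vendor/some-vision-model",  # placeholder: substitute a vision-capable model slug
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this screenshot show?"},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)
print(resp.choices[0].message.content)
```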
| 2025-03-29T13:49:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmmokm/open_router_models/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmmokm
| false | null |
t3_1jmmokm
|
/r/LocalLLaMA/comments/1jmmokm/open_router_models/
| false | false | 0 | null |
|
Best way to run Local LLMs on a single NVIDIA GPU?
| 0 |
I'm sorry if this has been asked a gorillion times before, but I'm genuinely so confused right now. I started running LLMs using LM Studio and was generally happy with it, but then I saw a comparison between it and Ollama that showed Ollama being far superior in performance. I switched to Ollama and then saw a post on here where people were trashing it for terrible performance and recommending stuff like vLLM/SGLang/Aphrodite Engine. I tried digging into those, but the only benchmarks I could find were for multiple GPUs.
I have a 16GB RTX 3080 Ti, and my question is: what is the absolute best way to run models on that single GPU and squeeze out as much performance from it as possible? I also want to connect my Mac to my PC to interact with the models, so API access would be nice.
| 2025-03-29T14:10:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmn40y/best_way_to_run_local_llms_on_a_single_nvidia_gpu/
|
zzsmkr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmn40y
| false | null |
t3_1jmn40y
|
/r/LocalLLaMA/comments/1jmn40y/best_way_to_run_local_llms_on_a_single_nvidia_gpu/
| false | false |
self
| 0 | null |
Looking for advice on a £5000 ($6500) AI R&D PC build
| 1 |
[removed]
| 2025-03-29T14:48:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmnwie/looking_for_advice_on_a_5000_6500_ai_rd_pc_build/
|
Distinct_Bobcat_197
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmnwie
| false | null |
t3_1jmnwie
|
/r/LocalLLaMA/comments/1jmnwie/looking_for_advice_on_a_5000_6500_ai_rd_pc_build/
| false | false |
self
| 1 | null |
Cloud GPU suggestions for a privacy-conscious network engineer?
| 4 |
Been playing around with some local LLMs on my 1660 Super, but I need to step up my game for some real work while keeping my data private (because, you know, telling Claude about our network vulnerabilities probably isn't in the company handbook 💔).
I'm looking to rent a cloud GPU to run models like Gemma 3, DeepSeek R1, and DeepSeek V3 for:
- Generating network config files
- Coding assistance
- Summarizing internal docs
Budget: $100-200/month (planning to schedule on/off to save costs)
Questions:
1. Which cloud GPU providers have worked best for you?
2. Should I focus on specific specs beyond VRAM? (TFLOPs, CPU, etc.)
3. Any gotchas I should watch out for?
My poor 1660 Super is currently making sad GPU noises whenever I ask it to do anything beyond "hello world" with these models. Help a network engineer join the local LLM revolution!
Thanks in advance! 🙏
| 2025-03-29T15:06:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmoak6/cloud_gpu_suggestions_for_a_privacyconscious/
|
dathtd119
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmoak6
| false | null |
t3_1jmoak6
|
/r/LocalLLaMA/comments/1jmoak6/cloud_gpu_suggestions_for_a_privacyconscious/
| false | false |
self
| 4 | null |
Can someone ELI5 why there is not a readily available open source local tts program with high quality natural voices + gui interface (or ideally tell me there actually is and I missed it)?
| 1 |
[removed]
| 2025-03-29T15:12:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmoff7/can_someone_eli5_why_there_is_not_a_readily/
|
Visual-Custard821
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmoff7
| false | null |
t3_1jmoff7
|
/r/LocalLLaMA/comments/1jmoff7/can_someone_eli5_why_there_is_not_a_readily/
| false | false |
self
| 1 | null |
New to Local LLMs – What Can I Run on My Device? (4GB VRAM)
| 1 |
[removed]
| 2025-03-29T15:16:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmoi48/new_to_local_llms_what_can_i_run_on_my_device_4gb/
|
WhereIsMyEyes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmoi48
| false | null |
t3_1jmoi48
|
/r/LocalLLaMA/comments/1jmoi48/new_to_local_llms_what_can_i_run_on_my_device_4gb/
| false | false |
self
| 1 | null |
What is attention in terms of LLMs?? 😅
| 1 |
[removed]
| 2025-03-29T15:16:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmoijl/what_is_attention_in_terms_of_llms/
|
Other_Hand_slap
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmoijl
| false | null |
t3_1jmoijl
|
/r/LocalLLaMA/comments/1jmoijl/what_is_attention_in_terms_of_llms/
| false | false |
self
| 1 | null |
Getting started. Thinking about GPT4ALL
| 0 |
MacBook Pro user, 24GB RAM. Been playing with LM Studio but can't figure out how to get the web interface to work, nor am I bright enough to figure out how to interact with its server to start tweaking things. Installing the LLMs was easy; they work with the built-in chat tool. Is GPT4All a better option? I'm an ex-IT guy, but that was a long time ago. I used to work with AS 3.0, but Flash has been dead a long time. Any suggestions are welcome, particularly a good "local LLM for dummies" type starter guide.
| 2025-03-29T15:47:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmp6ty/getting_started_thinking_about_gpt4all/
|
Zarnong
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmp6ty
| false | null |
t3_1jmp6ty
|
/r/LocalLLaMA/comments/1jmp6ty/getting_started_thinking_about_gpt4all/
| false | false |
self
| 0 | null |
SOTA 3d?
| 91 | 2025-03-29T16:03:22 |
https://huggingface.co/spaces/VAST-AI/TripoSG
|
Charuru
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmpjeu
| false | null |
t3_1jmpjeu
|
/r/LocalLLaMA/comments/1jmpjeu/sota_3d/
| false | false | 91 |
{'enabled': False, 'images': [{'id': 'i9qNvInIjxdvRbc_awSTD6MAUmvCJ560jSLq9OBAz0I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=108&crop=smart&auto=webp&s=a2629160305f79a8c7637bfe59db37aecaff178e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=216&crop=smart&auto=webp&s=842c41ccafd6e9491c4bccca48cba7d7297797d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=320&crop=smart&auto=webp&s=25fc0fd756de070057713ad8b7a4a02bb0104652', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=640&crop=smart&auto=webp&s=16974cbb9664a3f501eea6ca32995ed70308e190', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=960&crop=smart&auto=webp&s=f9228a41dc90d07c59db64f9a9162dd05f43be1b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?width=1080&crop=smart&auto=webp&s=d5261c42fa71197e77019eb160e431b310936886', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ErYaOL2J__P1a1nSZoN5VkFh-_pWwoLL-ogamC2v0BM.jpg?auto=webp&s=7a0d1106c52aea0bcf45cd3f35f6c590e735f41e', 'width': 1200}, 'variants': {}}]}
|
||
Is there a desktop app to connect Ollama with?
| 0 |
Hey brothers and sisters, I don't mind Open WebUI, but I keep closing my browser and I'd rather have it as a separate app.
I am running ollama as a docker on my server.
| 2025-03-29T16:04:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmpjxf/is_there_an_desktop_app_to_connect_ollama_with/
|
Timziito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmpjxf
| false | null |
t3_1jmpjxf
|
/r/LocalLLaMA/comments/1jmpjxf/is_there_an_desktop_app_to_connect_ollama_with/
| false | false |
self
| 0 | null |
Fastest LLM platform for Qwen/Deepseek/LLama?
| 0 |
What's the current/fastest LLM platform for 3rd party hosted LLMs?
Ideally that supports structured outputs.
I've been really relying on structured outputs and I sort of form my code into RPCs now.
Works out really well.
The problem I'm having is that OpenAI has been just imploding lately and their SLA is pathetic.
They're down like 20-30% of the time.
Also, what libraries are you using to keep your code portable?
OpenRouter?
I want the same code to be able to target multiple LLMs and multiple providers.
Thanks in advance! You guys rock!
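On the portability point, one pattern that works reasonably well (sketched below, with placeholder provider URLs, model IDs, and env var names) is to keep the plain OpenAI SDK and swap only `base_url`/`model` per provider. JSON mode via `response_format={"type": "json_object"}` is passed through by providers and local servers that support it, which keeps the structured-output code identical across backends.

```python
import json
import os
from openai import OpenAI

# Placeholder provider table: base_url + model per backend, everything else stays identical.
PROVIDERS = {
    "openrouter": ("https://openrouter.ai/api/v1", "deepseek/deepseek-chat"),
    "local-vllm": ("http://localhost:8000/v1", "Qwen/Qwen2.5-72B-Instruct"),
}
base_url, model = PROVIDERS[os.environ.get("LLM_PROVIDER", "local-vllm")]
client = OpenAI(base_url=base_url, api_key=os.environ.get("LLM_API_KEY", "not-needed"))

resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": 'Return a JSON object {"city": ..., "population": ...} for Tokyo.'}],
    response_format={"type": "json_object"},  # only honored by backends that support JSON mode
)
print(json.loads(resp.choices[0].message.content))
```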
| 2025-03-29T16:13:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmprik/fastest_llm_platform_for_qwendeepseekllama/
|
brainhack3r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmprik
| false | null |
t3_1jmprik
|
/r/LocalLLaMA/comments/1jmprik/fastest_llm_platform_for_qwendeepseekllama/
| false | false |
self
| 0 | null |
What is the performance difference between 9070XT and 5070Ti when running LLMs?
| 0 |
ChatGPT says 5070Ti will be 2x faster
>The **5070 Ti can be 2× faster** than the 9070 XT in practical use when using CUDA-optimized backends like exllama.
| GPU (backend) | Estimated throughput |
|---|---|
| **RX 9070 XT** (llama.cpp, ROCm backend) | ~25–35 tokens/sec |
| **RTX 5070 Ti** (exllama or vLLM) | ~50–70 tokens/sec |

Is it true that the performance gap is that big? I just ordered a 9070 XT but am reconsidering.
| 2025-03-29T16:25:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmq0xe/what_is_the_performance_difference_between_9070xt/
|
btpcn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmq0xe
| false | null |
t3_1jmq0xe
|
/r/LocalLLaMA/comments/1jmq0xe/what_is_the_performance_difference_between_9070xt/
| false | false |
self
| 0 | null |
What model would you recommend for Image Generation with an M4 Mac Mini (16GB)
| 1 |
Basically the title; I'm really trying to get something done.
I tried FLUX schnell but didn't realize it's 30GB+ non-quantized (my Mac has only 256GB of storage).
If you wouldn't recommend any local model, would you recommend an API?
| 2025-03-29T16:29:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmq3sl/what_model_would_you_recommend_for_image/
|
MKU64
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmq3sl
| false | null |
t3_1jmq3sl
|
/r/LocalLLaMA/comments/1jmq3sl/what_model_would_you_recommend_for_image/
| false | false |
self
| 1 | null |
I want to run AI models locally on my computer without policy restrictions
| 0 |
I need:
1. A general-purpose AI that accurately understands human prompts and processes tasks efficiently.
2. A powerful code-generation model capable of writing unrestricted code, creating files, and integrating seamlessly with any application or code editor.
Both models should work in harmony, enabling AI-driven automation on my PC for coding and general tasks.
Can someone help set this up? Step-by-Step?
| 2025-03-29T16:30:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmq4q2/i_want_to_run_ai_models_locally_on_my_computer/
|
WittyVt31
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmq4q2
| false | null |
t3_1jmq4q2
|
/r/LocalLLaMA/comments/1jmq4q2/i_want_to_run_ai_models_locally_on_my_computer/
| false | false |
self
| 0 | null |
UPDATE: Tool Calling with DeepSeek-R1 on Amazon Bedrock!
| 0 |
I've updated my package repo with a new tutorial for tool calling support for DeepSeek-R1 671B on Amazon Bedrock via LangChain's ChatBedrockConverse class (successor to LangChain's ChatBedrock class).
Check out the updates here:
-> Python: https://github.com/leockl/tool-ahead-of-time (please update the package if you had previously installed it).
-> JavaScript/TypeScript: This was not implemented as there are currently some stability issues with Amazon Bedrock's DeepSeek-R1 API. See my GitHub repo for more details: https://github.com/leockl/tool-ahead-of-time-ts
With several new model releases in the past week or so, DeepSeek-R1 is still the **cheapest** reasoning LLM, on par with or just slightly below OpenAI's o1 and o3-mini (high) in performance.
If your platform or app is not offering your customers the option to use DeepSeek-R1, then you are not doing right by your customers, because you are not helping them reduce cost!
BONUS: The newly released DeepSeek V3-0324 model is now also the **cheapest** best-performing non-reasoning LLM. **Tip:** DeepSeek V3-0324 already has tool calling support provided by the DeepSeek team via LangChain's ChatOpenAI class.
| 2025-03-29T16:42:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmqe5q/update_tool_calling_with_deepseekr1_on_amazon/
|
lc19-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmqe5q
| false | null |
t3_1jmqe5q
|
/r/LocalLLaMA/comments/1jmqe5q/update_tool_calling_with_deepseekr1_on_amazon/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '0-rllnIf3Q61q92ORgUszLaibrIETgFTRKZIzJT01cU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=108&crop=smart&auto=webp&s=43136d93a9cd7cb9e47f64e4d6c3d3a9a513955d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=216&crop=smart&auto=webp&s=3fa613467d8392ca369834d02df1ca0a16678da2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=320&crop=smart&auto=webp&s=2bb0da43bac53b43537dfa449935d7d78dbdd877', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=640&crop=smart&auto=webp&s=234e2bac7cf3005722f281ae0c25db740b952c4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=960&crop=smart&auto=webp&s=70603f3093918f57e3798a5e88d075d640d22a7b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?width=1080&crop=smart&auto=webp&s=63ab70f00abcaab9835b24dfc6f4dfc57574c223', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K7yMJgnEHoghFBxV9wBO0a0pjn5WwWKaEFsSFa7t9cI.jpg?auto=webp&s=9a8ec2570c9f4e66de3b1d390bde1451fa544ca5', 'width': 1200}, 'variants': {}}]}
|
I Made a simple online tokenizer for any Hugging Face model
| 43 |
Hey everyone,
When I'm experimenting with different open models from Hugging Face, I often want to know how many tokens my prompts or texts actually are *for that specific model's tokenizer*. It felt clunky to do this locally every time, and online tools seemed non-existent apart from OpenAI's tokenizer.
So I built a little web tool to help with this: **Tokiwi** -> [https://tokiwi.dev](https://tokiwi.dev)
You just paste text and give it any HF repo ID (like `google/gemma-3-27b-it`, `deepseek-ai/DeepSeek-V3-0324`, your own fine-tune if it's public, etc.) and it shows the token count and the tokens themselves. It can also handle gated models if you give it an HF access token.
Wondering if this might be useful to others here. Let me know what you think! Any feedback is appreciated.
Thank you for your time!
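For anyone who would rather script this than use a web page, the rough local equivalent is a couple of lines with `transformers` (sketch below; the repo ID is one of the examples above and assumes its tokenizer loads with the stock `AutoTokenizer`):

```python
from transformers import AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-V3-0324"  # any HF repo ID; pass token="hf_..." for gated repos
text = "How many tokens is this prompt for this particular model?"

tok = AutoTokenizer.from_pretrained(repo_id)
ids = tok.encode(text)  # includes any special tokens the tokenizer prepends
print(len(ids))
print(tok.convert_ids_to_tokens(ids))
```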
| 2025-03-29T16:50:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmqlii/i_made_a_simple_online_tokenizer_for_any_hugging/
|
Tweed_Beetle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmqlii
| false | null |
t3_1jmqlii
|
/r/LocalLLaMA/comments/1jmqlii/i_made_a_simple_online_tokenizer_for_any_hugging/
| false | false |
self
| 43 | null |
First time testing: Qwen2.5:72b -> Ollama Mac + open-webUI -> M3 Ultra 512 gb
| 173 |
First time using it. I tested it with qwen2.5:72b and added the results of the first run to the gallery. I would appreciate any comments that could help me improve it. I also want to thank the community for its patience in answering some doubts I had before buying this machine. I'm just beginning.
Doggo is just a plus!
| 2025-03-29T16:57:54 |
https://www.reddit.com/gallery/1jmqqxz
|
Turbulent_Pin7635
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmqqxz
| false | null |
t3_1jmqqxz
|
/r/LocalLLaMA/comments/1jmqqxz/first_time_testing_qwen2572b_ollama_mac_openwebui/
| false | false | 173 | null |
|
Benchmark for no comment editing
| 0 |
There should be a benchmark that consists of code that explicitly says not to edit the comments.
I know this sounds silly, but in production some of these models are too embarrassing to use, because using them means your PRs are full of stupid little comment updates even when only one little tweak is being made.
Why is this the trend with the SOTA models?
| 2025-03-29T17:08:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmqzu2/benchmark_for_no_comment_editing/
|
medialoungeguy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmqzu2
| false | null |
t3_1jmqzu2
|
/r/LocalLLaMA/comments/1jmqzu2/benchmark_for_no_comment_editing/
| false | false |
self
| 0 | null |
My local LLM isn't unrestricted or unfiltered
| 1 |
[removed]
| 2025-03-29T17:12:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmr387/my_local_llm_isnt_unrestricted_or_unfiltered/
|
Opening-Grocery-1048
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmr387
| false | null |
t3_1jmr387
|
/r/LocalLLaMA/comments/1jmr387/my_local_llm_isnt_unrestricted_or_unfiltered/
| false | false |
nsfw
| 1 | null |
How The ChatGPT Voice and Video Mode Perform So Well
| 3 |
I’ve seen this reference in the group a couple of times but honestly I’m surprised it hasn’t been brought up more.
I use ChatGPT’s enhanced voice mode almost daily. I hardly listen to music anymore when I drive (unless Bluey Soundtrack with my son). Instead, I use voice mode as my brainstorming/concept understanding sessions. I will ask about concepts I am struggling to understand or ask about how certain concepts are implemented.
Example:
Kyle,
What are some industry standard tools and techniques for secure development lifecycle management?
What are the best ways to secure applications using CloudFlare?
What database platforms are best suited for real time data vs historical data and list some paid vs open source?
This has been incredibly valuable for my own projects and for just understanding things "out of band". It's another way I can casually digest information without having to read; it's just a casual conversation with Kyle, who is much smarter than I am. But I want to use this outside OpenAI.
I started asking myself: how does ChatGPT seem to stream my long-winded questions and conversations, yet Kyle replies almost immediately as if there were no processing or inference time? Once I looked it up it made complete sense, and I thought it was such a genius idea!
OpenAI is using WebRTC to stream your audio and video content to and from the LLM. This is why you can interrupt him mid-sentence and he will notice right away, why the responses are so immediate, and why there appears to be no processing/inference time.
Many of you may have already known this, but I was oblivious to it and found it such a cool way to use WebRTC which is typically reserved for things like video conferencing (zoom, Teams, etc). They use a company called LiveKit which has some amazing resources and a great SDK to get up and running on a free account fast. Check them out, they are doing a lot of really cool stuff!
Anyways, just sharing in case anyone was oblivious as I was.
https://livekit.io
https://github.com/livekit
| 2025-03-29T17:22:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmraxr/how_the_chatgpt_voice_and_video_mode_perform_so/
|
Cipher_Lock_20
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmraxr
| false | null |
t3_1jmraxr
|
/r/LocalLLaMA/comments/1jmraxr/how_the_chatgpt_voice_and_video_mode_perform_so/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'SxeH_piQLluOjoc9Boa_eidtk_S9QgrJXlC_EG09kXY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=108&crop=smart&auto=webp&s=37987e5a779cf94dcf00f1b115403fd288dda157', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=216&crop=smart&auto=webp&s=d422b78709805c18b0e00db173347ab2ed7adba5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=320&crop=smart&auto=webp&s=b13b53b5a33ee45df10941b0ddaac967505d5f2d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=640&crop=smart&auto=webp&s=07b7206b412732ccdaca84b25f4d9494be886c3a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=960&crop=smart&auto=webp&s=66290e26f0679935ea3f2116d55640c05ab59b6e', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?width=1080&crop=smart&auto=webp&s=8d095ae403c0b86b6eb85bd36b77834db20cd8a3', 'width': 1080}], 'source': {'height': 676, 'url': 'https://external-preview.redd.it/NoqzPRaoPxZ4uMlG1riAoS2Er8oJu-21abeqGf_ZQwQ.jpg?auto=webp&s=0b32d908471745d87f6f3f590be403df6234acca', 'width': 1200}, 'variants': {}}]}
|
How do you run larger models on GPU services with a chat interface?
| 1 |
[removed]
| 2025-03-29T17:28:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmrge2/how_do_you_run_larger_models_on_gpu_services_with/
|
durian34543336
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmrge2
| false | null |
t3_1jmrge2
|
/r/LocalLLaMA/comments/1jmrge2/how_do_you_run_larger_models_on_gpu_services_with/
| false | false |
self
| 1 | null |
[Build] A Beautiful Contradiction
| 38 |
Sharing my absolute contradiction of a local LLM rig - I found a 2019 Mac Pro outer shell for sale on eBay for $250 and wanted room to upsize my ITX build so I said fuck it and thus, a monstrosity was born.
Specs in the comments, hate welcomed 🙏
| 2025-03-29T17:46:01 |
https://v.redd.it/irhtg3os1ore1
|
taylorwilsdon
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmruim
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/irhtg3os1ore1/DASHPlaylist.mpd?a=1745862373%2CMDE1MjgzODE3YjVkZDA2NTc4YmVmZjNiOWVmZDAzNDRiYjFhMWY1OTI1Y2Q5OWU4YzY5ODc4MjkwMTQ2N2E0ZA%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/irhtg3os1ore1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/irhtg3os1ore1/HLSPlaylist.m3u8?a=1745862373%2CNTZmMzEwMzkxMGFkNTYzMjliOTBmY2Y0MjhiODg2ZjY5ZTk4NzRjYzQ4OGI0OGZhYjI0YWVhYzIxOWVkNDkwNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/irhtg3os1ore1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1jmruim
|
/r/LocalLLaMA/comments/1jmruim/build_a_beautiful_contradiction/
| false | false | 38 |
{'enabled': False, 'images': [{'id': 'cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=108&crop=smart&format=pjpg&auto=webp&s=5f8307423d37d332b88da4c61f7bb8f433bdf0a1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=216&crop=smart&format=pjpg&auto=webp&s=7dddbc741f916cff23cca318fbd7945dc80e0859', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=320&crop=smart&format=pjpg&auto=webp&s=7cb9bcbe13617b60315e13687aafe5183f280703', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=640&crop=smart&format=pjpg&auto=webp&s=897ced71c7af7d99d0f269d39319c294a4a0c51a', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=960&crop=smart&format=pjpg&auto=webp&s=5d9aa18ae82dfa0d9cae076660511bb7e3ed8b10', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?width=1080&crop=smart&format=pjpg&auto=webp&s=135d8dc5aa9f1b34ef84556b45bd9b685f9be5ef', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/cjdycmFoa3Mxb3JlMXAQIb2sMeAVglK21tTmFYitWWLXfsVRBH8Hkw8Jz_5k.png?format=pjpg&auto=webp&s=5e1ffc9ea0448bfecb9509444f9d88f7d7cb999b', 'width': 1080}, 'variants': {}}]}
|
|
Any non reasoning 32b model comparable to QWQ?
| 3 |
If there is any good one I’d like to know. Thanks
| 2025-03-29T17:48:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmrwmc/any_non_reasoning_32b_model_comparable_to_qwq/
|
No_Expert1801
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmrwmc
| false | null |
t3_1jmrwmc
|
/r/LocalLLaMA/comments/1jmrwmc/any_non_reasoning_32b_model_comparable_to_qwq/
| false | false |
self
| 3 | null |
AMD v340
| 1 |
[removed]
| 2025-03-29T17:59:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jms5j8/amd_v340/
|
Ok_Individual_2717
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jms5j8
| false | null |
t3_1jms5j8
|
/r/LocalLLaMA/comments/1jms5j8/amd_v340/
| false | false |
self
| 1 | null |
4x3090
| 488 |
Is the only benefit of multiple GPUs concurrency of requests? I have 4x3090 but still seem limited to small models because the model needs to fit in 24GB VRAM.
AMD threadripper pro 5965wx 128 PCIe lanes
ASUS ws pro wrx80
256G ddr4 3200 8 channels
Primary PSU Corsair i1600 watt
Secondary PSU 750watt
4 gigabyte 3090 turbos
Phanteks Enthoo Pro II case
Noctua industrial fans
Artic cpu cooler
I am using vLLM with tensor parallelism of 4.
I see all 4 cards loaded up and utilized evenly, but it doesn't seem any faster than 2 GPUs.
Currently using Qwen/Qwen2.5-14B-Instruct-AWQ with good success, paired with Cline.
Will an NVLink bridge help?
How can I run larger models?
14B seems really dumb compared to Anthropic.
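As a sketch of what tensor parallelism is supposed to buy you beyond request concurrency: with `tensor_parallel_size=4`, vLLM shards the weights and KV cache across the four cards, so the limit is roughly the combined ~96 GB rather than 24 GB per card, which is what lets a much larger model than 14B load. The model name and context length below are illustrative (a 4-bit AWQ 70B-class quant is roughly 40 GB of weights), not a specific recommendation.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-72B-Instruct-AWQ",  # illustrative ~70B 4-bit quant
    tensor_parallel_size=4,                  # shard weights + KV cache across the 4x3090s
    max_model_len=16384,                     # keep the sharded KV cache inside the remaining VRAM
)

outputs = llm.generate(
    ["Explain in one paragraph what tensor parallelism does."],
    SamplingParams(max_tokens=200),
)
print(outputs[0].outputs[0].text)
```

As for "no faster than 2 GPUs": that is expected to a degree, since over PCIe the all-reduce traffic between shards eats much of the per-token gain, so the bigger win from four cards tends to be capacity and batch throughput rather than single-stream latency.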
| 2025-03-29T19:02:48 |
zetan2600
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmtkgo
| false | null |
t3_1jmtkgo
|
/r/LocalLLaMA/comments/1jmtkgo/4x3090/
| false | false | 488 |
{'enabled': True, 'images': [{'id': 'NeVhoptlAr9wWi7iM4ScvRGRxTT5DPjcxMjnyVg3xzA', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=108&crop=smart&auto=webp&s=d668b1d70c33e1a91ac446d32c4083f5d6853e6c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=216&crop=smart&auto=webp&s=68515540bb37d018d380a34b8919bbf2d320cc4b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=320&crop=smart&auto=webp&s=f37594b10bdde80d102bf61d7ea1420bdc26f02f', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=640&crop=smart&auto=webp&s=1eaa2ef7723a30f4134fa44b42f76a17aa5ba357', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=960&crop=smart&auto=webp&s=87389446342ceee4137477fcc0a7d649e40d1e20', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?width=1080&crop=smart&auto=webp&s=13735d91006f5ff7580de50616d4c8537d14c303', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/zi8ghi2ifore1.jpeg?auto=webp&s=5b1a07aaa8a988d9c3ac90e3f128553591a24e73', 'width': 4096}, 'variants': {}}]}
|
||
Seen a lot of setups but I had to laugh at this one. Price isn't terrible but with how it looks to be maintained I'd be worried about springing a leak.
| 179 | 2025-03-29T19:13:59 |
sleepy_roger
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmttah
| false | null |
t3_1jmttah
|
/r/LocalLLaMA/comments/1jmttah/seen_a_lot_of_setups_but_i_had_to_laugh_at_this/
| false | false |
nsfw
| 179 |
{'enabled': True, 'images': [{'id': 'xf_wvd3fZnR6GoL75_f6rR5nRfvDl44esijQ9rcLzYs', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=108&crop=smart&auto=webp&s=27d01845648cac2d3a837d21178eef6ab5faebdc', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=216&crop=smart&auto=webp&s=af483e58ff3cf0b25f9d1ab76b36d4fce116a7c5', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=320&crop=smart&auto=webp&s=8d4f63cbd7ad4825e373e1de8257c83f9e10dd1b', 'width': 320}, {'height': 552, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=640&crop=smart&auto=webp&s=6b3c1341e0fe9f4aa4d4425231c75b716c17127e', 'width': 640}, {'height': 828, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=960&crop=smart&auto=webp&s=a6e1300797d0dd2f62ae2c8b2dc574e41b4c3b13', 'width': 960}, {'height': 932, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=1080&crop=smart&auto=webp&s=01d5f7f83fe130793aeffaab0d7e3d08e1f893e1', 'width': 1080}], 'source': {'height': 1361, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?auto=webp&s=7c68934a938f80cd70f333ef08fe2934b73ed4fe', 'width': 1577}, 'variants': {'nsfw': {'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d0c78b280cccb63811ae86da6532c1fd0c97ee9a', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=249fea738861426f74aa3f9ff0bd46fcfa4362f9', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=92fac2260ddf549422ed81e142acab78b2c97234', 'width': 320}, {'height': 552, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a6fb39e9a44329be0e23e75c2d60c0bfd7772827', 'width': 640}, {'height': 828, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=5fc03588a4bd69adf4b4d0ee4b0c8b6f30f777ca', 'width': 960}, {'height': 932, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=d3061e460043024d85d665ce12cbd050c75daa8b', 'width': 1080}], 'source': {'height': 1361, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?blur=40&format=pjpg&auto=webp&s=596213adc129da988f06e83b50b5d905cd747c0f', 'width': 1577}}, 'obfuscated': {'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d0c78b280cccb63811ae86da6532c1fd0c97ee9a', 'width': 108}, {'height': 186, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=249fea738861426f74aa3f9ff0bd46fcfa4362f9', 'width': 216}, {'height': 276, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=92fac2260ddf549422ed81e142acab78b2c97234', 'width': 320}, {'height': 552, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a6fb39e9a44329be0e23e75c2d60c0bfd7772827', 'width': 640}, {'height': 828, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=5fc03588a4bd69adf4b4d0ee4b0c8b6f30f777ca', 'width': 960}, {'height': 932, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=d3061e460043024d85d665ce12cbd050c75daa8b', 'width': 
1080}], 'source': {'height': 1361, 'url': 'https://preview.redd.it/rvhj7wnchore1.png?blur=40&format=pjpg&auto=webp&s=596213adc129da988f06e83b50b5d905cd747c0f', 'width': 1577}}}}]}
|
||
Curie: Open-Source AI for Rigorous ML Research
| 1 |
[removed]
| 2025-03-29T19:42:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmufkd/curie_opensource_ai_for_rigorous_ml_research/
|
Pleasant-Type2044
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmufkd
| false | null |
t3_1jmufkd
|
/r/LocalLLaMA/comments/1jmufkd/curie_opensource_ai_for_rigorous_ml_research/
| false | false |
self
| 1 | null |
Which open source LLM to run locally on macOS to generate Ghibli-style images?
| 0 |
Basically the question mentioned in the title.
My favourite local LLMs for coding purposes are
1. Qwen 2.5 (any parameter count; the 32B one strikes the right balance)
2. Deepseek R1 (distilled)
I am also on a journey to make some kind of agent or agentic flow to see if I can sell something.
I'm learning LangGraph for the same reason.
But recently I've started looking at the open-source audio creation models too, so I think there is a different kind of market that can be captured by creating agents there.
And for now I've moved on to these image generation models.
I have MacBooks with an M3 Pro and an M3 Max with 36GB RAM. So please suggest something.
| 2025-03-29T19:55:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmupsd/which_open_source_llm_to_run_locally_on_macos_to/
|
tapu_buoy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmupsd
| false | null |
t3_1jmupsd
|
/r/LocalLLaMA/comments/1jmupsd/which_open_source_llm_to_run_locally_on_macos_to/
| false | false |
self
| 0 | null |
Can anyone tell me if MCP is possible for local inference or not?
| 0 | 2025-03-29T20:05:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmuxxe/can_anyone_tell_me_if_mcp_is_possible_for_local/
|
TheLogiqueViper
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmuxxe
| false | null |
t3_1jmuxxe
|
/r/LocalLLaMA/comments/1jmuxxe/can_anyone_tell_me_if_mcp_is_possible_for_local/
| false | false | 0 | null |
||
Open source alternative to Claude Code
| 1 |
[removed]
| 2025-03-29T20:07:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmuzl7/open_source_alternative_to_claude_code/
|
itzco1993
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmuzl7
| false | null |
t3_1jmuzl7
|
/r/LocalLLaMA/comments/1jmuzl7/open_source_alternative_to_claude_code/
| false | false |
self
| 1 | null |
Advice on Xeon 4th Gen Engineering Sample Build
| 7 |
BLUF: For a budget of $5,000, I think that a Xeon ES build would be cool / set me up for future LLM use with [ktransformers](https://github.com/kvcache-ai/ktransformers), but I would like advice
I have a grant that needs parallel CPU time (calculating satellite ephemera), and I could spend \~$5,000 for hardware that I could then keep. I'd like to try using it for LLMs and other homelabbing things. I was looking at older Epycs, but I'm leaning towards the 4th Gen ES route 1) for the PCIe Gen 5 slots, 2) investing in DDR5 (more usable in the future), and 3) it would be cool to tell people you built a rig from engineering samples from China. So, I'm looking at bundles like [this one](https://www.ebay.com/itm/226650685926?_skw=intel+xeon+platinum+8490h&itmmeta=01JQHRJY3G6S8VM7QXNA1K84TP&hash=item34c56f01e6:g:KsYAAOSwFAplqfZ0&itmprp=enc%3AAQAKAAAA4FkggFvd1GGDu0w3yXCmi1esft53RRWrKCnvBg9343XtE42Zqhqewa0TPWQvOc5zfu03cWtqhmp2gigptDN4cU82LI%2B%2FFd7cURAxicl5Y7JOuOchQgL5OROTbQM8g4lsxp7CRpT%2FXaNl%2FoQEmc2M7d9niQCuTx3gmzfdEq9tN%2B8Rfa7chsUv5%2F0siJuDMTYKZoZFrdvUbLd7k6vnAITDH4BfvHdlN%2BkySxy3al63QIbj2Pue28%2BX7KHhJncVak77E9o2jognJF%2Bvz4dV6w6CVNzY1xjphgybafRMcKpKP26D%7Ctkp%3ABk9SR_Lhy7i8ZQ), that would include:
* 8490H-ish Xeon 4th Gen (QYFX ES)
* GIGABYTE MS33-AR0
* 512gb DDR5 4800 in 8x64gb RAM
* Nvmes, PSU, tower, etc. bought in the U.S.
I could add in some of my own money to get a dual-socket, but after reading [this discussion](https://www.reddit.com/r/LocalLLaMA/comments/1izu62f/9654_vs_9175f_vs_xeon_4th_gen_with_amx_support/) and looking at benchmarks (comparing the same CPU on [single socket](https://www.cpubenchmark.net/server.html) vs. [two sockets](https://www.cpubenchmark.net/multi_cpu.html)) it doesn't seem worth the headache and the extra money for the mobo, RAM, and cpu. The "8490H" ES for dual socket also seems to be base 1.6 vs base 1.7 Ghz. I also could buy the mobo separately in the U.S. for cheaper, but I'm not sure I'd want to risk incompatibility.
If anyone has any input, I would appreciate any thoughts. And if anyone in New England wants to get together for the build, I'd be glad to have company!
| 2025-03-29T20:34:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmvk5a/advice_on_xeon_4th_gen_engineering_sample_build/
|
TrackActive841
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmvk5a
| false | null |
t3_1jmvk5a
|
/r/LocalLLaMA/comments/1jmvk5a/advice_on_xeon_4th_gen_engineering_sample_build/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'Dqgt6mCVAAyK5YDe1SxRcsiQ9CYDgS2pSpJnt7xrO5Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=108&crop=smart&auto=webp&s=c1051d2a34c5cbeb30216a2043a7545c738b9f55', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=216&crop=smart&auto=webp&s=459ae6979d034e67469be253620af04564bb766a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=320&crop=smart&auto=webp&s=e89096189ad46f383dbab4bcc62d715d79c04770', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=640&crop=smart&auto=webp&s=7f0c07d84564d4ea1ecd5d566855e70fb3bcbcb6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=960&crop=smart&auto=webp&s=68ebaf590b9e33f18aae7a8567c97a9804a4c514', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?width=1080&crop=smart&auto=webp&s=6287883db00b41097a09f4d6145ed3febdbb1d5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/R1iJL6hluRN0LDpwD9va5zadDRJefrm7HJKSo9RrA7I.jpg?auto=webp&s=54f245e22dbb94cb42cced846e1800cc8b1c5b2d', 'width': 1200}, 'variants': {}}]}
|
Local, GPU-Accelerated AI Characters with C#, ONNX & Your LLM (Speech-to-Speech)
| 1 |
[removed]
| 2025-03-29T20:42:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmvr14/local_gpuaccelerated_ai_characters_with_c_onnx/
|
fagenorn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmvr14
| false | null |
t3_1jmvr14
|
/r/LocalLLaMA/comments/1jmvr14/local_gpuaccelerated_ai_characters_with_c_onnx/
| false | false |
self
| 1 | null |
Local, GPU-Accelerated AI Characters with C#, ONNX & Your LLM (Speech-to-Speech)
| 80 |
Sharing **Persona Engine**, an open-source project I built for creating interactive AI characters. Think VTuber tech meets your local AI stack.
**What it does:**
* **Voice Input:** Listens via mic (Whisper.net ASR).
* **Your LLM:** Connects to any **OpenAI-compatible API** (perfect for Ollama, LM Studio, etc., via LiteLLM perhaps). Personality defined in personality.txt.
* **Voice Output:** Advanced TTS pipeline + optional **Real-time Voice Cloning (RVC)**.
* **Live2D Avatar:** Animates your character.
* **Spout Output:** Direct feed to OBS/streaming software.
**The Tech Deep Dive:**
* **Everything Runs Locally:** The ASR, TTS, RVC, and rendering are all done on your machine. Point it at your local LLM, and the whole loop stays offline.
* C# **Powered:** The entire engine is built in **C# on .NET 9**. This involved rewriting a lot of common Python AI tooling/pipelines, but gives us great performance and lovely async/await patterns for managing all the concurrent tasks (listening, thinking, speaking, rendering).
* **ONNX Runtime Under the Hood:** I leverage ONNX for the AI models (Whisper, TTS components, RVC). **Theoretically,** this means it could target different execution providers (DirectML for AMD/Intel, CoreML, CPU). **However,** the current build and included dependencies are optimized and primarily tested for **NVIDIA CUDA/cuDNN** for maximum performance, especially with RVC. Getting other backends working would require compiling/sourcing the appropriate ONNX Runtime builds and potentially some code adjustments.
* **Cross-Platform Potential:** Being C#/.NET means it could run on Linux/macOS, but you'd need to handle platform-specific native dependencies (like PortAudio, Spout alternatives e.g., Syphon) and compile things yourself. Windows is the main supported platform right now via the releases.
**GitHub Repo (Code & Releases):** [https://github.com/fagenorn/handcrafted-persona-engine](https://github.com/fagenorn/handcrafted-persona-engine)
**Short Demo Video:** [https://www.youtube.com/watch?v=4V2DgI7OtHE](https://www.youtube.com/watch?v=4V2DgI7OtHE) (forgive the cheesiness, I was having a bit of fun with capcut)
**Quick Heads-up:**
* For the pre-built releases: **Requires NVIDIA GPU + correctly installed CUDA/cuDNN** for good performance. The README has a detailed guide for this.
* Configure appsettings.json with your LLM endpoint/model.
* Using standard LLMs? Grab personality\_example.txt from the repo root as a starting point for personality.txt (requires prompt tuning!).
Excited to share this with a community that appreciates running things locally and diving into the tech! Let me know what you think or if you give it a spin. 😊
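For anyone unsure what "OpenAI-compatible" means here in practice, below is a minimal client-side sketch in Python (an illustration only, not part of the engine, which does the same thing from C#; the endpoint URL and model name assume a default local Ollama install and are just placeholders):

```python
# Minimal sketch: talking to a local OpenAI-compatible endpoint (assumed: Ollama on its default port).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # key is ignored locally

reply = client.chat.completions.create(
    model="llama3.1:8b",  # placeholder: any chat model you've pulled
    messages=[
        {"role": "system", "content": "You are the persona defined in personality.txt."},
        {"role": "user", "content": "Say hi to the stream chat."},
    ],
)
print(reply.choices[0].message.content)
```

Persona Engine simply points the endpoint configured in appsettings.json at the same kind of URL.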
| 2025-03-29T20:44:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmvsm3/local_gpuaccelerated_ai_characters_with_c_onnx/
|
fagenorn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmvsm3
| false | null |
t3_1jmvsm3
|
/r/LocalLLaMA/comments/1jmvsm3/local_gpuaccelerated_ai_characters_with_c_onnx/
| false | false |
self
| 80 |
{'enabled': False, 'images': [{'id': 'PgbO2fOv88hw53xhZj6hOcVji_WcR_zqCudZLV6JyFU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=108&crop=smart&auto=webp&s=ae24a666c0085e5f4552f08d166996b082656aed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=216&crop=smart&auto=webp&s=72a84c5777ba1c92aa803e58d59d3e208dabd4a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=320&crop=smart&auto=webp&s=d1f0f83329536012e4e8b48c4dd4ca9d28948d8c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=640&crop=smart&auto=webp&s=6c85601b5f300ea1876a22c773c2b988b929b8e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=960&crop=smart&auto=webp&s=785e28d71f91e5d8f6f599819a390fffcfb97314', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?width=1080&crop=smart&auto=webp&s=acc11b50326e28f5534ec748eed842ec3ef814b1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/yANHp866IPdcdVlDVIUxKaadWZ--5yaoWdiJlmsaMAE.jpg?auto=webp&s=51be477c8dfd687b99f2f2e31ba20f23a0c6657d', 'width': 1280}, 'variants': {}}]}
|
GMKtec announces imminent availability of Strix Halo EVO-X2 mini PC
| 27 | 2025-03-29T20:50:00 |
https://www.notebookcheck.net/GMKtec-announces-imminent-availability-of-Strix-Halo-EVO-X2-mini-PC.989734.0.html
|
fairydreaming
|
notebookcheck.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmvwi5
| false | null |
t3_1jmvwi5
|
/r/LocalLLaMA/comments/1jmvwi5/gmktec_announces_imminent_availability_of_strix/
| false | false |
default
| 27 | null |
|
Video with some of the tasks in ARC-AGI-2, contains spoilers
| 15 | 2025-03-29T20:50:12 |
https://www.youtube.com/watch?v=3ki7oWI18I4
|
neoneye2
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmvwnx
| false |
{'oembed': {'author_name': 'Simon Strandgaard', 'author_url': 'https://www.youtube.com/@simonstrandgaard5503', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/3ki7oWI18I4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ARC-AGI-2 task speedrun with spoilers #ARCAGI #iqtest #ARCPrize"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/3ki7oWI18I4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'ARC-AGI-2 task speedrun with spoilers #ARCAGI #iqtest #ARCPrize', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jmvwnx
|
/r/LocalLLaMA/comments/1jmvwnx/video_with_some_of_the_tasks_in_arcagi2_contains/
| true | false |
spoiler
| 15 |
{'enabled': False, 'images': [{'id': 'SMV5aNZO_9Qxk_djfqR5JC06XIjaieMwpXFyK0-Bvfg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=108&crop=smart&auto=webp&s=2fcfee744656db35e7988a8fc1ba336f8ceb2c7e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=216&crop=smart&auto=webp&s=663911f7b6eb190ffe7411cef618703eab18c544', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=320&crop=smart&auto=webp&s=3171bdd604d6700bf8489d1c19389a576272e037', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?auto=webp&s=d6890e374a2dcdde38bbe610a402f337c53da57b', 'width': 480}, 'variants': {'obfuscated': {'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=0a038ad45db9aa897b91791f0f5ecdc1459b7b8e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=52192e40ba5b3a7a4fd4b4b70c1c1a7beaa68f6c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a431bcf73d85d3251e11ca073a19c4da7bd94f33', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/2WLGAwDIufnt1nKZHeJHpJvJ0V2ZWo7xHbvgpv-iViA.jpg?blur=40&format=pjpg&auto=webp&s=34410b9573bed78ebafd14ed988c1f272f331d74', 'width': 480}}}}]}
|
|
Someone created a highly optimized RDNA3 kernel that outperforms RocBlas by 60% on 7900XTX. How can I implement this and would it significantly benefit LLM inference?
| 150 | 2025-03-29T21:41:45 |
https://seb-v.github.io/optimization/update/2025/01/20/Fast-GPU-Matrix-multiplication.html
|
Thrumpwart
|
seb-v.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmx0ih
| false | null |
t3_1jmx0ih
|
/r/LocalLLaMA/comments/1jmx0ih/someone_created_a_highly_optimized_rdna3_kernel/
| false | false | 150 |
{'enabled': False, 'images': [{'id': 'pdtqiXajX7QVkzPNOS6QWCqZ-wqimEbHkURA3IVbYFg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?width=108&crop=smart&auto=webp&s=cc2d26608a9e037b127d75d08765c9f92f1e9105', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?width=216&crop=smart&auto=webp&s=15d65431bc0b529cbc6efea6d77ba017db662ca3', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?width=320&crop=smart&auto=webp&s=0f51415b0bdbdc7b06a231eb9bc7b602248012c5', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?width=640&crop=smart&auto=webp&s=d50ef4e8c8eec3ec397e0751d55c871986cab02e', 'width': 640}, {'height': 488, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?width=960&crop=smart&auto=webp&s=98e6af3cffd1924231f4c9f4f135907f33c84143', 'width': 960}], 'source': {'height': 539, 'url': 'https://external-preview.redd.it/1DvBQgPBbFWMlcok52huGfBv7vgJ1oQojIIBOC8IpDA.jpg?auto=webp&s=2bb71c2f0344ebb22fad0cd568390c8e0cc22520', 'width': 1060}, 'variants': {}}]}
|
||
Openrouter is working today? problem with Aider? or just me.
| 0 |
From Aider, I'm getting this all day:
litellm.APIConnectionError: APIConnectionError: OpenrouterException - 'choices'
The OpenRouter API provider is down or overloaded.
Retrying in 0.2 seconds...
| 2025-03-29T21:44:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmx2j6/openrouter_is_working_today_problem_with_aider_or/
|
9acca9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmx2j6
| false | null |
t3_1jmx2j6
|
/r/LocalLLaMA/comments/1jmx2j6/openrouter_is_working_today_problem_with_aider_or/
| false | false |
self
| 0 | null |
Cline LoRA for Ollama
| 1 |
Been deep in the "vibe coding" world lately and hitting a frustrating wall - I'm poor.
Using Anthropic or OpenRouter is bleeding me dry. I've made solid progress, but scaling anything meaningful costs enough to hurt pretty badly and makes me pump the brakes after reviewing my credit purchases. Anyone else feeling this pain?
I've been experimenting with running newer models on my 3090. The code output is surprisingly reliable, though it requires copy-paste testing as the local models can't seem to use Cline's instruction set. Currently running VS Code with Claude/RooClaude integration with Claude 3.5 (and sometimes Gemini), which gives amazing control without too much manual work.
Could training be done on local models with Cline's instruction set to improve the model's ability to use Cline? It would also be awesome to have a LoRA for the specific tech stack that I'm using as well... That'd be lagniappe.
In short----
- Coding w Cline is expensive
**The missing piece?** The true fix -
Train a LoRA on Cline's instruction set that can run on a local Ollama model
Has anyone seen development in this direction? Seems like this could democratize AI coding assistance and free us from the financial stranglehold of cloud providers.
Any projects I should know about? Or should I just bite the bullet and start building this myself?
| 2025-03-29T21:57:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmxc94/clien_lora_for_ollama/
|
eatTheRich711
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmxc94
| false | null |
t3_1jmxc94
|
/r/LocalLLaMA/comments/1jmxc94/clien_lora_for_ollama/
| false | false |
self
| 1 | null |
SplitQuantV2: Enhancing Low-Bit Quantization of LLMs Without GPUs
| 34 | 2025-03-29T21:58:54 |
https://arxiv.org/abs/2503.07657
|
nuclearbananana
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmxdgg
| false | null |
t3_1jmxdgg
|
/r/LocalLLaMA/comments/1jmxdgg/splitquantv2_enhancing_lowbit_quantization_of/
| false | false |
default
| 34 | null |
|
Ollama LoRA for Cline Functionality
| 0 |
Been deep in the "vibe coding" world lately and hitting a frustrating wall - I'm poor.
Using Anthropic or OpenRouter is bleeding me dry. I've made solid progress, but scaling anything meaningful costs enough to hurt pretty badly and makes me pump the brakes after reviewing my credit purchases. Anyone else feeling this pain?
I've been experimenting with running newer models on my 3090. The code output is surprisingly reliable, though it requires copy-paste testing as the local models can't seem to use Cline's instruction set. Currently running VS Code with Claude/RooClaude integration with Claude 3.5 (and sometimes Gemini), which gives amazing control without too much manual work.
Could training be done on local models with Cline's instruction set to improve the model's ability to use Cline? It would also be awesome to have a LoRA for the specific tech stack that I'm using as well... That'd be lagniappe.
In short----
- Coding w Cline is expensive
**The missing piece?** The true fix -
Train a LoRA on Cline's instruction set that can run on a local Ollama model
Has anyone seen development in this direction? Seems like this could democratize AI coding assistance and free us from the financial stranglehold of cloud providers.
Any projects I should know about? Or should I just bite the bullet and start building this myself?
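For illustration, here is a rough sketch of the kind of LoRA fine-tune I mean (a sketch only: the base model, dataset file, and hyperparameters are placeholders, it assumes a JSONL dataset of Cline-style tool-use transcripts with a `text` field already exists, and exact `trl` argument names vary between versions):

```python
# Hedged sketch: QLoRA-style fine-tune on Cline-style transcripts (placeholders throughout).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from trl import SFTTrainer, SFTConfig

base = "Qwen/Qwen2.5-Coder-7B-Instruct"  # placeholder base model that fits a 3090 in 4-bit
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

dataset = load_dataset("json", data_files="cline_traces.jsonl", split="train")  # hypothetical file
trainer = SFTTrainer(model=model, train_dataset=dataset, args=SFTConfig(output_dir="cline-lora"))
trainer.train()
```

Even with a trained adapter, getting it back into Ollama means merging and converting to GGUF, which is its own chunk of work.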
| 2025-03-29T22:00:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmxebp/ollama_lora_for_cline_functionality/
|
eatTheRich711
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmxebp
| false | null |
t3_1jmxebp
|
/r/LocalLLaMA/comments/1jmxebp/ollama_lora_for_cline_functionality/
| false | false |
self
| 0 | null |
Looking for the way to add decent voice AI communication to my app for a low price.
| 0 |
I'm building a native app, like one of those AI therapists/assistants/coaches. Currently I'm using Hume AI for ~$0.07/minute, and I'm wondering if there is a way to go cheaper than that.
I've checked ElevenLabs and it's even higher than that.
I doubt that apps like Summit AI, MindChat, or Character.ai are paying $0.07/minute for their conversational AI.
I checked LiveKit, but I'm not sure how to evaluate the pricing or how exactly it works. Can anyone point me in the right direction to cut costs?
| 2025-03-29T22:55:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmyk2o/looking_for_the_way_to_add_decent_voice_ai/
|
libinpage
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmyk2o
| false | null |
t3_1jmyk2o
|
/r/LocalLLaMA/comments/1jmyk2o/looking_for_the_way_to_add_decent_voice_ai/
| false | false |
self
| 0 | null |
Moondream 2025-03-27 Release
| 168 | 2025-03-29T23:10:50 |
https://moondream.ai/blog/moondream-2025-03-27-release
|
ninjasaid13
|
moondream.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmyvpd
| false | null |
t3_1jmyvpd
|
/r/LocalLLaMA/comments/1jmyvpd/moondream_20250327_release/
| false | false | 168 |
{'enabled': False, 'images': [{'id': 'I4nGtf-_fcbiUzC2EAx5-lrIskf8tUJxYc-D9sBE30s', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=108&crop=smart&auto=webp&s=28f11ef84bdc2da5d75da498f8609fcef4370fe3', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=216&crop=smart&auto=webp&s=f8e2c725a13b079083efc53b381e7d580c10ea29', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=320&crop=smart&auto=webp&s=5ad19eb971b1321efadd48db1d150186bf71c71d', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=640&crop=smart&auto=webp&s=149650fb9eeb4a3684aaac7092e35ed112f39db9', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=960&crop=smart&auto=webp&s=0ee78c000ab2312ad62b63bd8c5ddbc69282ffe8', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?width=1080&crop=smart&auto=webp&s=b40e04df3ff2225d8ac967d350c0799092421a7d', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/GtrXq5esaL1vBtb6j5XRN12_1xTaHr3DjPq8-x_uFDM.jpg?auto=webp&s=1ad14ba6082c02be68517784b90f1e4320e560ed', 'width': 1280}, 'variants': {}}]}
|
||
Recommendations for CPU only server ?
| 1 |
[removed]
| 2025-03-29T23:46:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jmzmmz/recommendations_for_cpu_only_server/
|
un_passant
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jmzmmz
| false | null |
t3_1jmzmmz
|
/r/LocalLLaMA/comments/1jmzmmz/recommendations_for_cpu_only_server/
| false | false |
self
| 1 | null |
Your data doesn't seem safe with Deepseek V3.
| 0 |
Data Usage Policy (Section 4.2)
DeepSeek can use your Inputs and Outputs to "improve services."
Translation: We use your chat history to train our models.
And there is NO WAY TO OPT OUT. Even ChatGPT provides AN OPTION TO OPT OUT if you use their API without using their UI.
Why are we all turning DeepSeek into a hero when it clearly is not one and is just walking the same path the other forerunners have taken?
| 2025-03-30T00:32:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn0jgj/your_data_doesnt_seem_safe_with_deepseek_v3/
|
ExtremePresence3030
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn0jgj
| false | null |
t3_1jn0jgj
|
/r/LocalLLaMA/comments/1jn0jgj/your_data_doesnt_seem_safe_with_deepseek_v3/
| false | false |
self
| 0 | null |
Gemini 2.5 Pro unusable for coding?
| 37 |
Something really strange is going on with Gemini 2.5 Pro.
On one hand, it's supposedly the smartest coding model ever made. But on the other hand, I ask it to add one single parameter, and instead of a simple 2-line diff, it generates a 35-line one where it randomly changes logic, removes a time.sleep() from an API call pagination loop, and is generally just totally "drunk" about what I asked it to do. It's somehow both pedantic and drunk at the same time.
Every other model, even much smaller ones, can easily make the 2-line change and leave everything else alone.
I'm wondering how this thing beat the Aider leaderboard. Did something change since the launch?
Setting temp to 0.0 doesn't help either.
| 2025-03-30T00:56:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn10lx/gemini_25_pro_unusable_for_coding/
|
hyperknot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn10lx
| false | null |
t3_1jn10lx
|
/r/LocalLLaMA/comments/1jn10lx/gemini_25_pro_unusable_for_coding/
| false | false |
self
| 37 | null |
Which LLMs are the best and open source for code generation?
| 1 |
I am planning to build an agent for code generation, and with all the new models coming up I am confused about which model to use. I am testing feasibility on Llama 3.3 70B, Qwen 2.5 Coder 32B, and Mistral Chat, which are available for free use on their respective websites and Spaces.
What I found was that, as long as the code remained simple, with fewer complexities in the given prompt, Llama did better, but as we increased the complexity, Mistral did better than the other models mentioned. Grok gave very convincing answers with fewer rewrites. Now, how should I go about building the system, and which model should I use?
It would be great if you could tell me a model with an API to use (like Gradio).
Also, I am planning to use an interpreter tool in the chain to run the generated code and send it back if any issues are found; I am planning to use Riza or Bearly, and any suggestions on this would be great.
TL;DR: which code LLM should I use with open API access, if available, and which interpreter tool should I use for Python in LangChain?
| 2025-03-30T01:27:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn1mao/which_llms_are_the_best_and_opensource_for_force/
|
According_Fig_4784
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn1mao
| false | null |
t3_1jn1mao
|
/r/LocalLLaMA/comments/1jn1mao/which_llms_are_the_best_and_opensource_for_force/
| false | false |
self
| 1 | null |
Which LLMs are the best and open source for code generation?
| 8 |
I am planning to build an agent for code generation, and with all the new models coming up I am confused about which model to use. I am testing feasibility on Llama 3.3 70B, Qwen 2.5 Coder 32B, and Mistral Chat, which are available for free use on their respective websites and Spaces.
What I found was that, as long as the code remained simple, with fewer complexities in the given prompt, Llama did better, but as we increased the complexity, Mistral did better than the other models mentioned. Grok gave very convincing answers with fewer rewrites. Now, how should I go about building the system, and which model should I use?
It would be great if you could tell me a model with an API to use (like Gradio).
Also, I am planning to use an interpreter tool in the chain to run the generated code and send it back if any issues are found; I am planning to use Riza or Bearly, and any suggestions on this would be great.
TL;DR: which code LLM should I use with open API access, if available, and which interpreter tool should I use for Python in LangChain?
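For what it's worth, here is a minimal sketch of the generate, run, and feed-errors-back loop I have in mind (not LangChain-specific; the model name is a placeholder, the endpoint is assumed to be OpenAI-compatible, and there is no sandboxing here, which is exactly why hosted interpreters like Riza or Bearly look attractive):

```python
# Hedged sketch: ask a model for code, run it, and loop any error back for a fix.
import subprocess, sys, tempfile
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url=...) for a local / hosted open model

def run_python(code: str) -> tuple[bool, str]:
    """Run a snippet in a subprocess and return (success, combined output). No sandboxing."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

task = "Write a Python script that prints the first 10 prime numbers."
messages = [{"role": "user", "content": task + " Reply with code only, no markdown fences."}]
for _ in range(3):  # at most three repair rounds
    code = client.chat.completions.create(model="PLACEHOLDER_MODEL",
                                          messages=messages).choices[0].message.content
    ok, output = run_python(code)
    if ok:
        break
    messages += [{"role": "assistant", "content": code},
                 {"role": "user", "content": f"That failed with:\n{output}\nPlease fix it."}]
```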
| 2025-03-30T01:29:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn1njb/which_llms_are_the_best_and_opensource_for_code/
|
According_Fig_4784
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn1njb
| false | null |
t3_1jn1njb
|
/r/LocalLLaMA/comments/1jn1njb/which_llms_are_the_best_and_opensource_for_code/
| false | false |
self
| 8 | null |
Is it possible to solder 2GB GDDR6 modules onto an RTX 3090 and flash the BIOS to an A6000? Has anyone tried?
| 1 |
[removed]
| 2025-03-30T01:50:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn20z0/is_it_possible_to_solder_2gb_gddr6_modules_onto/
|
throwaway_7771
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn20z0
| false | null |
t3_1jn20z0
|
/r/LocalLLaMA/comments/1jn20z0/is_it_possible_to_solder_2gb_gddr6_modules_onto/
| false | false |
self
| 1 | null |
Would it be possible to replace GDDR6X chips with GDDR6 as long as the BIOS supports it?
| 1 |
Say for example I have a GPU that ships from the factory with GDDR6X
I would like to replace the GDDR6X modules with GDDR6. I've found a BIOS that will allow me to do so
Are the VRAM modules interchangeable on an electrical level? Does GDDR6X have the same pinouts as GDDR6?
| 2025-03-30T01:54:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn23i5/would_it_be_possible_to_replace_gddr6x_chips_with/
|
Sm0keScr3en
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn23i5
| false | null |
t3_1jn23i5
|
/r/LocalLLaMA/comments/1jn23i5/would_it_be_possible_to_replace_gddr6x_chips_with/
| false | false |
self
| 1 | null |
I'm 14 and built an LLM architecture, but I can't afford to train it.
| 0 |
So I'm 14, made my own LLM architecture, and don't have the funds to train a model on the architecture. It's called IMoE (InterMindofExperts), and it's very modular. It's expensive to train but designed to be super efficient during inference. It's a three-part system with a classifier, experts, and a summarizer. It's inspired by traditional MoEs but designed to solve their most prevalent pain points.
I documented the whole architecture and am planning to implement it with Claude (because I can't run models locally, my computer is that bad). Contrary to the message that sends, this is supposed to put open source ahead, as I believe AI advances much, much, much faster when things are available to everyone.
I don't have a way to donate set up right now, but I'll make a post saying when I do. If you're interested in supporting this cause and just helping me implement what I've spent a long time on, please consider donating once it's up. This is the link to [my GitHub repo](https://github.com/kirblol/IMoE/).
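To make the three-part flow concrete, here is a rough sketch of the idea (an illustration only, not the actual IMoE implementation; the expert list, model name, and routing prompt are placeholders):

```python
# Illustrative sketch of a classifier -> experts -> summarizer loop (not the IMoE repo code).
from openai import OpenAI

client = OpenAI()  # any chat-completions endpoint works for the sketch
EXPERTS = {
    "math": "You are a careful mathematician. Show your working.",
    "code": "You are a senior software engineer. Prefer small, correct examples.",
    "general": "You are a helpful generalist assistant.",
}

def ask(system: str, user: str, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system}, {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def imoe(query: str) -> str:
    # 1) classifier: decide which experts should see the query
    picked = ask("Reply ONLY with a comma-separated subset of: " + ", ".join(EXPERTS), query)
    chosen = [p.strip() for p in picked.split(",") if p.strip() in EXPERTS] or ["general"]
    # 2) experts: each answers independently
    answers = [f"[{name}] " + ask(EXPERTS[name], query) for name in chosen]
    # 3) summarizer: merge the expert answers into one reply
    return ask("Merge these expert answers into one coherent response.",
               query + "\n\n" + "\n---\n".join(answers))
```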
| 2025-03-30T02:27:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn2p5b/im_14_and_built_an_llm_architecture_but_i_cant/
|
kirbdabirb19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn2p5b
| false | null |
t3_1jn2p5b
|
/r/LocalLLaMA/comments/1jn2p5b/im_14_and_built_an_llm_architecture_but_i_cant/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'YjbgDAnUFiZkUXOHlJBfeAq-H1S4xg4bEadFeBP3FjU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=108&crop=smart&auto=webp&s=d6f2efbc7eb5171b04928055f513237245972137', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=216&crop=smart&auto=webp&s=12b962ede1d27e980e0e99197778e4d88eda183b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=320&crop=smart&auto=webp&s=67ab7a8a614cadfd75e04eb581faebb225f9ada4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=640&crop=smart&auto=webp&s=1de9e0e4db1b9ee2d8aceb3bad9a32d35d42ef2f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=960&crop=smart&auto=webp&s=8be54295aa69e5071e66080f3b54d2374f14b512', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?width=1080&crop=smart&auto=webp&s=6384de7aeb5f93f2285bcea7a215bf0de91203a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/otWQFpBKM_wgHka6f6U5UEim8wxBRYDwNlpkDSyVJdo.jpg?auto=webp&s=31c05040c560b7a1dd36d4ad2357ea3f36427248', 'width': 1200}, 'variants': {}}]}
|
Best 'Entry' Level AMD 16gb GPU?
| 0 |
Hi all,
I'm just starting out on running local LLMs, on Linux (Pop!_OS specifically). I'll likely be using Ollama for my tasks as it integrates well with my system.
I plan to use AI for things like summarizing documents, sorting and categorizing docs, etc. I don't plan to do much machine learning; I'm just looking at integrating AI into my workflow. A typical document might be 10-50 pages of a book, or a bunch of PDFs.
Finally, I'd like to do some basic data crunching, regression etc for datasets. Nothing ginormous, just basic stuff.
I'm open to Nvidia, but I look at this gpu as 'entry level' meaning if I can integrate AI into my workflow in a significant way, I'll then look at something like a Nvidia 'Digits' dedicated computer for serving AI. Or maybe a beefier, heavy duty Nvidia card for 5-6x the price. I'd like to keep it under $600 USD or so at this point. So it seems I can get more bang for my limited buck from AMD.
I'm running an ASUS x570 motherboard with 64gb ram, 8 cores if it matters.
Looking forward to some genius suggestions!
| 2025-03-30T02:55:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn370u/best_entry_level_amd_16gb_gpu/
|
JohannesComstantine
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn370u
| false | null |
t3_1jn370u
|
/r/LocalLLaMA/comments/1jn370u/best_entry_level_amd_16gb_gpu/
| false | false |
self
| 0 | null |
finetune llm to make comfyui workflow
| 1 |
Hello, I'm new to the field of LLM training. I'm thinking of finetuning a small, open-source model as an initial step towards creating and editing images through prompts only, where it will be trained on ComfyUI JSON workflow files. What are good, lightweight, open-source models suitable for this task? I believe there are many datasets available, but if there are any additional tips, I'd be happy to discuss them.
| 2025-03-30T03:20:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn3miw/finetune_llm_to_make_comfyui_workflow/
|
kigy_x
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn3miw
| false | null |
t3_1jn3miw
|
/r/LocalLLaMA/comments/1jn3miw/finetune_llm_to_make_comfyui_workflow/
| false | false |
self
| 1 | null |
Suitable lightweight LLMs for MCP and Generative AI
| 1 |
[removed]
| 2025-03-30T04:14:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn4ix3/suitable_lightweight_llms_for_mcp_and_generative/
|
Alphawolf3824
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn4ix3
| false | null |
t3_1jn4ix3
|
/r/LocalLLaMA/comments/1jn4ix3/suitable_lightweight_llms_for_mcp_and_generative/
| false | false |
self
| 1 | null |
Suitable lightweight LLM for running with MCP and Generative AI (Rhino+Grasshopper,Comfyui)
| 1 |
[removed]
| 2025-03-30T04:27:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn4qb3/suitable_lightweight_llm_for_running_with_mcp_and/
|
Alphawolf3824
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn4qb3
| false | null |
t3_1jn4qb3
|
/r/LocalLLaMA/comments/1jn4qb3/suitable_lightweight_llm_for_running_with_mcp_and/
| false | false |
self
| 1 | null |
Google’s Titan Architecture
| 1 |
[removed]
| 2025-03-30T04:29:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn4rfw/googles_titan_architecture/
|
GrapplerGuy100
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn4rfw
| false | null |
t3_1jn4rfw
|
/r/LocalLLaMA/comments/1jn4rfw/googles_titan_architecture/
| false | false |
self
| 1 | null |
RAG Observations
| 0 |
I’ve been into computers for a long time. I started out programming in BASIC years ago, and while I’m not a developer AT ALL, I’ve always enjoyed messing with tech. I have been exploring AI, especially local LLMs and I am interested how RAG systems can help.
Right now I’m trying to build (with AI "help") a lightweight AI Help Desk that uses a small language model with a highly optimized RAG backend. The goal is to see how much performance I can get out of a low-resource setup by focusing on smart retrieval. I’m aiming to use components like **e5-small-v2** for dense embeddings, **BM25** for sparse keyword matching, and **UPR** for unsupervised re-ranking to tighten up the results. This is taking a while. UGH!
While working on this project, I've also been converting raw data into semantically meaningful chunks optimized for retrieval in a RAG setup, so I wanted to see how this would perform in a test. I tried a couple of easy-to-use systems...
While testing platforms like AnythingLLM and LM Studio, even with larger models like Gemma 3 12B, I noticed a surprising amount of hallucination, even when feeding in a small, well-structured sample database. It raised some questions for me:
Are these tools doing shallow or naive retrieval that undermines the results?
Is the model ignoring the retrieved context, or is the chunking strategy too weak?
With the right retrieval pipeline, could a smaller model actually perform more reliably?
What am I doing wrong?
I understand those platforms are meant to be user-friendly and generalized, but I’m aiming for something a bit more deliberate and fine-tuned. Just curious if others have run into similar issues or have insights into where things tend to fall apart in these implementations.
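For concreteness, this is the kind of hybrid retrieval I mean, as a minimal sketch (the toy chunks, fusion weight, and normalization are assumptions rather than tuned values, and the UPR re-ranking stage is omitted):

```python
# Minimal hybrid retrieval sketch: dense e5-small-v2 plus BM25 with simple score fusion.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

chunks = [
    "To reset a user password, open the admin console and choose Users > Reset.",
    "VPN access requires installing the corporate certificate before first login.",
    "Stuck print jobs are usually fixed by clearing the spooler service.",
]

encoder = SentenceTransformer("intfloat/e5-small-v2")          # e5 expects query:/passage: prefixes
chunk_vecs = encoder.encode([f"passage: {c}" for c in chunks], normalize_embeddings=True)
bm25 = BM25Okapi([c.lower().split() for c in chunks])

def retrieve(query: str, k: int = 2, alpha: float = 0.6):
    q_vec = encoder.encode(f"query: {query}", normalize_embeddings=True)
    dense_scores = chunk_vecs @ q_vec                          # cosine similarity (unit vectors)
    sparse_scores = np.array(bm25.get_scores(query.lower().split()))
    sparse_scores = sparse_scores / (sparse_scores.max() + 1e-9)  # crude rescale toward [0, 1]
    fused = alpha * dense_scores + (1 - alpha) * sparse_scores
    return [chunks[i] for i in np.argsort(fused)[::-1][:k]]

print(retrieve("how do I reset my password?"))
```

If a generalist front end still hallucinates with retrieval like this in place, the chunking or the amount of retrieved context actually handed to the model is usually the next thing to check.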
Thanks!
| 2025-03-30T05:28:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn5ngq/rag_observations/
|
v1sual3rr0r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn5ngq
| false | null |
t3_1jn5ngq
|
/r/LocalLLaMA/comments/1jn5ngq/rag_observations/
| false | false |
self
| 0 | null |
Sesame CSM, and building an open source realtime version, and more…
| 1 |
[removed]
| 2025-03-30T05:31:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn5p23/sesame_csm_and_building_an_open_source_realtime/
|
Specialist-Value-378
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn5p23
| false | null |
t3_1jn5p23
|
/r/LocalLLaMA/comments/1jn5p23/sesame_csm_and_building_an_open_source_realtime/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ALvO0UwZODj7Mx_z9pqYh4rE5zXNOaoNgZoy7Ex9bPM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=108&crop=smart&auto=webp&s=37506e3ac24db95dc5545b67defdf1a8d2d00c04', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=216&crop=smart&auto=webp&s=df6ebf82293a9f4e65f7d164088b16844960fd36', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=320&crop=smart&auto=webp&s=a568449f2fc06377c18158cae96b21b30ea54c6b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=640&crop=smart&auto=webp&s=1c2b382f99e013187fac6c4280a099933e4b0d47', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=960&crop=smart&auto=webp&s=b8fe6bc282b17a7831decfab7e61978156af4fc5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?width=1080&crop=smart&auto=webp&s=092668d4239bd2181ed1011846370fbbbfb2cb20', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/0Oou01TWJtl4pCxAfr8J8uVDtgZi-bGwYpd_n05vFC0.jpg?auto=webp&s=6947dcbd44381523b0c1b480eac830d1e29bddbc', 'width': 1200}, 'variants': {}}]}
|
MacBook M4 Max isn't great for LLMs
| 430 |
I had an M1 Max and recently upgraded to an M4 Max - the inference speed difference is a huge improvement (~3x), but it's still much slower than a five-year-old RTX 3090 you can get for $700 USD.
While it's nice to be able to load large models, they're just not gonna be very usable on that machine. An example: a pretty small 14B distilled Qwen 4-bit quant runs pretty slowly for coding (40 tps, with diffs frequently failing and needing redos), and quality is very low. The 32B is pretty much unusable via Roo Code and Cline because of the low speed.
And this is the best that money can buy you in an Apple laptop.
Those are very pricey machines, and I don't see any mention that they aren't practical for local AI. You're likely better off getting a 1-2 generation old Nvidia rig if you really need it, or renting, or just paying for an API, as the quality/speed difference will be night and day without the upfront cost.
If you're getting an MBP - save yourself thousands of dollars and just get the minimal RAM you need with a bit of extra SSD, and use more specialized hardware for local AI.
It's an awesome machine; all I'm saying is it probably won't deliver if you have high AI expectations for it.
| 2025-03-30T05:42:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn5uto/macbook_m4_max_isnt_great_for_llms/
|
val_in_tech
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn5uto
| false | null |
t3_1jn5uto
|
/r/LocalLLaMA/comments/1jn5uto/macbook_m4_max_isnt_great_for_llms/
| false | false |
self
| 430 | null |
Install Llama-cpp-python locally
| 1 |
[removed]
| 2025-03-30T05:44:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn5vvg/install_llamacpppython_locally/
|
Turbulent-Log5758
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn5vvg
| false | null |
t3_1jn5vvg
|
/r/LocalLLaMA/comments/1jn5vvg/install_llamacpppython_locally/
| false | false |
self
| 1 | null |
Setting up LLaMA for real-time screen analysis
| 1 |
[removed]
| 2025-03-30T06:09:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn6818/setting_up_llama_for_realtime_screen_analysis/
|
mynewopportunities
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn6818
| false | null |
t3_1jn6818
|
/r/LocalLLaMA/comments/1jn6818/setting_up_llama_for_realtime_screen_analysis/
| false | false |
self
| 1 | null |
Performance tuning: Running LLaMA on consumer hardware
| 1 |
[removed]
| 2025-03-30T06:10:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn68uy/performance_tuning_running_llama_on_consumer/
|
mynewopportunities
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn68uy
| false | null |
t3_1jn68uy
|
/r/LocalLLaMA/comments/1jn68uy/performance_tuning_running_llama_on_consumer/
| false | false |
self
| 1 | null |
CUDA GPUs vs Price Tradeoff (Local CSM/Sesame on RX GPU)
| 1 |
Is it possible to run a Llama 1B locally alongside another [model](https://huggingface.co/sesame/csm-1b) that explicitly mentions the need for CUDA-compatible hardware (CUDA 12.4 or 12.6) on an RX GPU with a CUDA adapter (ZLUDA or another variety) with 16-20 GB VRAM, and get performance similar to native CUDA?
Now, is the potentially better performance from running on an Nvidia GPU worth ~$800? I'm not technically on a budget, but I'd prefer not to burn all my cash given the GPU market.
I'm trying to get ~20 T/s on the 1B Llama, ***at least***. Running it in the cloud is not an option.
| 2025-03-30T07:00:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn6w95/cuda_gpus_vs_price_tradeoff_local_csmsesame_on_rx/
|
elchurnerista
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn6w95
| false | null |
t3_1jn6w95
|
/r/LocalLLaMA/comments/1jn6w95/cuda_gpus_vs_price_tradeoff_local_csmsesame_on_rx/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'gbs4qZh1spFWJrhwRhfcgIdUX8vc72SKePKYNp-dFiM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=108&crop=smart&auto=webp&s=55a96739397cb41ce3f3d7ab27648dec70aec8f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=216&crop=smart&auto=webp&s=052046859f06bfeb8bc60adcfb1132b10e5a0cb7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=320&crop=smart&auto=webp&s=df6a15b03946c20cf8716aae62e10402c6f2ea04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=640&crop=smart&auto=webp&s=7603ead90be1c330c43e0464d2d0b82c255b3eca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=960&crop=smart&auto=webp&s=f288a1a5f0e9e9dfdaa7e38f6a08bac2ffb6a7ad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?width=1080&crop=smart&auto=webp&s=ee7e03bfebec664736bbc1f7247466c0ec00b54f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oxoAqvccjAn6DSkiGJ1pHY7r2EHuzKX7FpDOEM-XACA.jpg?auto=webp&s=e537439763eccbca62dc915f3a8694da436ce0c2', 'width': 1200}, 'variants': {}}]}
|
Agentic coding with LLMs
| 0 |
Is anyone successfully using agents to create code with local LLMs - where the files are written for you with tooling, rather than just copying and pasting the code into files you have created yourself?
| 2025-03-30T07:21:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn766v/agentic_coding_with_llms/
|
mobileappz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn766v
| false | null |
t3_1jn766v
|
/r/LocalLLaMA/comments/1jn766v/agentic_coding_with_llms/
| false | false |
self
| 0 | null |
Planning an initial (but future-proofed) build of rackmount inference rig
| 1 |
[removed]
| 2025-03-30T07:26:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn78im/planning_an_initial_but_futureproofed_build_of/
|
spottypress
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn78im
| false | null |
t3_1jn78im
|
/r/LocalLLaMA/comments/1jn78im/planning_an_initial_but_futureproofed_build_of/
| false | false | 1 | null |
|
Exo and Thunderbolt 5 link aggregation on Mac studio ?
| 0 |
Suppose I have two Mac studio each having four Thunderbolt 5 ports.
I was wondering if it’s possible to cluster them together. Say by connecting them using two or more thunderbolt 5 ports to increase bandwidth for Exo.
Is it possible now or ever?. I think the hardware allows it but I don’t know about Mac OS or Exo.
| 2025-03-30T07:51:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn7kmq/exo_and_thunderbolt_5_link_aggregation_on_mac/
|
No_Conversation9561
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn7kmq
| false | null |
t3_1jn7kmq
|
/r/LocalLLaMA/comments/1jn7kmq/exo_and_thunderbolt_5_link_aggregation_on_mac/
| false | false |
self
| 0 | null |
Possible to run Qwen2.5-VL-32B-Instruct locally?
| 0 |
I want to run Qwen2.5-VL-32B-Instruct locally, but I'm unsure whether I can. Currently I am messing around with Qwen2.5-VL-7B-Instruct using vLLM, and I can't get my head around the GPU memory optimizations I'm making. This is the command I use to serve the model (successfully; flash attention is also installed):
>vllm serve Qwen/Qwen2.5-VL-7B-Instruct --port 8000 --host 0.0.0.0 --dtype bfloat16 --max-model-len 32768 --gpu-memory-utilization 0.98
While this command gives an error (CUDA error: invalid argument):
>vllm serve Qwen/Qwen2.5-VL-7B-Instruct --port 8000 --host 0.0.0.0 --dtype bfloat16 --max-model-len 32768 --gpu-memory-utilization 0.98 --cpu_offload_gb 4
I want to know if I can run the 32B variant using any quantization on my hardware and, if so, what framework / model-hosting library I can use.
My HW setup:
AMD Ryzen 9 3950X CPU, 16 GB RAM (will add more), 1x RTX 3090, 2 TB storage
Edit 1: I need the best performance possible and also to be able to run quantized models.
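For reference, a hedged sketch of one way the 32B variant might be attempted on a single 24 GB card via the vLLM Python API (the AWQ repo name below is an assumption - check what has actually been published - and the reduced context length is only there to leave KV-cache headroom):

```python
# Hedged sketch: a 4-bit AWQ build of the 32B model with a shorter context window.
# The repo name is assumed; real multimodal requests also need the image passed through
# vLLM's multi-modal inputs, which this text-only call skips.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-32B-Instruct-AWQ",  # assumed AWQ checkpoint (~18-19 GB of weights)
    quantization="awq",
    max_model_len=8192,                        # much smaller KV cache than 32k
    gpu_memory_utilization=0.95,
)
outputs = llm.generate(["Describe what a vision-language model does in one sentence."],
                       SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```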
| 2025-03-30T08:42:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn87nf/possible_to_run_qwen25vl32binstruct_locally/
|
BABA_yaaGa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn87nf
| false | null |
t3_1jn87nf
|
/r/LocalLLaMA/comments/1jn87nf/possible_to_run_qwen25vl32binstruct_locally/
| false | false |
self
| 0 | null |
How do I fine tune an LLM that mimics Reddit comments and isn't too 'AI-generated'?
| 1 |
[removed]
| 2025-03-30T08:45:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn89ax/how_do_i_fine_tune_an_llm_that_mimics_reddit/
|
xkcd690
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn89ax
| false | null |
t3_1jn89ax
|
/r/LocalLLaMA/comments/1jn89ax/how_do_i_fine_tune_an_llm_that_mimics_reddit/
| false | false |
self
| 1 | null |
What is the point of local models?
| 1 |
[removed]
| 2025-03-30T09:19:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn8owk/what_is_the_point_of_local_models/
|
alphanumericsprawl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn8owk
| false | null |
t3_1jn8owk
|
/r/LocalLLaMA/comments/1jn8owk/what_is_the_point_of_local_models/
| false | false |
self
| 1 | null |
Removing experts from DeepSeek R1
| 1 |
[removed]
| 2025-03-30T09:43:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn900j/removing_experts_from_deepseek_r1/
|
datbackup
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn900j
| false | null |
t3_1jn900j
|
/r/LocalLLaMA/comments/1jn900j/removing_experts_from_deepseek_r1/
| false | false |
self
| 1 | null |
A good model to listen to me rant on niche topics?
| 8 |
I’ve had a good time with people’s suggestions in here when I was looking for models for different purposes, so I was hoping I could get help here again.
I’m looking for a model that’ll hear me rant on niche video game/ fiction universes and ask questions about it. The few models I’ve tested either derail too much or don’t really care about listening.
The searchbar on the huggingface site wasn’t that useful since models usually use tags on searches and I’m not that good on searching models. I’m kinda desperate now
| 2025-03-30T09:47:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn9206/a_good_model_to_listen_to_me_rant_on_niche_topics/
|
Mynameisjeff121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn9206
| false | null |
t3_1jn9206
|
/r/LocalLLaMA/comments/1jn9206/a_good_model_to_listen_to_me_rant_on_niche_topics/
| false | false |
self
| 8 | null |
No CPU used in LM Studio, but not llama-server.
| 1 |
[removed]
| 2025-03-30T09:55:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn95si/no_cpu_used_in_lm_studio_but_not_llamaserver/
|
redbook2000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn95si
| false | null |
t3_1jn95si
|
/r/LocalLLaMA/comments/1jn95si/no_cpu_used_in_lm_studio_but_not_llamaserver/
| false | false |
self
| 1 | null |
How to configure the llama-server to run on GPU only ?
| 1 |
[removed]
| 2025-03-30T10:04:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn9a55/how_to_configure_the_llamaserver_to_run_on_gpu/
|
redbook2000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn9a55
| false | null |
t3_1jn9a55
|
/r/LocalLLaMA/comments/1jn9a55/how_to_configure_the_llamaserver_to_run_on_gpu/
| false | false |
self
| 1 | null |
I built a tool to quickly connect backend APIs to LLMs – Would Love Your Feedback!
| 1 |
[removed]
| 2025-03-30T10:07:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn9b9f/i_built_a_tool_to_quickly_connect_backend_apis_to/
|
the_fore_seeken_cada
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn9b9f
| false | null |
t3_1jn9b9f
|
/r/LocalLLaMA/comments/1jn9b9f/i_built_a_tool_to_quickly_connect_backend_apis_to/
| false | false |
self
| 1 | null |
This is the Reason why I am Still Debating whether to buy RTX5090!
| 39 | 2025-03-30T10:27:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn9klk/this_is_the_reason_why_i_am_still_debating/
|
Iory1998
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn9klk
| false | null |
t3_1jn9klk
|
/r/LocalLLaMA/comments/1jn9klk/this_is_the_reason_why_i_am_still_debating/
| false | false | 39 | null |
||
Good prompts for memorizing chats like ChatGPT does
| 0 |
Hi. I'm coding a voice assistant (GitHub: [Windows/Linux](https://github.com/Edw590/VISOR---A-Voice-Assistant) and [Android](https://github.com/Edw590/VISOR---Android-Version-Assistant) in case anyone is interested) and I've coded memory integration into it. But I can't figure out how to make a prompt for it to memorize the chats decently. This is my current prompt:
User messages (in JSON): [JSON here]. Write NEW things you've learned from this specific conversation (EXCLUDING YOUR MEMORIES) in BULLET points (no + or - or anything. ONLY *). Format the output as "* [detail]". IGNORE specific, temporary events, schedules, or day-to-day plans. Summarize as KEY GENERAL information. If there is nothing, write "* 3234_NONE".
But it doesn't exclude the memories... Sometimes it does, but most times it doesn't, because I give this prompt to the model while the system prompt contains all the user memories.
So I also thought of giving this to the LLM without the user memories, so that it memorizes everything and then I "just" remove the duplicates - but the LLM decided to remove important information like my name...
Does anyone know how ChatGPT does it?
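One direction that might help (a sketch of the idea only; the model name and prompt wording are placeholders): split the job into two calls, so extraction never sees the memories, and a second pass does the de-duplication with an explicit rule about keeping identifying details:

```python
# Hedged sketch: extract candidate memories first, then de-duplicate against existing ones.
from openai import OpenAI

client = OpenAI()       # placeholder endpoint; swap in whatever backend the assistant uses
MODEL = "gpt-4o-mini"   # placeholder model name

def extract_candidates(messages_json: str) -> list[str]:
    prompt = ("List NEW, general facts learned from this conversation as lines starting with '* '. "
              "Ignore temporary events, schedules, and day-to-day plans.\n"
              "Conversation (JSON): " + messages_json)
    text = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]).choices[0].message.content
    return [line[2:].strip() for line in text.splitlines() if line.startswith("* ")]

def deduplicate(candidates: list[str], memories: list[str]) -> list[str]:
    prompt = ("Existing memories:\n" + "\n".join(memories) +
              "\n\nCandidate facts:\n" + "\n".join(candidates) +
              "\n\nReturn only the candidates that add genuinely new information, one per line. "
              "Never drop names or other identifying details just because a similar fact exists.")
    text = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]).choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]
```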
| 2025-03-30T10:49:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jn9v16/good_prompts_for_memorizing_chats_like_chatgpt/
|
DADi590
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jn9v16
| false | null |
t3_1jn9v16
|
/r/LocalLLaMA/comments/1jn9v16/good_prompts_for_memorizing_chats_like_chatgpt/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'IZ8HOoZ-DBb_2jJ0y6Y8Vy83qprZ55hjsshr9i94jDk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=108&crop=smart&auto=webp&s=3076c166083206a941c621739393d9ba305a05e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=216&crop=smart&auto=webp&s=30daf96fc2d87b01c86259c1428a05eb461c3bd4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=320&crop=smart&auto=webp&s=a3793c01f9bf219f0c5553041e91ecaa6a22520d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=640&crop=smart&auto=webp&s=16db968ef1f70d7180333f79f96e8c75d631ecf2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=960&crop=smart&auto=webp&s=0abead470d38e7bda4d4fb53f7244363428d0d7e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?width=1080&crop=smart&auto=webp&s=3a76bf05a40d97e130c714aab87e7987276d345e', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gO5wEFfhwnzOJkdIjVjndD-MI_nVDAvffij2uivFrA0.jpg?auto=webp&s=3e75f420fb95bd71212904a15a447c4e6994449c', 'width': 1280}, 'variants': {}}]}
|
F5 TTS fine tuning transcription issue
| 0 |
I tried to fine-tune F5 TTS for the Tamil language. Although the audio I used is very clear, the transcription generated by their webUI is totally different from the audio. What could be the issue? Has anyone faced this?
| 2025-03-30T11:02:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jna25f/f5_tts_fine_tuning_transcription_issue/
|
the_professor000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jna25f
| false | null |
t3_1jna25f
|
/r/LocalLLaMA/comments/1jna25f/f5_tts_fine_tuning_transcription_issue/
| false | false |
self
| 0 | null |
Can someone explain why it can write in my language and then suddenly deletes it and starts in English?
| 1 | 2025-03-30T11:17:08 |
https://v.redd.it/651xx74c9tre1
|
dwartbg9
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jna9l9
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/651xx74c9tre1/DASHPlaylist.mpd?a=1745925444%2CYWIyODk1ZGM1NTIzODY0ODFiM2IyZDhmNTgzNDk1ZTlmZDlmZTRiYmI0NzZkODFiMjg5MmU0MDVmODZkNTQ0Mg%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/651xx74c9tre1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 808, 'hls_url': 'https://v.redd.it/651xx74c9tre1/HLSPlaylist.m3u8?a=1745925444%2CYzI0MzllOTQxYjNkNDViNGFhZWViYmVjMDU1OWRjNDRmMDMwNzNhNGM0YzkyOWQ4MWUxMzNjODc2YjU4N2I4Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/651xx74c9tre1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1jna9l9
|
/r/LocalLLaMA/comments/1jna9l9/can_someone_explain_why_it_can_write_in_my/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=108&crop=smart&format=pjpg&auto=webp&s=3b29137a9e79c5661282ac4e60697210b0326c0c', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=216&crop=smart&format=pjpg&auto=webp&s=a6121e8b0752f2e44d6542f98221159ecf0e4acc', 'width': 216}, {'height': 359, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=320&crop=smart&format=pjpg&auto=webp&s=1b7496a74d53076bf4f3646ab24e1581ec6cecda', 'width': 320}, {'height': 718, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=640&crop=smart&format=pjpg&auto=webp&s=a756e0508f5168f77f2a50cfe34372a08f2d8982', 'width': 640}, {'height': 1077, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=960&crop=smart&format=pjpg&auto=webp&s=085cdd0f6ff3ec193e7115a948a521704887bc5e', 'width': 960}, {'height': 1212, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2bdff659a8559ecdd8ce7087bfa82550cf9d7513', 'width': 1080}], 'source': {'height': 1212, 'url': 'https://external-preview.redd.it/dzQ2M29ueWI5dHJlMXw6NI402wR-Fl2At5NYoq--qnHyNdd5jMyAVb-VmEal.png?format=pjpg&auto=webp&s=0846ced44e206f28ddcb58c7185016b94287ab71', 'width': 1080}, 'variants': {}}]}
|
||
Best LLM to run locally on integrated graphics?
| 0 |
I've got a laptop, with no GPU. Only integrated graphics, something like a 14th gen i7. 16GB of RAM.
I'm trying to find the best combination of settings on LM Studio, for a small LLM that can write code.
At the moment, I've got deepseek r1 running, and it produces a good output. It only takes about a minute for the laptop to process it and give me some code that I've asked for.
Has anyone else had results on integrated graphics only? What's the best LLM and settings for that?
| 2025-03-30T11:40:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnaltk/best_llm_to_run_locally_on_integrated_graphics/
|
Typical-Length-1405
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnaltk
| false | null |
t3_1jnaltk
|
/r/LocalLLaMA/comments/1jnaltk/best_llm_to_run_locally_on_integrated_graphics/
| false | false |
self
| 0 | null |
Proxmox LXC vs Ubuntu Baremetal Speed?
| 1 |
[removed]
| 2025-03-30T11:50:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnar97/proxmox_lxc_vs_ubuntu_baremetal_speed/
|
boredPampers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnar97
| false | null |
t3_1jnar97
|
/r/LocalLLaMA/comments/1jnar97/proxmox_lxc_vs_ubuntu_baremetal_speed/
| false | false |
self
| 1 | null |
How do I fine tune an LLM that mimics Reddit comments and isn't too 'AI-generated'?
| 1 |
[removed]
| 2025-03-30T12:00:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnawbl/how_do_i_fine_tune_an_llm_that_mimics_reddit/
|
xkcd690
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnawbl
| false | null |
t3_1jnawbl
|
/r/LocalLLaMA/comments/1jnawbl/how_do_i_fine_tune_an_llm_that_mimics_reddit/
| false | false |
self
| 1 | null |