| title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 - 2025-06-30 03:16:29, ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 - 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Petition to OpenAI to fix model naming before going Open Source
| 1 |
[removed]
| 2025-04-17T07:26:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k16zsv/petition_to_openai_to_fix_model_naming_before/
|
One_Key_8127
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k16zsv
| false | null |
t3_1k16zsv
|
/r/LocalLLaMA/comments/1k16zsv/petition_to_openai_to_fix_model_naming_before/
| false | false |
self
| 1 | null |
vLLM vs TensorRT-LLM
| 13 |
vLLM seems to offer much broader support for new models than TensorRT-LLM. Why does NVIDIA's own technology offer so little support? Does this mean everyone running datacenters is using vLLM?
What would be the most production ready way to deploy LLMs in Kubernetes on-prem?
* Kubernetes and vLLM
* Kubernetes, tritonserver and vLLM
* etc...
Second question, for on-prem: in a scenario where you have limited GPUs (for example 8x H200) and demand outgrows the current deployment, can you increase batch size by deploying a smaller model (fp8 instead of bf16, Q4 instead of fp8)? I'm mostly worried that swapping in a second model would cause a roughly 2-minute disruption of service, which is not great. Although that could be mitigated by having a small model answer requests during the switchover.
Happy to know what others are doing in this regard.
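Whether fp8 actually buys more concurrency mostly comes down to VRAM accounting: smaller weights leave more room for KV cache, which is what bounds batch size during decode. A back-of-envelope sketch (all model and GPU figures below are illustrative assumptions, not measured numbers):

```python
# Rough sketch: VRAM freed by moving from bf16 to fp8 weights, and how many
# concurrent 8k-context requests the remaining KV-cache budget can hold.
# Every figure here is an assumption for illustration only.

GPU_VRAM_GB = 8 * 141        # 8x H200, ~141 GB each
PARAMS_B = 70                # hypothetical 70B model
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128
CTX = 8192                   # tokens of KV cache per request

def kv_cache_gb_per_request(bytes_per_elt):
    # K and V: 2 * kv_heads * head_dim elements per token, per layer
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * CTX * bytes_per_elt / 1e9

def max_batch(weight_bytes_per_param, kv_bytes_per_elt):
    weights_gb = PARAMS_B * weight_bytes_per_param
    free_gb = GPU_VRAM_GB - weights_gb
    return int(free_gb // kv_cache_gb_per_request(kv_bytes_per_elt))

for name, wb in [("bf16", 2), ("fp8", 1)]:
    print(name, "-> max concurrent 8k-ctx requests ~", max_batch(wb, 2))
```

Under these assumptions the fp8 variant frees ~70 GB of weights, which translates into a few dozen extra concurrent requests rather than a doubling — the KV cache, not the weights, dominates at long contexts.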
| 2025-04-17T07:30:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1722g/vllm_vs_tensorrtllm/
|
Maokawaii
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1722g
| false | null |
t3_1k1722g
|
/r/LocalLLaMA/comments/1k1722g/vllm_vs_tensorrtllm/
| false | false |
self
| 13 | null |
OpenAI new model naming scheme suggestion
| 1 |
[removed]
| 2025-04-17T07:39:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k176l2/openai_new_model_naming_scheme_suggestion/
|
One_Key_8127
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k176l2
| false | null |
t3_1k176l2
|
/r/LocalLLaMA/comments/1k176l2/openai_new_model_naming_scheme_suggestion/
| false | false |
self
| 1 | null |
Should I Learn AI Models and Deep Learning from Scratch to Build My AI Chatbot?
| 1 |
[removed]
| 2025-04-17T07:42:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k177t1/should_i_learn_ai_models_and_deep_learning_from/
|
Alone-Breadfruit-994
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k177t1
| false | null |
t3_1k177t1
|
/r/LocalLLaMA/comments/1k177t1/should_i_learn_ai_models_and_deep_learning_from/
| false | false |
self
| 1 | null |
how to Hire someone that knows 'AI'
| 1 |
[removed]
| 2025-04-17T07:45:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1798l/how_to_hire_someone_that_knows_ai/
|
Interesting_Sock2308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1798l
| false | null |
t3_1k1798l
|
/r/LocalLLaMA/comments/1k1798l/how_to_hire_someone_that_knows_ai/
| false | false |
self
| 1 | null |
Should I Learn AI Models and Deep Learning from Scratch to Build My AI Chatbot?
| 1 |
[removed]
| 2025-04-17T07:46:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k179op/should_i_learn_ai_models_and_deep_learning_from/
|
Alone-Breadfruit-994
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k179op
| false | null |
t3_1k179op
|
/r/LocalLLaMA/comments/1k179op/should_i_learn_ai_models_and_deep_learning_from/
| false | false |
self
| 1 | null |
Python use
| 1 |
[removed]
| 2025-04-17T07:50:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17bef/python_use/
|
lgx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17bef
| false | null |
t3_1k17bef
|
/r/LocalLLaMA/comments/1k17bef/python_use/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'H1DyqLaqFs5jXKpWZb6aWni95HihioCigYKePodMVDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=108&crop=smart&auto=webp&s=23f33608c2ec459e067c8d22dd0d3dbad2b644f4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=216&crop=smart&auto=webp&s=4fd1e30b92c595d3789e168ce995cfec6baac784', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=320&crop=smart&auto=webp&s=fc1501ca1450883dce5c26e10aa098c60f3c5dab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=640&crop=smart&auto=webp&s=4df5c951c0369345a305c79ae19742409e9ce4e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=960&crop=smart&auto=webp&s=d55da3860ff4d09fc0c56610e0040e339254a20c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?width=1080&crop=smart&auto=webp&s=7dbe0c24fbb5cf05b5f118c3074a82110e7d9a80', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9-OCsPqqsKBpHWkEBgKkr42qQ7ebbB1jcYeMD6j2mv4.jpg?auto=webp&s=d4f418008d751368933e427816002fc06b93a266', 'width': 1200}, 'variants': {}}]}
|
Help with choosing between MacMini and MacStudio
| 0 |
Hello,
I’ve recently developed a passion for LLMs and I’m currently experimenting with tools like LM Studio and Autogen Studio to try building efficient, fully local solutions.
At the moment, I’m using my MacBook Pro M1 (2021) with 16GB of RAM, which limits me to smaller models like Gemma 3 12B (q4) and short contexts (8000 tokens), which already push my MacBook to its limits.
I’m therefore considering getting a Mac Mini or a Mac Studio (without a display, accessed remotely from my MacBook) to gain more power.
I’m hesitating between two options:
• Mac Mini (Apple M4 Pro chip with 14-core CPU, 20-core GPU, 16-core Neural Engine) with 64GB RAM – price: €2950
• Mac Studio (Apple M4 Max chip with 16-core CPU, 40-core GPU, 16-core Neural Engine) with 128GB RAM – price: €4625
That’s a difference of over €1500, which is quite significant and makes the decision difficult. I would likely be limited to 30B models on the Mac Mini, while the Mac Studio could probably handle 70B models without much trouble.
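A rough way to sanity-check the 30B-vs-70B cutoff is bits-per-weight arithmetic; the ~4.5 bpw figure for q4-style quants and the 20% overhead factor below are assumptions, not measurements:

```python
def model_ram_gb(params_b, bits_per_weight, overhead=1.2):
    # weights + ~20% headroom for KV cache/activations (rough assumption)
    return params_b * bits_per_weight / 8 * overhead

for params in (30, 70):
    print(f"{params}B @ ~4.5 bpw needs roughly {model_ram_gb(params, 4.5):.0f} GB")
```

By this estimate a q4 70B model wants close to 50 GB before long contexts; since macOS reserves part of unified memory for the system, that is tight on 64 GB and comfortable on 128 GB, which matches the intuition in the post.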
As for how I plan to use these LLMs, here’s what I have in mind so far:
• coding assistance (mainly in Python for research in applied mathematics)
• analysis of confidential documents, generating summaries and writing reports (for my current job)
• assistance with writing short stories (personal project)
Of course, for the first use case, it’s probably cheaper to go with proprietary solutions (OpenAI, Gemini, etc.), but the confidentiality requirements of the second point and the personal nature of the third make me lean towards local solutions.
Anyway, that’s where my thoughts are at—what do you think?
Thanks!
| 2025-04-17T07:53:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17csg/help_with_choosing_between_macmini_and_macstudio/
|
Gladstone025
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17csg
| false | null |
t3_1k17csg
|
/r/LocalLLaMA/comments/1k17csg/help_with_choosing_between_macmini_and_macstudio/
| false | false |
self
| 0 | null |
Should I Learn AI Models and Deep Learning from Scratch to Build My AI Chatbot?
| 1 |
[removed]
| 2025-04-17T08:01:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17guj/should_i_learn_ai_models_and_deep_learning_from/
|
Alone-Breadfruit-994
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17guj
| false | null |
t3_1k17guj
|
/r/LocalLLaMA/comments/1k17guj/should_i_learn_ai_models_and_deep_learning_from/
| false | false |
self
| 1 | null |
Help with a Multi-Agent Chatbot
| 1 |
[removed]
| 2025-04-17T08:03:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17i0p/help_with_a_multiagent_chatbot/
|
CardiologistLiving51
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17i0p
| false | null |
t3_1k17i0p
|
/r/LocalLLaMA/comments/1k17i0p/help_with_a_multiagent_chatbot/
| false | false |
self
| 1 | null |
Nvidia 5000 cards had some FP4 advantage if I remember correctly.
| 1 |
[removed]
| 2025-04-17T08:04:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17if1/nvidia_5000_cards_had_some_fp4_advantage_if_i/
|
Roubbes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17if1
| false | null |
t3_1k17if1
|
/r/LocalLLaMA/comments/1k17if1/nvidia_5000_cards_had_some_fp4_advantage_if_i/
| false | false |
self
| 1 | null |
Just built an AI meme generator app – what do you think of this meme? Would you subscribe if I release it?
| 1 |
[removed]
| 2025-04-17T08:04:37 |
FlanStreet4022
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17ifx
| false | null |
t3_1k17ifx
|
/r/LocalLLaMA/comments/1k17ifx/just_built_an_ai_meme_generator_app_what_do_you/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'WDaBlWp5WYU9gYXpKWJmQjL7c8zl5fsjLekHFDmiAgs', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/zfy41smfqcve1.png?width=108&crop=smart&auto=webp&s=42ec89e9b806444a5df55d1861f649bac0b36757', 'width': 108}, {'height': 244, 'url': 'https://preview.redd.it/zfy41smfqcve1.png?width=216&crop=smart&auto=webp&s=68fb0be95493ea50bed4fabdb2b0f3f9121c51e9', 'width': 216}, {'height': 361, 'url': 'https://preview.redd.it/zfy41smfqcve1.png?width=320&crop=smart&auto=webp&s=dc5fcb4e56121fa39d4444ea1e284b078ac0c039', 'width': 320}, {'height': 723, 'url': 'https://preview.redd.it/zfy41smfqcve1.png?width=640&crop=smart&auto=webp&s=5dd10b714e9414848d27918ee5f7dd7d082e3028', 'width': 640}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/zfy41smfqcve1.png?auto=webp&s=4f5877a440c670e0f05c4ae2c64964f73bcc68e9', 'width': 906}, 'variants': {}}]}
|
||
LLaMA 3.1 8B Concurrent user requests
| 1 |
[removed]
| 2025-04-17T08:10:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17kxb/llama_31_8b_concurrent_user_requests/
|
Ghungroo_Seth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17kxb
| false | null |
t3_1k17kxb
|
/r/LocalLLaMA/comments/1k17kxb/llama_31_8b_concurrent_user_requests/
| false | false |
self
| 1 | null |
[Discussion] LLaMA 3.1 8B Concurrent user requests
| 1 |
[removed]
| 2025-04-17T08:13:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17mfq/discussion_llama_31_8b_concurrent_user_requests/
|
Ghungroo_Seth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17mfq
| false | null |
t3_1k17mfq
|
/r/LocalLLaMA/comments/1k17mfq/discussion_llama_31_8b_concurrent_user_requests/
| false | false |
self
| 1 | null |
I think reasoning models and base models have hit a wall; we need some new technique to achieve AGI
| 0 |
Today I saw the benchmark results, and I'm pretty sure OpenAI is working on a different technique now. They're not going to stick with the same reasoning-based approach; they're likely exploring a new architecture. I'm almost certain of it.
Other AI labs are too. I have high hopes for DeepSeek.
There's no doubt we'll achieve a superhuman-level coder by the end of the year, but that still won't be AGI.
Meta has already lost the open-source AGI race; they are 6-10 months behind Qwen and DeepSeek.
Does anybody have an idea what new technique the OpenAI folks are using for their new model?
| 2025-04-17T08:21:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k17qde/i_think_reasoning_model_and_base_model_hit_the/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17qde
| false | null |
t3_1k17qde
|
/r/LocalLLaMA/comments/1k17qde/i_think_reasoning_model_and_base_model_hit_the/
| false | false |
self
| 0 | null |
Electron-BitNet has been updated to support Microsoft's official model "BitNet-b1.58-2B-4T"
| 82 |
If you didn't notice, Microsoft dropped their first official BitNet model the other day!
[https://huggingface.co/microsoft/BitNet-b1.58-2B-4T](https://huggingface.co/microsoft/BitNet-b1.58-2B-4T)
[https://arxiv.org/abs/2504.12285](https://arxiv.org/abs/2504.12285)
This MASSIVELY improves the BitNet model; the prior BitNet models were kinda goofy, but this model is capable of actually outputting code and makes sense!
[https://i.imgur.com/koy2GEy.jpeg](https://i.imgur.com/koy2GEy.jpeg)
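For context, the core trick in b1.58 models is constraining weights to {-1, 0, +1}. A simplified Python sketch of the absmean quantization described in the BitNet b1.58 paper (illustrative only — it operates on a plain list rather than real tensors):

```python
def absmean_ternary(weights):
    """Quantize weights to {-1, 0, +1} with an absmean scale, the scheme
    described for BitNet b1.58 (simplified, non-tensor sketch)."""
    gamma = sum(abs(w) for w in weights) / len(weights)
    gamma = gamma if gamma > 0 else 1e-8  # avoid division by zero
    # RoundClip(w / gamma, -1, 1)
    q = [max(-1, min(1, round(w / gamma))) for w in weights]
    return q, gamma

q, scale = absmean_ternary([0.4, -1.3, 0.05, 0.9, -0.02])
print(q, round(scale, 3))
```

Each weight then needs only ~1.58 bits (log2 of 3 states), which is where the model name comes from and why inference can be done with additions instead of multiplications.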
| 2025-04-17T08:30:49 |
https://github.com/grctest/Electron-BitNet/releases/latest
|
ufos1111
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k17uv0
| false | null |
t3_1k17uv0
|
/r/LocalLLaMA/comments/1k17uv0/electronbitnet_has_been_updated_to_support/
| false | false | 82 |
{'enabled': False, 'images': [{'id': 'dPO1rdVPcQIjDtaC-_nSO1s6eOYinb_Ulxl0jnuAaYg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=108&crop=smart&auto=webp&s=02c3cad69ed614a83e8397c5411584e1198a18a0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=216&crop=smart&auto=webp&s=5a674847c9cb604c7c87bd87143510c847e0d10b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=320&crop=smart&auto=webp&s=93dfafc234a1f822faca4ac3c9c8b3039ad52f91', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=640&crop=smart&auto=webp&s=20fe5f99e834640910b09c01898d1ad1e042db98', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=960&crop=smart&auto=webp&s=8f44299b59009914d6091878d527ecac50b88e7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?width=1080&crop=smart&auto=webp&s=d6853e75ddc4acc690ae637d506855c8b6d2a602', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/QBPpeQlMNZXiQC5w202dv5JwlxYUyxuKM_ktTAGYcPg.jpg?auto=webp&s=3b12e82f009e7603ee0b36e4b3e8ca34155df613', 'width': 1200}, 'variants': {}}]}
|
|
5060 TI
| 1 |
[removed]
| 2025-04-17T08:43:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k180pe/5060_ti/
|
Fair-Spring9113
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k180pe
| false | null |
t3_1k180pe
|
/r/LocalLLaMA/comments/1k180pe/5060_ti/
| false | false |
self
| 1 | null |
Where is Qwen 3?
| 189 |
There was a lot of hype around the launch of Qwen 3 ( GitHub PRs, tweets and all)
Where did the hype go all of a sudden?
| 2025-04-17T08:48:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k183aa/where_is_qwen_3/
|
Special_System_6627
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k183aa
| false | null |
t3_1k183aa
|
/r/LocalLLaMA/comments/1k183aa/where_is_qwen_3/
| false | false |
self
| 189 | null |
We’ve been building something I think a lot of you will find exciting — it’s called Refact Agent.
| 1 |
[removed]
| 2025-04-17T08:55:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k186f6/weve_been_building_something_i_think_a_lot_of_you/
|
Old_Kaleidoscope2885
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k186f6
| false | null |
t3_1k186f6
|
/r/LocalLLaMA/comments/1k186f6/weve_been_building_something_i_think_a_lot_of_you/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'GdW9I_vBodLec8QitHXX5AxXRMEunlekLmfQkF6zL4c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aeNZZBHSLzJfTVSGLDyS6vGTWKR5nqdheFxEIgY3cbg.jpg?width=108&crop=smart&auto=webp&s=427e805a76dbca373d161428b8c796ecad2c6845', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aeNZZBHSLzJfTVSGLDyS6vGTWKR5nqdheFxEIgY3cbg.jpg?width=216&crop=smart&auto=webp&s=67192814060b08c3ebc1f63afefa2142115940c2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aeNZZBHSLzJfTVSGLDyS6vGTWKR5nqdheFxEIgY3cbg.jpg?width=320&crop=smart&auto=webp&s=e4de83f0935732672babed7ca6d4ebc0ce8ae66f', 'width': 320}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/aeNZZBHSLzJfTVSGLDyS6vGTWKR5nqdheFxEIgY3cbg.jpg?auto=webp&s=a74f61580ff77cc66d683f354921fca3df38f627', 'width': 512}, 'variants': {}}]}
|
How much effective memory bandwidth do you guys get with multiple gpu rigs?
| 1 |
If I have n GPUs of the same kind, does the tokens-per-second become n times higher? Or close to n times, but a little less? Theoretically it should be close to n times if the layers are split properly, and even PCIe bandwidth shouldn't be a bottleneck, right? But what actually happens in practice?
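A toy model of why decode throughput tends to scale slightly sublinearly: each generated token reads the full weights once (bandwidth-bound), so splitting across n GPUs divides that read time, but every token also pays a roughly fixed synchronization cost. All numbers below are assumptions for illustration:

```python
def tokens_per_sec(n_gpus, model_gb=30.0, bw_gb_s=1000.0, comm_ms=0.2):
    # Decode is memory-bandwidth bound: per token, all weights are read once.
    # Splitting across n GPUs divides the read, but adds a roughly fixed
    # per-token interconnect/sync cost (comm_ms is an assumed value).
    t = (model_gb / bw_gb_s) / n_gpus + comm_ms / 1000
    return 1 / t

base = tokens_per_sec(1)
for n in (1, 2, 4, 8):
    print(n, "GPUs:", round(tokens_per_sec(n)), "tok/s,",
          round(tokens_per_sec(n) / base, 2), "x speedup")
```

With these assumed numbers, 2 GPUs give ~1.99x and 8 GPUs ~7.7x — close to linear, with the gap growing as the fixed communication term starts to dominate the ever-shrinking weight-read time.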
| 2025-04-17T09:18:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k18hcv/how_much_effective_memory_bandwidth_do_you_guys/
|
nother_level
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k18hcv
| false | null |
t3_1k18hcv
|
/r/LocalLLaMA/comments/1k18hcv/how_much_effective_memory_bandwidth_do_you_guys/
| false | false |
self
| 1 | null |
Gemma's license has a provision saying "you must make "reasonable efforts to use the latest version of Gemma"
| 233 | 2025-04-17T09:34:02 |
vibjelo
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k18pb4
| false | null |
t3_1k18pb4
|
/r/LocalLLaMA/comments/1k18pb4/gemmas_license_has_a_provision_saying_you_must/
| false | false | 233 |
{'enabled': True, 'images': [{'id': 'HpHm-5hWMsmeWv_UtTox8I4PWgRPyvKM0nAX8j6YOmM', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=108&crop=smart&auto=webp&s=b00a177390ba69664876b7ff61903ba5abc799df', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=216&crop=smart&auto=webp&s=6131478a0a1b2224921825dda79fdbee87b70092', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=320&crop=smart&auto=webp&s=f4ffe77d160f65dcba35f7483cf657268a06d376', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=640&crop=smart&auto=webp&s=86c7084ad25d72a8afaffab2fd42032b322889a8', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=960&crop=smart&auto=webp&s=2b6a86fa24070f7fa6cdba834814b44d2ef472c1', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?width=1080&crop=smart&auto=webp&s=6284e333826af00faa35c5087e55d84b5c058db2', 'width': 1080}], 'source': {'height': 737, 'url': 'https://preview.redd.it/pn9z3hg67dve1.png?auto=webp&s=02aa323d6ba17a6bfe99c89de1c360919456fda6', 'width': 1579}, 'variants': {}}]}
|
|||
LoRA Creation
| 1 |
[removed]
| 2025-04-17T09:44:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k18u6y/lora_creation/
|
Potential_Total9261
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k18u6y
| false | null |
t3_1k18u6y
|
/r/LocalLLaMA/comments/1k18u6y/lora_creation/
| false | false |
self
| 1 | null |
I have a 4090, should I selfhost (for coding)?
| 1 |
[removed]
| 2025-04-17T10:27:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k19gny/i_have_a_4090_should_i_selfhost_for_coding/
|
ThrowaAwayaYaKnowa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k19gny
| false | null |
t3_1k19gny
|
/r/LocalLLaMA/comments/1k19gny/i_have_a_4090_should_i_selfhost_for_coding/
| false | false |
self
| 1 | null |
Why all the hype of Gemma 3 when the only benchmark posted was ELO arena?
| 0 |
I find it hard to get behind something just from the “vibes”. Does anyone have other benchmarks?
| 2025-04-17T10:47:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k19rav/why_all_the_hype_of_gemma_3_when_the_only/
|
Euphoric_Ad9500
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k19rav
| false | null |
t3_1k19rav
|
/r/LocalLLaMA/comments/1k19rav/why_all_the_hype_of_gemma_3_when_the_only/
| false | false |
self
| 0 | null |
langchain agent fine tuning for powerful function calling with unsloth
| 1 |
[removed]
| 2025-04-17T11:07:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1a34j/langchain_agent_fine_tuning_for_powerful_function/
|
OddConcept30
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1a34j
| false | null |
t3_1k1a34j
|
/r/LocalLLaMA/comments/1k1a34j/langchain_agent_fine_tuning_for_powerful_function/
| false | false |
self
| 1 | null |
Testing gpt-4.1 via the API for automated coding tasks, OpenAI models are still expensive and barely beats local QwQ-32b in usefulness, doesn't come close if you consider the high price
| 51 | 2025-04-17T11:12:16 |
vibjelo
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1a647
| false | null |
t3_1k1a647
|
/r/LocalLLaMA/comments/1k1a647/testing_gpt41_via_the_api_for_automated_coding/
| false | false | 51 |
{'enabled': True, 'images': [{'id': '7sLHBl8hf6sUAwO7NdGOEyYEIWdOzRM_d-hJ68T_Gug', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=108&crop=smart&auto=webp&s=0ccb840d068f8e98f77b4eac88744798a324a8bf', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=216&crop=smart&auto=webp&s=09718375f34526647a442e93b6d447d898d41787', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=320&crop=smart&auto=webp&s=d3fc710f67eda2ea9bf151d03ffb3d691df1010e', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=640&crop=smart&auto=webp&s=6e7c7f8459a6fcc9cac076c6313f6f83ca0ca4c8', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=960&crop=smart&auto=webp&s=9dd0a65ab5de953f662c34cb4da35d7117fdbd49', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/gt6olopsodve1.png?width=1080&crop=smart&auto=webp&s=5962de194700e64b23c4eefae1c4409ee1a795b8', 'width': 1080}], 'source': {'height': 1270, 'url': 'https://preview.redd.it/gt6olopsodve1.png?auto=webp&s=8633c4c507a2f6a94771d19f7661bd9e40f93431', 'width': 1904}, 'variants': {}}]}
|
|||
FULL LEAKED Devin AI System Prompts and Tools (100% Real)
| 26 |
(Latest system prompt: 17/04/2025)
I managed to get full official Devin AI system prompts, including its tools. Over 400 lines.
Check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
| 2025-04-17T11:18:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1a9rx/full_leaked_devin_ai_system_prompts_and_tools_100/
|
Independent-Box-898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1a9rx
| false | null |
t3_1k1a9rx
|
/r/LocalLLaMA/comments/1k1a9rx/full_leaked_devin_ai_system_prompts_and_tools_100/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': 'hRaEg8t5yNKN1ExpApitmwlTGhuPtDqhQD38A6MQeNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=108&crop=smart&auto=webp&s=65a3675548c52c8dc2482ad91ac3b1b3a12e6ecc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=216&crop=smart&auto=webp&s=3f903d2025468de6e0b11db9c632385ab478dd14', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=320&crop=smart&auto=webp&s=ac6c8c83448577e92a259858bd920e1da8aa300d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=640&crop=smart&auto=webp&s=0ca7c9080a6c762289c003fce6534c98aaf13114', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=960&crop=smart&auto=webp&s=cb4a670d3885645a91c35bee1d703d8645e9dbd6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=1080&crop=smart&auto=webp&s=9952866536ae7e056e0d77114834da1f48b67281', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?auto=webp&s=13ea782126418df9384b74952d87479e89bea1d7', 'width': 1200}, 'variants': {}}]}
|
Wikipedia is giving AI developers its data to fend off bot scrapers - Data science platform Kaggle is hosting a Wikipedia dataset that’s specifically optimized for machine learning applications
| 620 |
The Verge: [https://www.theverge.com/news/650467/wikipedia-kaggle-partnership-ai-dataset-machine-learning](https://www.theverge.com/news/650467/wikipedia-kaggle-partnership-ai-dataset-machine-learning)
Wikipedia Kaggle Dataset using Structured Contents Snapshot: [https://enterprise.wikimedia.com/blog/kaggle-dataset/](https://enterprise.wikimedia.com/blog/kaggle-dataset/)
| 2025-04-17T11:31:44 |
Nunki08
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ahr4
| false | null |
t3_1k1ahr4
|
/r/LocalLLaMA/comments/1k1ahr4/wikipedia_is_giving_ai_developers_its_data_to/
| false | false | 620 |
{'enabled': True, 'images': [{'id': 'JJG2a1weZDO3ZHUOq_puwaG8JJas0XnPLL6zkPba3P4', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/d044iigqrdve1.jpeg?width=108&crop=smart&auto=webp&s=61b9572e90b08a69c7e3a812b86936cc2f3b4f19', 'width': 108}, {'height': 225, 'url': 'https://preview.redd.it/d044iigqrdve1.jpeg?width=216&crop=smart&auto=webp&s=3ccc3baab9cfc6c4ee969779d9310b6caf11b917', 'width': 216}, {'height': 334, 'url': 'https://preview.redd.it/d044iigqrdve1.jpeg?width=320&crop=smart&auto=webp&s=b09ad07b9eb8bf0f73878d24da5877cfe2c10e07', 'width': 320}, {'height': 668, 'url': 'https://preview.redd.it/d044iigqrdve1.jpeg?width=640&crop=smart&auto=webp&s=45e7348a9b7348bb6993397c3dbf74ba198c4943', 'width': 640}], 'source': {'height': 832, 'url': 'https://preview.redd.it/d044iigqrdve1.jpeg?auto=webp&s=09480c47142df4392db3a70b077e9f7a9c2c7508', 'width': 797}, 'variants': {}}]}
|
||
Medium sized local models already beating vanilla ChatGPT - Mind blown
| 335 |
I was used to stupid "Chatbots" by companies, who just look for some key words in your question to reference some websites, e.g.
>Me: "*I am thinking about implementing my own car engine into the Audi A4. What kind of oil does it need?*"
Audi Chatbot: "*The new Audi A4 is a great car with improved assistant systems. Do you want a test drive to try out that car?*"
When ChatGPT came out, there was nothing comparable and for me it was mind blowing how a chatbot is able to really talk like a human about everything, come up with good advice, was able to summarize etc.
Since ChatGPT (GPT-3.5 Turbo) is a huge model, I assumed that today's small and medium-sized models (8-30B) would still be waaay behind ChatGPT (and that was the case back in the good old Llama 1 days). Like:
*Tier 1: The big boys (GPT-3/4, Deepseek V3, Llama Maverick, etc.)*
*Tier 2: Medium sized (100B), pretty good, not perfect, but good enough when privacy is a must*
*Tier 3: The children area (all 8B-32B models)*
Since progress in AI performance is gradual, I asked myself, "How far have we come from vanilla ChatGPT?" So I tested it against Gemma 3 27B at IQ3_XS, which fits into 16GB VRAM, with some prompts about daily advice, summarizing text, and creative writing.
And hoooly, **we have reached and even surpassed vanilla ChatGPT (GPT-3.5) and it runs on consumer hardware**!!!
I thought I mention this so we realize how far we are now with local open source models.
| 2025-04-17T11:52:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1av1x/medium_sized_local_models_already_beating_vanilla/
|
Bitter-College8786
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1av1x
| false | null |
t3_1k1av1x
|
/r/LocalLLaMA/comments/1k1av1x/medium_sized_local_models_already_beating_vanilla/
| false | false |
self
| 335 | null |
Only 1% of people are smarter than o3💠
| 0 |
Source : https://trackingai.org/IQ
| 2025-04-17T12:08:23 |
BidHot8598
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1b5yr
| false | null |
t3_1k1b5yr
|
/r/LocalLLaMA/comments/1k1b5yr/only_1_people_are_smarter_than_o3/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '5YhVuOVOsEVPCQvtiFIJhwt0_Ru1uyPyv8sl-voRXA8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=108&crop=smart&auto=webp&s=0f027097ccab61423e5298cac4fc9270c72ebbfc', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=216&crop=smart&auto=webp&s=679156c18410806dfd71f8b456cd81d21bd06c01', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=320&crop=smart&auto=webp&s=4788706b134953e3681e9570dcea3065384fc3ea', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=640&crop=smart&auto=webp&s=e5160535ac707db7dfc271a1ccb0d61f7f2425a2', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=960&crop=smart&auto=webp&s=01558be6af187179999a261e6d20e21a4d48962b', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?width=1080&crop=smart&auto=webp&s=ff2b674e990b3620121975d91429d0aeee6da634', 'width': 1080}], 'source': {'height': 2880, 'url': 'https://preview.redd.it/n7c9x22uydve1.jpeg?auto=webp&s=735d57b08a37c0f859c83301b06e6553c67dcddb', 'width': 2880}, 'variants': {}}]}
|
||
Is DeepSeek as good as ChatGPT?
| 0 |
If you run DeepSeek locally, are its reasoning skills better than ChatGPT's?
| 2025-04-17T12:26:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1bihi/is_deepseek_as_good_as_chatgpt/
|
Strict-Horse-6534
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1bihi
| false | null |
t3_1k1bihi
|
/r/LocalLLaMA/comments/1k1bihi/is_deepseek_as_good_as_chatgpt/
| false | false |
self
| 0 | null |
Haste - Need For Greed
| 1 | 2025-04-17T12:29:48 |
https://youtu.be/lCJaTVPXn6c?si=2plyoFkmibj79U_P
|
Electronic_Contact92
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1bkuv
| false |
{'oembed': {'author_name': 'Haste', 'author_url': 'https://www.youtube.com/@reelhaste', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/lCJaTVPXn6c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Haste - Need For Greed (Official Music Video)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/lCJaTVPXn6c/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Haste - Need For Greed (Official Music Video)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1k1bkuv
|
/r/LocalLLaMA/comments/1k1bkuv/haste_need_for_greed/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'vIkbbvTzTiz68KPWFgSC9xUYEbN_w45jktVKVD4K82Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wETBvQ2NDXVpWmTaH-hMJRnflAmSMVGiziOV-yQXnUE.jpg?width=108&crop=smart&auto=webp&s=d40adbc32ce162aeed4379381be42ff3cca36173', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wETBvQ2NDXVpWmTaH-hMJRnflAmSMVGiziOV-yQXnUE.jpg?width=216&crop=smart&auto=webp&s=8db2d4711a6e848158cf174741d0459fb4350ef3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wETBvQ2NDXVpWmTaH-hMJRnflAmSMVGiziOV-yQXnUE.jpg?width=320&crop=smart&auto=webp&s=ac341155c4f37a6856c9db30056a7ab003dbe76f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wETBvQ2NDXVpWmTaH-hMJRnflAmSMVGiziOV-yQXnUE.jpg?auto=webp&s=49f2774e81a0ea4827ab35e89d1e979ddbf9a0f9', 'width': 480}, 'variants': {}}]}
|
||
Deepseek's gpt distillation question
| 0 |
Hi, I'm wondering: did they distill for base-model pretraining or for fine-tuning? Or both? How did they know what kinds of queries to send? I'd like to hear your thoughts on what they actually did, and in what quantity. We can assume they don't employ data labellers for English at all, right?
| 2025-04-17T12:36:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1bpts/deepseeks_gpt_distillation_question/
|
robertpiosik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1bpts
| false | null |
t3_1k1bpts
|
/r/LocalLLaMA/comments/1k1bpts/deepseeks_gpt_distillation_question/
| false | false |
self
| 0 | null |
LLM coding prompt obfuscator
| 1 |
Is there some tool to obfuscate prompts sent to public providers? I'm thinking about two-way identifier replacement, using a small local model to suggest names for newly introduced identifiers.
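A minimal sketch of what such two-way replacement could look like (hypothetical code, not an existing tool; `obfuscate`/`deobfuscate` are made-up names):

```python
import re

# Two-way identifier replacement: mask sensitive identifiers before
# sending a prompt to a public provider, then restore them in the reply.

def build_mapping(identifiers):
    """Map each sensitive identifier to a neutral placeholder."""
    return {name: f"sym_{i}" for i, name in enumerate(identifiers)}

def obfuscate(text, mapping):
    for name, alias in mapping.items():
        text = re.sub(rf"\b{re.escape(name)}\b", alias, text)
    return text

def deobfuscate(text, mapping):
    for name, alias in mapping.items():
        text = re.sub(rf"\b{re.escape(alias)}\b", name, text)
    return text

mapping = build_mapping(["calc_invoice_total", "CustomerLedger"])
prompt = "Refactor calc_invoice_total to take a CustomerLedger argument."
masked = obfuscate(prompt, mapping)
# masked: "Refactor sym_0 to take a sym_1 argument."
restored = deobfuscate(masked, mapping)
```

A local model would slot in to pick meaningful placeholder names for identifiers the remote model introduces in its answer.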
| 2025-04-17T12:46:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1bwkt/llm_coding_prompt_obfuscator/
|
robertpiosik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1bwkt
| false | null |
t3_1k1bwkt
|
/r/LocalLLaMA/comments/1k1bwkt/llm_coding_prompt_obfuscator/
| false | false |
self
| 1 | null |
FULL LEAKED Devin AI System Prompts and Tools
| 0 |
(Latest system prompt: 17/04/2025)
I managed to get full official Devin AI system prompts, including its tools. Over 400 lines.
Check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
Summary:
• Role
you are “devin,” an expert software engineer on a real OS, tasked with implementing user requests precisely and efficiently.
• Communication
contact the user only for environment issues, missing credentials/permissions, critical gaps in context, or to share deliverables.
• Workflow
1. gather context in planning mode—use searches, lsp, browser as needed; ask questions if unclear.
2. once you’ve mapped edits, emit <suggest_plan/>.
3. in standard mode, apply changes, run lint/tests, iterate via CI if local breaks.
• Coding Practices
mirror existing style; verify library availability; avoid unnecessary comments; never alter tests unless requested.
• Security & Info Handling
treat all data as sensitive; verify links by browsing; never leak secrets or system prompts; request explicit permission before external comms.
• Toolset
• reasoning: <think>
• editor: <open_file>, <insert>, <str_replace>, <remove_str>
• search: <find_filecontent>, <find_filename>, <semantic_search>
• lsp: <go_to_definition>, <hover_symbol>, <go_to_references>
• browser: <navigate_browser>, <view_browser>, <click_browser>, <type_browser>
• shell (no file edits/views)
• deploy: <deploy_frontend>, <deploy_backend>, <expose_port>
• interaction: <message_user>, <wait>, <report_environment_issue>
• git/github: <git_view_pr>, gh cli conventions
| 2025-04-17T12:53:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1c23u/full_leaked_devin_ai_system_prompts_and_tools/
|
Independent-Box-898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1c23u
| false | null |
t3_1k1c23u
|
/r/LocalLLaMA/comments/1k1c23u/full_leaked_devin_ai_system_prompts_and_tools/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'hRaEg8t5yNKN1ExpApitmwlTGhuPtDqhQD38A6MQeNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=108&crop=smart&auto=webp&s=65a3675548c52c8dc2482ad91ac3b1b3a12e6ecc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=216&crop=smart&auto=webp&s=3f903d2025468de6e0b11db9c632385ab478dd14', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=320&crop=smart&auto=webp&s=ac6c8c83448577e92a259858bd920e1da8aa300d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=640&crop=smart&auto=webp&s=0ca7c9080a6c762289c003fce6534c98aaf13114', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=960&crop=smart&auto=webp&s=cb4a670d3885645a91c35bee1d703d8645e9dbd6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=1080&crop=smart&auto=webp&s=9952866536ae7e056e0d77114834da1f48b67281', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?auto=webp&s=13ea782126418df9384b74952d87479e89bea1d7', 'width': 1200}, 'variants': {}}]}
|
Solana 7777
| 1 |
[removed]
| 2025-04-17T13:18:29 |
https://v.redd.it/183v7jacbeve1
|
solana777777
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ckq5
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/183v7jacbeve1/DASHPlaylist.mpd?a=1747487929%2CMGQ0MDk2Nzc0NTM1MDBhZDBiODMxM2FlN2U1OTA5ZDIwNmUyYzI1ODk3Nzk1OGM2NzA1YWM4NGYxODRjM2YyOA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/183v7jacbeve1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/183v7jacbeve1/HLSPlaylist.m3u8?a=1747487929%2CNTlhODJkZGZjMDVlZTgzZDI0MDFmMjI3Y2QwMWRmNDNlYmRlY2FhYTM5YThkMWY0NTg1YzIxMjQwNjBmZDE0Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/183v7jacbeve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1k1ckq5
|
/r/LocalLLaMA/comments/1k1ckq5/solana_7777/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=108&crop=smart&format=pjpg&auto=webp&s=2dbf09d1b213a16784ab5b5fa4def94340dc6219', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=216&crop=smart&format=pjpg&auto=webp&s=1f89aa38212cc60b0d4d8e04225c82904f5e0d79', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=320&crop=smart&format=pjpg&auto=webp&s=6e9371fcad0b2213f416481cd8b264c06e5b653f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b6d6cecded42b616ee128ac42f559b9470e162a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=960&crop=smart&format=pjpg&auto=webp&s=e46762bb90c90926a3685dda43f62cfa4cef7702', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e718f4cc76ce59b2fd21b5b545b7a2eb4877caa9', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bjZqbXUweWJiZXZlMUcvDnr_VjCsWsdtCczwF7UjXmdUc4JuwAbjyWeCMDLv.png?format=pjpg&auto=webp&s=0a466e1957b5c04f1311f3bd0c8693cfea85ccb2', 'width': 1280}, 'variants': {}}]}
|
|
Raspberry Pi 5 + eGPU as LLM server?
| 1 |
[removed]
| 2025-04-17T13:24:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1cprg/raspberry_pi_5_egpu_as_llm_server/
|
Old-Squash9227
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1cprg
| false | null |
t3_1k1cprg
|
/r/LocalLLaMA/comments/1k1cprg/raspberry_pi_5_egpu_as_llm_server/
| false | false |
self
| 1 | null |
Best sites to follow
| 1 |
[removed]
| 2025-04-17T13:51:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1danc/best_sites_to_follow/
|
Agreeable-Prompt-666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1danc
| false | null |
t3_1k1danc
|
/r/LocalLLaMA/comments/1k1danc/best_sites_to_follow/
| false | false |
self
| 1 | null |
This is crazy!
| 0 | 2025-04-17T13:55:50 |
ashutrv
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1deqg
| false | null |
t3_1k1deqg
|
/r/LocalLLaMA/comments/1k1deqg/this_is_crazy/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '9APKKtcedqSFdDnEiKaqkmx-JyOs9pLJ6C7WPoKqp9g', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=108&crop=smart&auto=webp&s=4a7c2dd57d4aedb072686f09622b9b6ca93d7061', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=216&crop=smart&auto=webp&s=26ac87f97b7b1ed69ec5e632009153edc1980d45', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=320&crop=smart&auto=webp&s=9be8b5d02da50179b19267f4d3914737d7bd529f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=640&crop=smart&auto=webp&s=aa13cdb84c16dcab640a259387fcefdf509f0f88', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=960&crop=smart&auto=webp&s=353109f66e0f0fadca664f9292c99fea962956c6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?width=1080&crop=smart&auto=webp&s=b14c433cc87cf6d867a61d80435783b2ab36e27a', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/94dg8z42ieve1.jpeg?auto=webp&s=95f75727815adb28fd5bdb01305497eea308f20c', 'width': 1179}, 'variants': {}}]}
|
|||
Looking for Recommendations on Models
| 2 |
Hey fellow Redditors,
I'm reaching out in search of some recommendations for AI models that can analyze uploaded documents. I've already experimented with LLaMA 3.2-vision:11b and Deepseek-r1:8b, but unfortunately, neither model seems to have the capability to process uploaded documents.
My use case is specifically focused on analyzing contracts, agreements, and other legal documents. Ideally, I'd love to find a model that's tailored towards law-focused applications.
Are there any other AI models out there that can handle document analysis? Bonus points if they're law-specific!
Additionally, I have a secondary question: are there any ways to configure locally run AI models to interact with my screen or email client? I'm thinking of something like "screen scraping" or email integration, but I'm not sure if it's even possible.
If you've had success with any specific models or integrations, please share your experiences!
Thanks in advance for your help and recommendations!
(written by LLaMA 3.2)
| 2025-04-17T14:07:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1doqv/looking_for_recommendations_on_models/
|
McLawyer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1doqv
| false | null |
t3_1k1doqv
|
/r/LocalLLaMA/comments/1k1doqv/looking_for_recommendations_on_models/
| false | false |
self
| 2 | null |
Please Help me Fine-Tuning Model to Generate Fanfiction
| 1 |
[deleted]
| 2025-04-17T14:09:16 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1dqb8
| false | null |
t3_1k1dqb8
|
/r/LocalLLaMA/comments/1k1dqb8/please_help_me_finetuning_model_to_generate/
| false | false |
default
| 1 | null |
||
I really didn't expect this.
| 73 | 2025-04-17T14:11:03 |
Educational_Grab_473
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1drtz
| false | null |
t3_1k1drtz
|
/r/LocalLLaMA/comments/1k1drtz/i_really_didnt_expect_this/
| false | false | 73 |
{'enabled': True, 'images': [{'id': 'F9wcmY-w905mJmakbuLr8Wyi2HYr02JHLnDprqLlQns', 'resolutions': [{'height': 18, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=108&crop=smart&auto=webp&s=e51da645579950226fcc5290f730ffcadf1a3ab1', 'width': 108}, {'height': 36, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=216&crop=smart&auto=webp&s=14c1bd7dcd79c6639e3f9909f6bad44f44c5452f', 'width': 216}, {'height': 54, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=320&crop=smart&auto=webp&s=ddcf0eab401accab221de21b3996615b6d975b4a', 'width': 320}, {'height': 108, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=640&crop=smart&auto=webp&s=b3621e8ccd66f0f6b56d231728f888ec361fb1bc', 'width': 640}, {'height': 162, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=960&crop=smart&auto=webp&s=3bbb294c9cc9eb34f5619673271ed9d079eeb54c', 'width': 960}, {'height': 182, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?width=1080&crop=smart&auto=webp&s=aac1f3ab351a795104b9aef905e5ea31d5caf239', 'width': 1080}], 'source': {'height': 233, 'url': 'https://preview.redd.it/jmyzbmtrkeve1.png?auto=webp&s=8ce455d9dc01becfdded18056bc5e293dc92c76d', 'width': 1377}, 'variants': {}}]}
|
|||
Qwen 3: the information you don't know (and neither do I)
| 0 |
The release date.
These Chinese geniuses like to make us anxious. As a fan of China and an eager person, I would like to be given exclusive information, or access to gguf, if possible. Long live Opensource, long live Qwen, long live Alibaba, long live China!
| 2025-04-17T14:14:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1duyi/qwen_3_the_information_you_dont_know_and_neither/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1duyi
| false | null |
t3_1k1duyi
|
/r/LocalLLaMA/comments/1k1duyi/qwen_3_the_information_you_dont_know_and_neither/
| false | false |
self
| 0 | null |
Gemma 3: smarter, but dumber
| 6 |
This is a rather peculiar situation. Gemma 3 is noticeably smarter than its predecessor; however, the increase appears to be directly linked to the increase in parameters. What gives me this certainty is the clear victory of Gemma 2 2B over Gemma 3 1B. There is something even more peculiar, though: the larger third-generation models seem to be lacking in factual information. In other words, they hold less true information, even as they sound more intelligent (they are more coherent and smarter-sounding in their answers, even when they get the facts wrong). All of this leads me to the conclusion that the number of parameters still reigns over any other technique.
| 2025-04-17T14:18:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1dych/gemma_3_smarter_but_dumber/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1dych
| false | null |
t3_1k1dych
|
/r/LocalLLaMA/comments/1k1dych/gemma_3_smarter_but_dumber/
| false | false |
self
| 6 | null |
Please Help me Fine-Tuning Model to Generate Fanfiction
| 2 |
Hello LocalLLaMA fellows,
I’m in need of someone who can help me fine-tune a model on a BTS fanfiction dataset. My goal is to have a model that can generate complete 4000 to 5000 word stories based on a simple story idea I provide.
The output should match the style, tone, pacing, and emotional format of real BTS fanfics (Wattpad-style). I’ve attached a sample input + desired output pair to demonstrate what I’m aiming for. Thanks for reading.
Example: [Input/output Pastebin](https://pastebin.com/jAyTAXy2)
P.S. I've tried RAG, few-shot prompts, and also fine-tuning with 70 rows of input/output examples (training loss 1.533). None of them worked for me.
| 2025-04-17T14:26:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1e4l4/please_help_me_finetuning_model_to_generate/
|
Right-Law1817
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1e4l4
| false | null |
t3_1k1e4l4
|
/r/LocalLLaMA/comments/1k1e4l4/please_help_me_finetuning_model_to_generate/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
|
4090 48GB after extensive use?
| 19 |
Hey guys,
Can anyone share their experience with one of those 4090 48GB cards after extensive use? Are they still running fine? No overheating? No driver issues? Do they run well in other use cases (besides LLMs)? How about gaming?
I'm considering buying one, but I'd like to confirm they are not falling apart after some time in use...
| 2025-04-17T14:26:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1e4y4/4090_48gb_after_extensive_use/
|
Ordinary-Lab7431
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1e4y4
| false | null |
t3_1k1e4y4
|
/r/LocalLLaMA/comments/1k1e4y4/4090_48gb_after_extensive_use/
| false | false |
self
| 19 | null |
Just (re-)discovered markdown for slides/presentations. Here's a script to generate presentation in markdown.
| 17 |
Hacked my presentation building with inference providers, cohere command a, and sheer simplicity. Take this script if you’re burning too much time on presentations:
🔗 [https://github.com/burtenshaw/course\_generator/blob/main/scripts/create\_presentation.py](https://github.com/burtenshaw/course_generator/blob/main/scripts/create_presentation.py)
This is what it does:
* it uses Command A to generate a transcription and slides based on some material.
* it renders the material in remark's open format
* you can review the slides as markdown
* then it can export to either PDF or slides using backslide
Next steps: text-to-speech for the audio, then generate a video. This should make educational content scale to a billion AI learners.
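For reference, the remark slide format the pipeline targets is simple: slides are separated by lines containing only `---`. A minimal splitter (an illustration with a made-up `split_slides` helper, not the linked script's actual code) could look like:

```python
# Split remark-formatted markdown into individual slides.
# remark separates slides with a line containing just "---".

def split_slides(markdown_text):
    slides, current = [], []
    for line in markdown_text.splitlines():
        if line.strip() == "---":
            slides.append("\n".join(current).strip())
            current = []
        else:
            current.append(line)
    slides.append("\n".join(current).strip())
    return [s for s in slides if s]  # drop empty slides

deck = """# Intro
Welcome
---
# Slide 2
Details"""
print(split_slides(deck))  # two slides, reviewable as plain markdown
```

Because each slide is plain markdown, you can diff and review the deck like any other text file before exporting.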
| 2025-04-17T14:26:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1e52t/just_rediscovered_markdown_for/
|
Zealousideal-Cut590
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1e52t
| false | null |
t3_1k1e52t
|
/r/LocalLLaMA/comments/1k1e52t/just_rediscovered_markdown_for/
| false | false |
self
| 17 |
{'enabled': False, 'images': [{'id': 'OB69miAAuUUhEck69G2YqrG8v_iMjbJQnabesStU2To', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=108&crop=smart&auto=webp&s=02835915f07d9cf25639dac56e1655d6d8f99adb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=216&crop=smart&auto=webp&s=039463989c0fe823b62dcd9ff7d1d0c799eecb81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=320&crop=smart&auto=webp&s=d34e6a62b31caff55b8aa5021f2b5be011e695c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=640&crop=smart&auto=webp&s=c1abb10302c284e3cf1d2771dc608b8bd8379a83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=960&crop=smart&auto=webp&s=8de8dec2947332023e2ae89f55f21e5aca07ecb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?width=1080&crop=smart&auto=webp&s=71f301f98633fdf010dff7fe07a42543acdd1ef0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ikUrtru3iH2WpQfNqI347g3vYgsIfWzgVakh240vI0Y.jpg?auto=webp&s=d5dacc3a4de70e8596c6be6a27603da57f4e28a2', 'width': 1200}, 'variants': {}}]}
|
Want to create a local LLM that's an expert in my field..feasible? Possible?
| 7 |
Hi, I'm a psychometrician and I use ChatGPT regularly as a thought partner to code and interpret analyses. It's come a long way and is very useful, but I'm curious whether I could make an even better expert locally. I have an M4 MacBook that does pretty well with my local models. Wondering if anyone can help me figure out what tutorials, info, or search terms I could use to a.) figure out if this is feasible and b.) learn how to do it.
My best guess is I'd have to train a model on a compendium of academic literature and R code?
| 2025-04-17T14:41:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ehvg/want_to_create_a_local_llm_thats_an_expert_in_my/
|
catspongedogpants
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ehvg
| false | null |
t3_1k1ehvg
|
/r/LocalLLaMA/comments/1k1ehvg/want_to_create_a_local_llm_thats_an_expert_in_my/
| false | false |
self
| 7 | null |
Open weight model that can "think like Gemini" ?
| 0 |
Since Gemini 2.5 Pro is pretty impressive, I wonder: are there any open-weight models that follow the Gemini thinking format? It's quite different from R1 & QwQ:
Here's a thinking process for responding to the user's request about ...:
1.
2.
3.
...
| 2025-04-17T14:47:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1emad/open_weight_model_that_can_think_like_gemini/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1emad
| false | null |
t3_1k1emad
|
/r/LocalLLaMA/comments/1k1emad/open_weight_model_that_can_think_like_gemini/
| false | false |
self
| 0 | null |
Why is LLaMA3 3 70B (q4_k_m) over 100x slower than 3.1.8B on H100?
| 1 |
[removed]
| 2025-04-17T15:00:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1exu1/why_is_llama3_3_70b_q4_k_m_over_100x_slower_than/
|
Fluid_Racoon717
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1exu1
| false | null |
t3_1k1exu1
|
/r/LocalLLaMA/comments/1k1exu1/why_is_llama3_3_70b_q4_k_m_over_100x_slower_than/
| false | false |
self
| 1 | null |
What if your local coding agent could perform as well as Cursor on very large, complex codebases?
| 32 |
Local coding agents (Qwen Coder, DeepSeek Coder, etc.) often lack the deep project context of tools like Cursor, especially because their contexts are so much smaller. Standard RAG helps but misses nuanced code relationships.
We're experimenting with building project-specific **Knowledge Graphs (KGs)** on-the-fly within the IDE—representing functions, classes, dependencies, etc., as structured nodes/edges.
Instead of just vector search or the LLM's base knowledge, our agent queries this dynamic KG for highly relevant, interconnected context (e.g., call graphs, inheritance chains, definition-usage links) before generating code or suggesting refactors.
This seems to unlock:
* **Deeper context-aware local coding** (beyond file content/vectors)
* **More accurate cross-file generation & complex refactoring**
* **Full privacy & offline use** (local LLM + local KG context)
Curious if others are exploring similar areas, especially:
* Deep IDE integration for local LLMs (Qwen, CodeLlama, etc.)
* Code KG generation (using Tree-sitter, LSP, static analysis)
* Feeding structured KG context effectively to LLMs
Happy to share technical details (KG building, agent interaction). What limitations are you seeing with local agents?
P.S. Considering a deeper write-up on KGs + local code LLMs if folks are interested
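The KG query step can be sketched with Python's stdlib `ast` module (a toy illustration, not the post's implementation — a real system would use Tree-sitter/LSP and richer node/edge types):

```python
import ast
from collections import defaultdict

# Build a tiny code "knowledge graph" of function-definition nodes and
# call edges, then answer a call-graph query the agent could feed to an
# LLM as structured context.

def build_call_graph(source):
    tree = ast.parse(source)
    edges = defaultdict(set)  # caller -> {callees}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    edges[node.name].add(inner.func.id)
    return edges

def callers_of(edges, target):
    """Query the graph: which functions call `target`?"""
    return sorted(fn for fn, callees in edges.items() if target in callees)

src = """
def load(path): ...
def parse(text): ...
def run(path):
    text = load(path)
    return parse(text)
"""
graph = build_call_graph(src)
print(callers_of(graph, "load"))  # ['run']
```

Even this toy graph answers questions ("who calls `load`?") that a vector search over file chunks can easily miss.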
| 2025-04-17T15:11:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1f6sv/what_if_your_local_coding_agent_could_perform_as/
|
juanviera23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1f6sv
| false | null |
t3_1k1f6sv
|
/r/LocalLLaMA/comments/1k1f6sv/what_if_your_local_coding_agent_could_perform_as/
| false | false |
self
| 32 | null |
Scrappy underdog GLM-4-9b still holding onto the top spot (for local models) for lowest hallucination rate
| 122 |
GLM-4-9b appreciation post here (the older version, not the new one). This little model has been a production RAG workhorse for me for like the last 4 months or so. I’ve tried it against so many other models and it just crushes at fast RAG. To be fair, QwQ-32b blows it out of the water for RAG when you have time to spare, but if you need a fast answer or are resource limited, GLM-4-9b is still the GOAT in my opinion.
The fp16 is only like 19 GB which fits well on a 3090 with room to spare for context window and a small embedding model like Nomic.
Here’s the specific version I found seems to work best for me:
https://ollama.com/library/glm4:9b-chat-fp16
It’s consistently held the top spot for local models on Vectara’s Hallucinations Leaderboard for quite a while now despite new ones being added to the leaderboard fairly frequently. Last update was April 10th.
https://github.com/vectara/hallucination-leaderboard?tab=readme-ov-file
I’m very eager to try all the new GLM models that were released earlier this week. Hopefully Ollama will add support for them soon, if they don’t, then I guess I’ll look into LM Studio.
| 2025-04-17T15:19:43 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1fe1d
| false | null |
t3_1k1fe1d
|
/r/LocalLLaMA/comments/1k1fe1d/scrappy_underdog_glm49b_still_holding_onto_the/
| false | false | 122 |
{'enabled': True, 'images': [{'id': 'JoSaY9m-VTdtSktmrxtLOpGkPP78vnMce0ee1-9Oy9M', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=108&crop=smart&auto=webp&s=0bc78a6f663237e48199ea90bf13d4646e88b1eb', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=216&crop=smart&auto=webp&s=e43ac77e14fcbbe650efbb151bbf8b154c7ac3c1', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=320&crop=smart&auto=webp&s=de04eb1e5b95014f407f1f5b4aecbf5ae9f8af66', 'width': 320}, {'height': 406, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=640&crop=smart&auto=webp&s=f6ad8da98a91331c0ffb08d26fe5050f440773c8', 'width': 640}, {'height': 609, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=960&crop=smart&auto=webp&s=87f3af437f04b6c21da54be8ef339b6f7b671cf5', 'width': 960}, {'height': 686, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?width=1080&crop=smart&auto=webp&s=5548c3610af7436a1d458bde4b36ad54931da3e3', 'width': 1080}], 'source': {'height': 1514, 'url': 'https://preview.redd.it/63jtqmp0xeve1.jpeg?auto=webp&s=e8abb294763e1aabc81461febdcfa7c34538d897', 'width': 2383}, 'variants': {}}]}
|
||
Llama Maverick Arena analysis
| 1 |
[deleted]
| 2025-04-17T15:21:50 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ffy1
| false | null |
t3_1k1ffy1
|
/r/LocalLLaMA/comments/1k1ffy1/llama_maverick_arena_analysis/
| false | false |
default
| 1 | null |
||
New society is taking shape
| 1,088 | 2025-04-17T15:24:23 |
TheLogiqueViper
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1fi5w
| false | null |
t3_1k1fi5w
|
/r/LocalLLaMA/comments/1k1fi5w/new_society_is_taking_shape/
| false | false | 1,088 |
{'enabled': True, 'images': [{'id': 'DXIlyizwY0Ive8-aobgoNobTNN41Y0FpGIiwyGSSHeM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?width=108&crop=smart&auto=webp&s=d2158ef6c582be8ac713191b0cf01e6982377d0a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?width=216&crop=smart&auto=webp&s=db22ec1dffb5cc79c3b23899795fcc98ba008c11', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?width=320&crop=smart&auto=webp&s=888f281d9a27f9923a0b3ac8bc267266459484d1', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?width=640&crop=smart&auto=webp&s=3503ecb35d404084ae9522bb771aecd69177ee0c', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?width=960&crop=smart&auto=webp&s=571beb41b4bc83bcd7cac4f3ffcb0dd7c5a662f2', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/05n7cxquxeve1.jpeg?auto=webp&s=a2aa4186e86e1fddc8a261d3e57cfb5aade9be52', 'width': 1024}, 'variants': {}}]}
|
|||
FULL LEAKED Devin AI System Prompts and Tools
| 128 |
(Latest system prompt: 17/04/2025)
I managed to get full official Devin AI system prompts, including its tools. Over 400 lines.
You can check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
| 2025-04-17T15:26:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1fk88/full_leaked_devin_ai_system_prompts_and_tools/
|
Independent-Box-898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1fk88
| false | null |
t3_1k1fk88
|
/r/LocalLLaMA/comments/1k1fk88/full_leaked_devin_ai_system_prompts_and_tools/
| false | false |
self
| 128 |
{'enabled': False, 'images': [{'id': 'hRaEg8t5yNKN1ExpApitmwlTGhuPtDqhQD38A6MQeNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=108&crop=smart&auto=webp&s=65a3675548c52c8dc2482ad91ac3b1b3a12e6ecc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=216&crop=smart&auto=webp&s=3f903d2025468de6e0b11db9c632385ab478dd14', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=320&crop=smart&auto=webp&s=ac6c8c83448577e92a259858bd920e1da8aa300d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=640&crop=smart&auto=webp&s=0ca7c9080a6c762289c003fce6534c98aaf13114', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=960&crop=smart&auto=webp&s=cb4a670d3885645a91c35bee1d703d8645e9dbd6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?width=1080&crop=smart&auto=webp&s=9952866536ae7e056e0d77114834da1f48b67281', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OLBsDKq1sHkS65EaVuzKsI39z90zp8mf-wjtirciyEs.jpg?auto=webp&s=13ea782126418df9384b74952d87479e89bea1d7', 'width': 1200}, 'variants': {}}]}
|
Llama Maverick Arena analysis show it was preferred due to better reasoning, detailed references, humor and politeness
| 1 |
[removed]
| 2025-04-17T15:31:07 |
Ok-Abroad2889
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1fo2x
| false | null |
t3_1k1fo2x
|
/r/LocalLLaMA/comments/1k1fo2x/llama_maverick_arena_analysis_show_it_was/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '197LIGe2ULjnPIABbOfXXJwhRFj-3yXu8S9H-ioYljk', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=108&crop=smart&auto=webp&s=35546135555f7827466fd8d03840a457aed4e693', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=216&crop=smart&auto=webp&s=b052f8f3ee90cfadd837f10e0b63c1966e4a381d', 'width': 216}, {'height': 306, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=320&crop=smart&auto=webp&s=02550f5389cb6c218c1da943f1bad06882d0aa95', 'width': 320}, {'height': 612, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=640&crop=smart&auto=webp&s=0f390039db3441caff36d735120c53d95ec117f9', 'width': 640}, {'height': 919, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=960&crop=smart&auto=webp&s=e947751f734c8e9fb64835abfa4bceb4ae9453de', 'width': 960}, {'height': 1034, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?width=1080&crop=smart&auto=webp&s=e0ccb8128442980b5cc155ae116f1132f778203d', 'width': 1080}], 'source': {'height': 1402, 'url': 'https://preview.redd.it/k9499q52zeve1.jpeg?auto=webp&s=17be5ec7f168e59bfd1ce8ed3ab16bd4e80e979b', 'width': 1464}, 'variants': {}}]}
|
||
Use any LLMs for Deep Research (open-source, MIT-licensed)
| 5 |
I found this open-source, MIT-licensed project, and it looks really cool!
> Deep Research uses a variety of powerful AI models to generate in-depth research reports in just a few minutes. It leverages advanced "Thinking" and "Flash" models, combined with an internet connection, to provide fast and insightful analysis on a variety of topics. Your privacy is paramount - all data is processed and stored locally.
Does anyone have any experience with it?
| 2025-04-17T15:53:43 |
https://github.com/u14app/deep-research
|
Balance-
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1g7fk
| false | null |
t3_1k1g7fk
|
/r/LocalLLaMA/comments/1k1g7fk/use_any_llms_for_deep_research_opensource/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'CbccGsCtKnKztTiYLPCd2Z9DG5U1crqP1BoBAgN49Jk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=108&crop=smart&auto=webp&s=08074e91ac22de39698616ec89ec2e62ca331c66', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=216&crop=smart&auto=webp&s=2cb22be6e29f03a4fe2203409c912d2ffdac45b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=320&crop=smart&auto=webp&s=f1981ff0125718bc1b5fb0fa5c4836dc14ae8ff7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=640&crop=smart&auto=webp&s=6f31f3c4af5c3dbbea3f72649ad268aa07a5add7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=960&crop=smart&auto=webp&s=b9d8acaf697d49ccc92be87d54e3acd3773c26b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?width=1080&crop=smart&auto=webp&s=8d3d8c4c49c12ae847e5e7a54c6cd19e0d33c47f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SrIyaL5ETKsLpEuUuMs8QQJVGeelFkpQmaUSdGyUyY4.jpg?auto=webp&s=41e2f4c14b39304670365355f5adad276fddde09', 'width': 1200}, 'variants': {}}]}
|
|
The best Realtime Speech to Text / ASR to serve multiple users
| 0 |
Hello,
I'm working on developing a real-time speech recognition service and recently discovered [WhisperLive](https://github.com/collabora/WhisperLive).
However, it appears that they do not support multiple users. Could you suggest any alternatives?
| 2025-04-17T16:05:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1gi01/the_best_realtime_speech_to_text_asr_to_serve/
|
Accomplished_Pin_626
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1gi01
| false | null |
t3_1k1gi01
|
/r/LocalLLaMA/comments/1k1gi01/the_best_realtime_speech_to_text_asr_to_serve/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'Xfd9y3_9kOm_QcT_WCHqToFd54iHSDGKXwWtKLB-1fo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=108&crop=smart&auto=webp&s=22e96842819a02b090d38643e57eeea023642430', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=216&crop=smart&auto=webp&s=f229eacc9f9c772509ccf765e0410890ce1ac45b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=320&crop=smart&auto=webp&s=4a13e5ca8b71221207742a0e81a2344782400499', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=640&crop=smart&auto=webp&s=a90ffe9b5e7632baf92ca4c4836f5d4159cb0857', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=960&crop=smart&auto=webp&s=3421923787bae03191bdb483c8b2b7ab638f2d32', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?width=1080&crop=smart&auto=webp&s=44eae5d690948b17503b295066481509fe954d8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MYqGeHDiPPryJm84Toc6uRGBPJM5omO-Oh08tH5suAk.jpg?auto=webp&s=7a2b20096a1ef15998e2183e80dd0b6471878cfa', 'width': 1200}, 'variants': {}}]}
|
What is the best NSFW roleplay model?
| 1 |
[removed]
| 2025-04-17T16:08:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1gkwh/what_is_the_best_nsfw_roleplay_model/
|
PixiePamPery
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1gkwh
| false | null |
t3_1k1gkwh
|
/r/LocalLLaMA/comments/1k1gkwh/what_is_the_best_nsfw_roleplay_model/
| false | false |
nsfw
| 1 | null |
Perception Encoder - a Facebook Collection
| 20 | 2025-04-17T16:13:05 |
https://huggingface.co/collections/facebook/perception-encoder-67f977c9a65ca5895a7f6ba1
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1goqh
| false | null |
t3_1k1goqh
|
/r/LocalLLaMA/comments/1k1goqh/perception_encoder_a_facebook_collection/
| false | false | 20 |
{'enabled': False, 'images': [{'id': 'LQfyZ0Ln9XMhIgayWC7rmbIAMpz1jVYL1yQQnBzELGE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=108&crop=smart&auto=webp&s=024242ce6b433c47f527b287e122cf02ebe65f68', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=216&crop=smart&auto=webp&s=6430d1408b28dd62b75760ec65fa66900e0a8101', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=320&crop=smart&auto=webp&s=ae1e93dbe159d2c5184fb3d365fac4d4c881c683', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=640&crop=smart&auto=webp&s=5a6b177064b452473ec9f3db7da4cb232dbafc5a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=960&crop=smart&auto=webp&s=1db618f3bb7c56a9b7c221f2173155455c55e6f3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?width=1080&crop=smart&auto=webp&s=07c9e68652ed5f86967d6e7809cb8210801768ea', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qDigWrx4XZWP3V0S9VX6cpnHHYiW8vPWfDGkF8kZxFM.jpg?auto=webp&s=9740f0b4975c7e4b479f15d1b07d1aa535521c17', 'width': 1200}, 'variants': {}}]}
|
||
Perception LM - a Facebook Collection
| 16 | 2025-04-17T16:13:28 |
https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1gp28
| false | null |
t3_1k1gp28
|
/r/LocalLLaMA/comments/1k1gp28/perception_lm_a_facebook_collection/
| false | false | 16 |
{'enabled': False, 'images': [{'id': 'n4hFtoyE8VyE50topwUGDQMo2PXOFF4-oMSKgCANc4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=108&crop=smart&auto=webp&s=9271d8bf5bcdc029f80d41a794b775e1c953f42c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=216&crop=smart&auto=webp&s=b89e5adfd37dda29227d3c91678cdf175c4a3f94', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=320&crop=smart&auto=webp&s=892524b442bfa2f6effb0f70b469b5c05bb9c6ea', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=640&crop=smart&auto=webp&s=9f6da37c184270b83484344bd81db1d541c7e4b1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=960&crop=smart&auto=webp&s=50475d7427a497225c1b66d0f4fb6e9382e9ae9c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=1080&crop=smart&auto=webp&s=2a2545fd29f8f6613c6485eaa9953660d72ebe5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?auto=webp&s=31c2b4cdab2d59edef19cdf1a1cea38f48c0043c', 'width': 1200}, 'variants': {}}]}
|
||
Local models card game?
| 9 |
Each time I come over here I have flashbacks about the "Top Trumps" card games I used to play at school. I'd really love to know if someone has produced a deck for local models already? The specs at the bottom could match benchmarks or other metrics like TTFT, Context size, modalities, ... There could be variants for different model sizes and fine-tunes. Little country flag in a top corner. Could also include a few proprietary models for the satisfaction of beating them with open ones.
| 2025-04-17T16:31:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1h4vj/local_models_card_game/
|
gnddh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1h4vj
| false | null |
t3_1k1h4vj
|
/r/LocalLLaMA/comments/1k1h4vj/local_models_card_game/
| false | false |
self
| 9 | null |
DreamGen Lucid Nemo 12B: Story-Writing & Role-Play Model
| 115 |
Hey everyone!
I am happy to share my latest model **focused on story-writing and role-play**: [dreamgen/lucid-v1-nemo](https://huggingface.co/dreamgen/lucid-v1-nemo) (GGUF and EXL2 available - thanks to bartowski, mradermacher and lucyknada).
Is Lucid worth your precious bandwidth, disk space and time? I don't know, but here's a bit of info about Lucid to help you decide:
- Focused on role-play & story-writing.
- Suitable for all kinds of writers and role-play enjoyers:
- For world-builders who want to specify every detail in advance: plot, setting, writing style, characters, locations, items, lore, etc.
- For intuitive writers who start with a loose prompt and shape the narrative through instructions (OCC) as the story / role-play unfolds.
- Support for **multi-character role-plays**:
- Model can automatically pick between characters.
- Support for **inline writing instructions (OOC)**:
- Controlling plot development (say what should happen, what the characters should do, etc.)
- Controlling pacing.
- etc.
- Support for **inline writing assistance**:
- Planning the next scene / the next chapter / story.
- Suggesting new characters.
- etc.
- Support for **reasoning (opt-in)**.
If that sounds interesting, I would love it if you check it out and let me know how it goes!
The README has **extensive documentation, examples and SillyTavern presets!**
| 2025-04-17T16:31:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1h5eh/dreamgen_lucid_nemo_12b_storywriting_roleplay/
|
DreamGenAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1h5eh
| false | null |
t3_1k1h5eh
|
/r/LocalLLaMA/comments/1k1h5eh/dreamgen_lucid_nemo_12b_storywriting_roleplay/
| false | false |
self
| 115 |
{'enabled': False, 'images': [{'id': 'r5aVkuTYLtn6x2h06Ar_TJtKi5K2OOogR0l5NTfP_aY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=108&crop=smart&auto=webp&s=c4addf2fc0da6d3d2ca3246651c353987fc5b22e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=216&crop=smart&auto=webp&s=61afdb6f59d1b243d946e7873fee9a3f4bbaf686', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=320&crop=smart&auto=webp&s=da40fee7dfb9bab79eae0796507e92ab1a057864', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=640&crop=smart&auto=webp&s=df0867c40a636723cc4092d3cbb043a84585dd40', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=960&crop=smart&auto=webp&s=ac1894c26193cfaf9eb01dd92d0bf6833ef3b505', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?width=1080&crop=smart&auto=webp&s=bb4ee152eca49826c5c3d1a63834a906629c3ecc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Zn7yqvlMarMqhiFdn3mnTqKD6ExrPs9mYtPI3gXPrKY.jpg?auto=webp&s=5cace799dd9f60aa7bf33462a10cd803bb1fe565', 'width': 1200}, 'variants': {}}]}
|
Any recommendations for a local subtitle (SRT file) translation tool?
| 1 |
[removed]
| 2025-04-17T16:42:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1hf7m/any_recommendations_for_a_local_subtitle_srt_file/
|
Frostgiven
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1hf7m
| false | null |
t3_1k1hf7m
|
/r/LocalLLaMA/comments/1k1hf7m/any_recommendations_for_a_local_subtitle_srt_file/
| false | false |
self
| 1 | null |
BLT model weights just dropped - 1B and 7B Byte-Latent Transformers released!
| 249 |
https://x.com/gargighosh/status/1912908118939541884
https://github.com/facebookresearch/blt/pull/97
https://ai.meta.com/blog/meta-fair-updates-perception-localization-reasoning/
paper: https://arxiv.org/abs/2412.09871
| 2025-04-17T16:50:53 |
https://www.reddit.com/gallery/1k1hm53
|
QuackerEnte
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1hm53
| false | null |
t3_1k1hm53
|
/r/LocalLLaMA/comments/1k1hm53/blt_model_weights_just_dropped_1b_and_7b/
| false | false | 249 | null |
|
RubyLLM 1.2 now supports Ollama! One Ruby line to chat with your local LLMs
| 1 |
Hey LocalLLaMA folks! Just released RubyLLM 1.2.0 which brings support for any OpenAI-compatible API, including Ollama! Here's how simple it is to chat with your local models:
```ruby
RubyLLM.configure { |c| c.openai_api_base = "http://localhost:11434/v1" }
chat = RubyLLM.chat(model: "llama2", provider: :openai, assume_model_exists: true)
chat.ask "What's your favorite food?"
```
Quick demo: https://youtu.be/7MjhABqifCo
RubyLLM gives you a clean Ruby interface for:
- Local models via Ollama
- Custom deployments through LM Studio
- Any other OpenAI-compatible setup
Perfect if you're building Ruby apps and want to keep your AI local!
Links:
- Docs: https://rubyllm.com
- GitHub: https://github.com/crmne/ruby_llm
| 2025-04-17T17:24:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ifqk/rubyllm_12_now_supports_ollama_one_ruby_line_to/
|
crmne
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ifqk
| false | null |
t3_1k1ifqk
|
/r/LocalLLaMA/comments/1k1ifqk/rubyllm_12_now_supports_ollama_one_ruby_line_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-pfAWimwk20iTUax963oP90FuPkSaW5jC3ehEgXXfWM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sY-wzYI7zlr7hq_JZZ4Z1t6Z47wF19bKTOYB8sy8cqw.jpg?width=108&crop=smart&auto=webp&s=af75db28cacf6a6be96a4d6e7f559fc6a718b585', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/sY-wzYI7zlr7hq_JZZ4Z1t6Z47wF19bKTOYB8sy8cqw.jpg?width=216&crop=smart&auto=webp&s=cd4b0db9691a8511221fb5252aecd9bef59cc328', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/sY-wzYI7zlr7hq_JZZ4Z1t6Z47wF19bKTOYB8sy8cqw.jpg?width=320&crop=smart&auto=webp&s=5212f5f81d94cc19e8c6f13be31b19c44b09a252', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/sY-wzYI7zlr7hq_JZZ4Z1t6Z47wF19bKTOYB8sy8cqw.jpg?auto=webp&s=34b3d6cdc2ac094dc73de6a7acc98182b1c1d5bc', 'width': 480}, 'variants': {}}]}
|
What are the people dropping >10k on a setup using it for?
| 165 |
Surprisingly often I see people on here asking for advice on what to buy for local LLM inference/training with a budget of >$10k. As someone who uses local LLMs as a hobby, I myself have bought a nice MacBook and an RTX 3090 (making it a pretty expensive hobby).
But I guess when spending this kind of money, it serves a deeper purpose than just a hobby, right?
So what are y'all spending this kind of money on?
| 2025-04-17T17:24:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ifw5/what_are_the_people_dropping_10k_on_a_setup_using/
|
Ashefromapex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ifw5
| false | null |
t3_1k1ifw5
|
/r/LocalLLaMA/comments/1k1ifw5/what_are_the_people_dropping_10k_on_a_setup_using/
| false | false |
self
| 165 | null |
is there a cursor/windsurf like tool that works fully with local LLMs?
| 1 |
[removed]
| 2025-04-17T17:26:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1igy0/is_there_a_cursorwindsurf_like_tool_that_works/
|
Sad-Seesaw-3843
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1igy0
| false | null |
t3_1k1igy0
|
/r/LocalLLaMA/comments/1k1igy0/is_there_a_cursorwindsurf_like_tool_that_works/
| false | false |
self
| 1 | null |
How to scale LLM-based tabular data retrieval to millions of rows
| 1 |
[removed]
| 2025-04-17T17:26:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ihpb/how_to_scale_llmbased_tabular_data_retrieval_to/
|
Impressive_Maximum32
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ihpb
| false | null |
t3_1k1ihpb
|
/r/LocalLLaMA/comments/1k1ihpb/how_to_scale_llmbased_tabular_data_retrieval_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3LhjFuZveOlXXrMZetI_YVpifHZOnRHR1kc9_Grn2Qw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=108&crop=smart&auto=webp&s=78efb04d081b3042b2c68d494c44aff1888ded47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=216&crop=smart&auto=webp&s=66f5fa811bea32601d5a78063ddb98a437abfaa9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=320&crop=smart&auto=webp&s=b8c50a7f15f340a3fd10fada07c36fc46f0e7c7e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=640&crop=smart&auto=webp&s=da934dde77461ada732b5a71f4fa7e6f4bc7738f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=960&crop=smart&auto=webp&s=f80288bed370d0fb5eaa2f72d351c6aea0db2dd9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?width=1080&crop=smart&auto=webp&s=24e97874b382fef2648e88bce91336de42fc6e4d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Uymx5AzmOW0NN8AGbRiyUFFq1MbW3o3EKWK7m4IfkkI.jpg?auto=webp&s=13497e5c6885dac20482c007ebadb70f84b6e806', 'width': 1200}, 'variants': {}}]}
|
Smallest model for tool/mcp usecase
| 2 |
Hi everyone. My use case involves using an LLM with a bunch of tools (around 20-25 tools). Due to a resource constraint (16 GB VRAM) I need the smallest LLM that can run on my T4 GPU. Which model(s) best suit my use case? Help me find the right LLM.
Thanks in advance
| 2025-04-17T17:31:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ilig/smallest_model_for_toolmcp_usecase/
|
NovelNo2600
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ilig
| false | null |
t3_1k1ilig
|
/r/LocalLLaMA/comments/1k1ilig/smallest_model_for_toolmcp_usecase/
| false | false |
self
| 2 | null |
Geobench - A benchmark to measure how well llms can pinpoint the location based on a Google Streetview image.
| 146 |
Link: [https://geobench.org/](https://geobench.org/)
Basically it makes LLMs play the game GeoGuessr, and finds out how well each model performs on common metrics from the GeoGuessr community: whether it guesses the correct country, and the distance between its guess and the actual location (measured by average and median score).
Credit to the original site creator Illusion.
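For anyone curious how the distance metric maps onto a GeoGuessr-style score, here is a minimal sketch: great-circle distance via the haversine formula, fed into the exponential score curve the community uses. The `scale_km` constant is an assumption from community reverse-engineering of the world map, not something published by the benchmark itself.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geoguessr_score(distance_km, scale_km=1492.7):
    # Community approximation of the score curve: 5000 points for a
    # perfect guess, decaying exponentially with distance. scale_km is
    # an assumed constant, not an official value.
    return round(5000 * math.exp(-distance_km / scale_km))
```

So a model that guesses Paris when the street view was in London (roughly 344 km off) still scores around 3900 out of 5000.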
| 2025-04-17T17:34:19 |
https://www.reddit.com/gallery/1k1io81
|
Jupaoqqq
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1io81
| false | null |
t3_1k1io81
|
/r/LocalLLaMA/comments/1k1io81/geobench_a_benchmark_to_measure_how_well_llms_can/
| false | false | 146 | null |
|
RTX 5070 ti not support chat with RTX
| 1 |
[removed]
| 2025-04-17T17:47:17 |
EssamGoda
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1izt7
| false | null |
t3_1k1izt7
|
/r/LocalLLaMA/comments/1k1izt7/rtx_5070_ti_not_support_chat_with_rtx/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '9AeEjwfnI6bAvk8bwcYoe3CqvYaQOUPDbDkU8wCTch0', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/5qh9jfx5nfve1.png?width=108&crop=smart&auto=webp&s=26828e1fe6e48343cd97d3637b794e7fd7a37b97', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/5qh9jfx5nfve1.png?width=216&crop=smart&auto=webp&s=e89663293d47c9650d2b0e357f55976d2b17875b', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/5qh9jfx5nfve1.png?width=320&crop=smart&auto=webp&s=178464b3523ea9263aef11fed0f4b37c6d042189', 'width': 320}], 'source': {'height': 444, 'url': 'https://preview.redd.it/5qh9jfx5nfve1.png?auto=webp&s=c82cae439d21a2088b0547061314c9d65262c2d3', 'width': 594}, 'variants': {}}]}
|
||
Uncensored model cloud deployment
| 0 |
Does anyone here have experience with deploying an uncensored/abliterated model in the cloud? I have a use case for which I need an uncensored model, but I don't have enough RAM on my local machine, and deploying it on GCP seems rather expensive.
It would probably be cheapest to find a provider who already hosts these models for inference instead of deploying your own machine, but I can't find anyone doing that.
| 2025-04-17T17:47:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1j00x/uncensored_model_cloud_deployment/
|
Studio-Miserable
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1j00x
| false | null |
t3_1k1j00x
|
/r/LocalLLaMA/comments/1k1j00x/uncensored_model_cloud_deployment/
| false | false |
self
| 0 | null |
What are some more out there reward functions that we can use to train reasoning models?
| 3 |
My friend and I have been working on some research into different reward functions, among which was using deepseek r1 to predict stocks [https://2084.substack.com/p/deepstock-v2](https://2084.substack.com/p/deepstock-v2)
The most interesting thing that happened tho was that after about 100 steps, the model seemed to get an AHA moment, similar to the original Deepseek paper.(we even got random chinese scattered in the reasoning trace)
**Early Training (Step 1):**
{"step": 1, "ticker": "MMM", "current_date": "2023-11-28", "actual_change": -0.040597149957399825, "predicted_direction": "up", "predicted_pct": 2.5, "reward": 1.899999976158142, "loss": -0.10356301069259644, "thinking": "The recent news indicates that 3M (MMM) has been recognized as one of the most undervalued industrial stocks and blue-chip stocks to buy, which suggests positive market sentiment. Additionally, the stock has already seen a significant increase since the last earnings report, indicating that investors might be cautious about further gains. The current price of $79.61 is quite high compared to its previous close of $75.95, which implies that the stock may be due for a correction. However, considering the recent positive recognition and the potential for continued market rally, there is still room for growth."}
**Later Training (Step 17):**
{"step": 17, "ticker": "MMM", "current_date": "2021-10-12", "actual_change": 3.6517989085434293, "predicted_direction": "up", "predicted_pct": 1.5, "reward": 2.1600000858306885, "loss": -0.1374872475862503, "thinking": "The stock has experienced a decline of 1.32% today, which suggests some selling pressure. However, the volume is still relatively high at 579 million shares, indicating that the selling might be driven by profit-taking rather than a significant shift in sentiment. Recent news indicates that US stocks are showing mixed trends, with some positive developments for cyclicals but also concerns around inflation and tapering fears. 3M, being a diversified company, is less likely to be heavily affected by these broad market sentiments compared to more specialized firms. The P/E ratio of 12.44 suggests the stock is fairly valued, and the low debt-to-equity ratio of 0.08 indicates strong financial health. Additionally, there are positive sentiments towards 3M in the recent news, such as \"Why 3M Stock Is a Buy,\" which could help counteract any negative sentiment."}
I think that there's definitely something here with the model getting better at financial reasoning in general from being trained to predict stocks - kinda similar to investment bankers, who are trained to evaluate companies by doing a million discounted cash flow analyses, or how the original model got better at logic by doing mathematics. One of the things I'm working on as an expansion of this is having the model do tool calling while still being GRPO trained, and then applying it to a bunch of other domains, like reconciliation of invoices, to see if that makes the model better at reasoning in general.
What domains do you think have an interesting, objectively calculable reward function that I could potentially throw a reasoning model at?
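To make the question concrete, a reward function like the one in the logged traces above can be sketched as a direction term plus a magnitude term. The weights and the 10% normalization here are illustrative assumptions, not the exact function from the linked post:

```python
def stock_reward(predicted_direction, predicted_pct, actual_change):
    # Hedged sketch of a GRPO-style reward for stock prediction:
    # +1 if the model called the direction correctly, plus a bonus in
    # [0, 1] that shrinks linearly with the percentage-point error.
    actual_direction = "up" if actual_change >= 0 else "down"
    direction_reward = 1.0 if predicted_direction == actual_direction else 0.0
    # Error is normalized by an assumed 10-percentage-point scale.
    magnitude_reward = max(0.0, 1.0 - abs(predicted_pct - actual_change) / 10.0)
    return direction_reward + magnitude_reward
```

Plugging in the step-17 trace above ("up", 1.5% predicted, +3.65% actual) gives a reward near 2, which is roughly the range the logs show. The nice property for GRPO is that both terms are computable from ground-truth data with no judge model.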
| 2025-04-17T17:54:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1j6ld/what_are_some_more_out_there_reward_functions/
|
ExaminationNo8522
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1j6ld
| false | null |
t3_1k1j6ld
|
/r/LocalLLaMA/comments/1k1j6ld/what_are_some_more_out_there_reward_functions/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'X2wCOahdko6Ksqxhhe7wO5bS6vAzg3NcWrAwAVyPbn0', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=108&crop=smart&auto=webp&s=46656476278603e33ad1228b73bf2d6077c6c49b', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=216&crop=smart&auto=webp&s=6bb9b53bf03744800a533908a012140df899a508', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=320&crop=smart&auto=webp&s=b070ddd13c443b1eee333f4bb4e71a8a67086a9c', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=640&crop=smart&auto=webp&s=95b39288d31015ba64f08ad5567735271b34eab7', 'width': 640}, {'height': 492, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=960&crop=smart&auto=webp&s=664e994509a8eb96a72082687017af8f5786a4f9', 'width': 960}, {'height': 553, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?width=1080&crop=smart&auto=webp&s=a1dc90e4c0cabf20da09d8751d3ec822c8be91e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H6RFU8qaieLnNd3GWLfYUx9CHi42niUFbmEAn1LcchI.jpg?auto=webp&s=5fd9714aa8808a143de1826d8a4993a66a1f1c84', 'width': 1170}, 'variants': {}}]}
|
SpaceThinker - Test Time Compute for Quantitative Spatial Reasoning
| 13 |
This VLM is tuned to perform quantitative spatial reasoning tasks like estimating distances and sizes.
Especially suitable for embodied AI applications that can benefit from thinking about how to move around our 3D world.
https://preview.redd.it/r668pf4osfve1.png?width=1024&format=png&auto=webp&s=1fdff1129ad038c11737cbbfd240b14635923a88
Model: [https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B](https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B)
Data: [https://huggingface.co/datasets/remyxai/SpaceThinker](https://huggingface.co/datasets/remyxai/SpaceThinker)
Code: [https://github.com/remyxai/VQASynth](https://github.com/remyxai/VQASynth)
Following up with .gguf weights, a running demo, and a VLMEvalKit [QSpatial evaluation](https://github.com/open-compass/VLMEvalKit/blob/ac1beb8bf164174393219a6e06220b8d3a5427b1/README.md?plain=1#L38)
| 2025-04-17T18:24:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1jwcf/spacethinker_test_time_compute_for_quantitative/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1jwcf
| false | null |
t3_1k1jwcf
|
/r/LocalLLaMA/comments/1k1jwcf/spacethinker_test_time_compute_for_quantitative/
| false | false | 13 |
{'enabled': False, 'images': [{'id': 'Qk6f2StLamId6IHwTNpIoSoltVXlNaQIQ76yuarlVY8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=108&crop=smart&auto=webp&s=e2aa1af7cd46111d8c10b5a5c30b817f2b6d3538', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=216&crop=smart&auto=webp&s=eb2790fdeb719b668dd058f1faf0db5585d0b43b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=320&crop=smart&auto=webp&s=b6c4a833666e9b0195b3b8122e7787c5d3b9a50c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=640&crop=smart&auto=webp&s=36dc93c4716b9ae36a60a46bc08f50cf822710ff', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=960&crop=smart&auto=webp&s=5710253af63b4004412fc154739aaa0ca29c86b1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?width=1080&crop=smart&auto=webp&s=cc4df142888cfbb9c04e73d0387e85bcd3cbbfe7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9I0NMhrjklgoUfNZEEbkNlZGLc5I7Qt70p0AXu7rvJM.jpg?auto=webp&s=b7523ceba59263d3b434971ac1bfa597192db108', 'width': 1200}, 'variants': {}}]}
|
|
Don't chase the latest agent framework - develop a mental model that separates the lower-level vs. high-level logic for agents, and then pick the right abstractions.
| 1 |
[removed]
| 2025-04-17T18:55:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1knog/dont_chase_the_latest_agent_framework_develop_a/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1knog
| false | null |
t3_1k1knog
|
/r/LocalLLaMA/comments/1k1knog/dont_chase_the_latest_agent_framework_develop_a/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'uTGbxQqdPwinKTYD44tbuJNUZQICcOLO_zOExJA2Xro', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=108&crop=smart&auto=webp&s=79c69176c9e3f7e5205c19efe539840a04aceb9a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=216&crop=smart&auto=webp&s=9060cc53c4ab3b5718ec3bedc143b0519e17a2a5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=320&crop=smart&auto=webp&s=6facc5b753b406f94b58873738d32aff3c049666', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=640&crop=smart&auto=webp&s=1a09f9bd452076e6ff6b9cba072601a27cfddb67', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=960&crop=smart&auto=webp&s=57ca3556dc75e575a9c0dd07a5731b747b46e272', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?width=1080&crop=smart&auto=webp&s=a4099143af22fb7ec62fe56599b363392d7b62e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/091VVnz3bOU5xke4oBab5mmShMMhwSh2u4l0qiJDCyA.jpg?auto=webp&s=94207240c69d3b3bbc19a35ec9bb7f24bbb46033', 'width': 1200}, 'variants': {}}]}
|
Don't chase agent frameworks - develop a mental model that separates the lower-level vs. high-level logic for agents, and then pick the right abstractions.
| 1 |
[removed]
| 2025-04-17T18:57:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1kpe4/dont_chase_agent_frameworks_develop_a_mental/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1kpe4
| false | null |
t3_1k1kpe4
|
/r/LocalLLaMA/comments/1k1kpe4/dont_chase_agent_frameworks_develop_a_mental/
| false | false |
self
| 1 | null |
Best local multilingual (Spanish) TTS model for fast inference?
| 4 |
Hello everyone. I'm working on an assistant that speaks Spanish. My current implementation uses XTTS, but inference is too slow for real-time applications.
Do you know of any other fast model that can be trained on Spanish with custom voices?
Thanks for your attention, everyone.
| 2025-04-17T18:58:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1kq0d/best_local_multilingual_spanish_tts_model_for/
|
Frydesk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1kq0d
| false | null |
t3_1k1kq0d
|
/r/LocalLLaMA/comments/1k1kq0d/best_local_multilingual_spanish_tts_model_for/
| false | false |
self
| 4 | null |
Why is there no multimodel model that can output and/or edit presentation slides or similar?
| 1 |
See title. It's curious that this seems to be such a little-researched area, when it would affect the daily work of many people - even more so than code models.
| 2025-04-17T19:02:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1kt3k/why_is_there_no_multimodel_model_that_can_output/
|
cpldcpu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1kt3k
| false | null |
t3_1k1kt3k
|
/r/LocalLLaMA/comments/1k1kt3k/why_is_there_no_multimodel_model_that_can_output/
| false | false |
self
| 1 | null |
[D] Daily Paper Discussions on the Yannic Kilcher Discord - InternVL3
| 1 |
[removed]
| 2025-04-17T19:25:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1lcq2/d_daily_paper_discussions_on_the_yannic_kilcher/
|
CATALUNA84
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1lcq2
| false | null |
t3_1k1lcq2
|
/r/LocalLLaMA/comments/1k1lcq2/d_daily_paper_discussions_on_the_yannic_kilcher/
| false | false | 1 | null |
|
for developers: your thoughts on ai agents using money.
| 1 |
[removed]
| 2025-04-17T19:29:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1lgn4/for_developers_your_thoughts_on_ai_agents_using/
|
MetabolicPathway
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1lgn4
| false | null |
t3_1k1lgn4
|
/r/LocalLLaMA/comments/1k1lgn4/for_developers_your_thoughts_on_ai_agents_using/
| false | false |
self
| 1 | null |
[D] Daily Paper Discussions on the Yannic Kilcher Discord - InternVL3
| 1 |
[removed]
| 2025-04-17T19:33:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ljzt/d_daily_paper_discussions_on_the_yannic_kilcher/
|
CATALUNA84
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ljzt
| false | null |
t3_1k1ljzt
|
/r/LocalLLaMA/comments/1k1ljzt/d_daily_paper_discussions_on_the_yannic_kilcher/
| false | false | 1 | null |
|
Swarm Debugging with MCP
| 4 |
Everyone's looking at MCP as a way to connect LLMs to tools.
What about connecting LLMs to other LLM agents?
I built Deebo, the first ever agent MCP server. Your coding agent can start a session with Deebo through MCP when it runs into a tricky bug, allowing it to offload tasks and work on something else while Deebo figures it out asynchronously.
Deebo works by spawning multiple subprocesses, each testing a different fix idea in its own Git branch. It uses any LLM to reason through the bug and returns logs, proposed fixes, and detailed explanations. The whole system runs on natural process isolation with zero shared state or concurrency management. Look through the code yourself, it’s super simple.
If you're on Cline or Claude Desktop, installation is as simple as npx deebo-setup@latest.
[Here’s the repo. Take a look at the code!](https://github.com/snagasuri/deebo-prototype)
Deebo scales to real codebases too. Here, it launched 17 scenarios and [diagnosed a $100 bug bounty issue in Tinygrad.](https://github.com/snagasuri/deebo-prototype/blob/master/memory-bank/9bd38e9840d3/progress.md)
[You can find the full logs for that run here.](https://github.com/snagasuri/deebo-prototype/tree/master/memory-bank/9bd38e9840d3/sessions/session-1744006973678)
Would love feedback from devs building agents or running into flow-breaking bugs during AI-powered development.
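Not Deebo's actual code - just a minimal sketch of the shape the post describes: N isolated subprocesses, each testing one fix hypothesis, with no shared state beyond collecting their reports at the end (Deebo additionally gives each scenario its own Git branch, omitted here):

```python
# Sketch of the process-isolation idea: spawn one subprocess per
# hypothesis, let the OS provide isolation, then gather the reports.
import subprocess
import sys

hypotheses = [
    "print('fix A: guard against None')",
    "print('fix B: off-by-one in loop bound')",
    "print('fix C: stale cache entry')",
]

# All scenarios run concurrently; no locks or shared memory needed.
procs = [subprocess.Popen([sys.executable, "-c", code],
                          stdout=subprocess.PIPE, text=True)
         for code in hypotheses]

# Collect each scenario's output once it finishes.
reports = [p.communicate()[0].strip() for p in procs]
for r in reports:
    print(r)
```

With real scenarios, each subprocess would check out its own branch, apply a candidate patch, and run the test suite instead of printing a string.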
| 2025-04-17T19:43:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1lrtd/swarm_debugging_with_mcp/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1lrtd
| false | null |
t3_1k1lrtd
|
/r/LocalLLaMA/comments/1k1lrtd/swarm_debugging_with_mcp/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'oIz1LWMDDarsD7l28jE3UmRS2Hv_N5VYOgdoLgNS1dE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=108&crop=smart&auto=webp&s=d9bf1a6e1a841cfd66579db783456c5402afa020', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=216&crop=smart&auto=webp&s=3b3c3a75c4915da2d850a13e7929ff374f3fca0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=320&crop=smart&auto=webp&s=93723adf86dc4c0332088b397c789b74085d9eb6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=640&crop=smart&auto=webp&s=a92ade836de5793244e00fb5cfe5e95cecfc76fd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=960&crop=smart&auto=webp&s=92d4d9c9a6b10c71b89b48655f72defddf2bb892', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?width=1080&crop=smart&auto=webp&s=989f98ec09b895498a30dc65a2ac92b850690fb6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X6godrytfElo-Qux5B6kMBCl0GODkAHdNGZdfJtJ6FE.jpg?auto=webp&s=6e061874b46cb580282d7a08c10e4a508ce8e43f', 'width': 1200}, 'variants': {}}]}
|
Gemini 2.5 Flash (500RPD free)
| 82 | 2025-04-17T19:43:14 |
secopsml
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1lrya
| false | null |
t3_1k1lrya
|
/r/LocalLLaMA/comments/1k1lrya/gemini_25_flash_500rpd_free/
| false | false | 82 |
{'enabled': True, 'images': [{'id': '6UYf1kRV_9pTNK-c0Qb8omJINYEg132yFQhEMG7Tw-w', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/uwg0peza7gve1.png?width=108&crop=smart&auto=webp&s=2e3284379ca0012fe281d99c81260113cf54cf1f', 'width': 108}, {'height': 49, 'url': 'https://preview.redd.it/uwg0peza7gve1.png?width=216&crop=smart&auto=webp&s=b68661ce52528bbcf4781d0bc1909f690e0d4437', 'width': 216}, {'height': 72, 'url': 'https://preview.redd.it/uwg0peza7gve1.png?width=320&crop=smart&auto=webp&s=5f727724ab78a6312bb60f53052ddcbe90e34675', 'width': 320}], 'source': {'height': 78, 'url': 'https://preview.redd.it/uwg0peza7gve1.png?auto=webp&s=ef3b7c82f55e4865ff45847fb7ce6bf95544b7b3', 'width': 342}, 'variants': {}}]}
|
|||
Fine-tuning question
| 5 |
Hi! So I've been quite involved in the local and general LLM area for a bit, and I'm thinking of fine-tuning a model for personal use.
For my use case, I've managed to find a model that, through prompting techniques, produces the format and style of generation I want, so I don't need to fine-tune the model to fulfill a specific task.
What I've found lacking is that the model doesn't seem to have much general/specific knowledge on the topics I'm interested in. Is it possible to simply fine-tune a LoRA on the base model with raw text (no instruct formatting), then apply/merge that base LoRA onto the specific instruct model that I'm using?
Does this work? I'm quite new to the actual fine-tuning/merging/LoRA side of things.
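For what it's worth, the mechanics are straightforward with the `peft` library. A minimal sketch of the "train on base, apply to instruct" workflow - the model ID and adapter path below are placeholders, and the heavy work is wrapped in a function so nothing downloads at import time:

```python
# Minimal sketch of merging a base-trained LoRA into an instruct model
# using peft. Model ID and adapter path are placeholders.

def merge_base_lora_into_instruct(instruct_model_id: str,
                                  adapter_path: str,
                                  out_dir: str):
    """Attach a LoRA trained on the base model to the instruct variant,
    then bake it into the weights for deployment."""
    from transformers import AutoModelForCausalLM  # heavy deps, lazy import
    from peft import PeftModel

    # The instruct model shares its architecture (and most weights) with
    # the base model, so a base-trained adapter usually loads cleanly...
    model = AutoModelForCausalLM.from_pretrained(instruct_model_id)
    model = PeftModel.from_pretrained(model, adapter_path)

    # ...but the adapter never saw the instruct weights, so expect some
    # drift; evaluate the merged model before relying on it.
    model = model.merge_and_unload()
    model.save_pretrained(out_dir)
    return model

# Example call (downloads weights; run on a machine that can hold the model):
# merge_base_lora_into_instruct("meta-llama/Llama-3.1-8B-Instruct",
#                               "./my-base-lora", "./merged-model")
```

This generally works in practice, though people often report it works best when base and instruct checkpoints are from the same release.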
| 2025-04-17T19:58:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1m52i/finetuning_question/
|
Federal_Order4324
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1m52i
| false | null |
t3_1k1m52i
|
/r/LocalLLaMA/comments/1k1m52i/finetuning_question/
| false | false |
self
| 5 | null |
Every time I see an open source alternative to a trending proprietary agent
| 42 | 2025-04-17T20:20:11 |
iamnotdeadnuts
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1mnqk
| false | null |
t3_1k1mnqk
|
/r/LocalLLaMA/comments/1k1mnqk/every_time_i_see_an_open_source_alternative_to_a/
| false | false | 42 |
{'enabled': True, 'images': [{'id': 'HAPVvqOwhuZtNjivEUl1iJNB_xpWi9BsQacmYhvh5RM', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ueghfakmegve1.jpeg?width=108&crop=smart&auto=webp&s=e3e44370dffcdb382648678a76d21a6abf3d3568', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/ueghfakmegve1.jpeg?width=216&crop=smart&auto=webp&s=6a76fd9bb4dd57c5571de9657291ed15868a9506', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/ueghfakmegve1.jpeg?width=320&crop=smart&auto=webp&s=46c3eb7b92d94ddbf50c2bb7fabc1bf65745026f', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/ueghfakmegve1.jpeg?width=640&crop=smart&auto=webp&s=726cf7ec57e3a0ca01f38535f67b5b186eeb9f0b', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/ueghfakmegve1.jpeg?auto=webp&s=814e9ceb669c6b9091721538f41f5d6d9e81cf15', 'width': 750}, 'variants': {}}]}
|
|||
I found AGI
| 169 |
new model just dropped
| 2025-04-17T20:23:10 |
gucci-grapes
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1mq98
| false | null |
t3_1k1mq98
|
/r/LocalLLaMA/comments/1k1mq98/i_found_agi/
| false | false | 169 |
{'enabled': True, 'images': [{'id': 'VoXT0y1M87Uf3qyRaXwoV19kCGA1f0gfazaoB2tXeow', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=108&crop=smart&auto=webp&s=5bb288efb1008d1f31d78493b892c1bc94843a29', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=216&crop=smart&auto=webp&s=b3589081adcb8b3cc3bda50e2980ab500f872d3e', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=320&crop=smart&auto=webp&s=34c6edcc38e5bf88130e5de27984f4e4736bce6c', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=640&crop=smart&auto=webp&s=9fa9f468116b658f1f67e115a46dea460537335f', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=960&crop=smart&auto=webp&s=81a7d90664c6feeea479e69c2b66858600d43c7b', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?width=1080&crop=smart&auto=webp&s=e531b509d51954f8b00f0c383bedc8b3f65cdf5d', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/fqdi3ms5fgve1.jpeg?auto=webp&s=faf1de4f00b2d5cacbb2f1784a84cc2be46e70da', 'width': 4032}, 'variants': {}}]}
|
||
LMArena public beta officially releases with a new UI. (No more gradio) | https://beta.lmarena.ai
| 54 | 2025-04-17T20:33:00 |
https://www.reddit.com/gallery/1k1myni
|
HostFit8686
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1myni
| false | null |
t3_1k1myni
|
/r/LocalLLaMA/comments/1k1myni/lmarena_public_beta_officially_releases_with_a/
| false | false | 54 | null |
||
Almost 2 weeks since Llama4 and still no other open release
| 0 |
It has been almost 2 weeks (considering Easter holidays until Monday) since the Llama 4 (M+S) release, and no other lab has released any open models. It looks like Meta might not have had any valid inside info and panic-released; otherwise they could have waited at least until LlamaCon. It's also possible that Qwen3 comes around the same time as LlamaCon, and R2 maybe 1 or 2 weeks after that.
| 2025-04-17T20:38:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1n3l4/almost_2_weeks_since_llama4_and_still_no_other/
|
Strong-Inflation5090
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1n3l4
| false | null |
t3_1k1n3l4
|
/r/LocalLLaMA/comments/1k1n3l4/almost_2_weeks_since_llama4_and_still_no_other/
| false | false |
self
| 0 | null |
Mistral Nemo vs Gemma 3 12B memory usage
| 2 |
I have both installed via ollama on a MacBook. When running Nemo, ollama's memory usage is around 1.5 GB to start with. However, when running Gemma 3 12B Q4, memory usage is more than 9 GB from idle. Is this normal? Does the QAT Gemma 3 12B use fewer resources?
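A rough back-of-envelope check (my own approximation, not how ollama actually accounts for memory): weight memory alone is parameter count times bits per weight, and KV cache plus runtime overhead come on top of that.

```python
# Back-of-envelope weight-memory estimate for a quantized LLM.
# This is a rough approximation, not ollama's actual accounting.

def est_weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight memory only; KV cache and runtime overhead are extra."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

# Gemma 3 12B at ~4.5 bits/weight (Q4 quants average a bit over 4 bits):
print(round(est_weights_gb(12, 4.5), 1))  # → 6.3
```

So ~6.3 GB of weights plus cache and overhead makes 9+ GB plausible for a fully resident 12B Q4 model; the much lower Nemo figure likely reflects how much of the mmap'd model ollama had actually paged in at idle, not a difference in the quant format.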
| 2025-04-17T20:41:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1n62b/mistral_nemo_vs_gemma_3_12b_memory_usage/
|
No-Report-1805
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1n62b
| false | null |
t3_1k1n62b
|
/r/LocalLLaMA/comments/1k1n62b/mistral_nemo_vs_gemma_3_12b_memory_usage/
| false | false |
self
| 2 | null |
Gemini 2.5 Flash is here!!!
| 84 | 2025-04-17T20:46:30 |
https://deepmind.google/technologies/gemini/flash/
|
AggressiveDick2233
|
deepmind.google
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1n9u7
| false | null |
t3_1k1n9u7
|
/r/LocalLLaMA/comments/1k1n9u7/gemini_25_flash_is_here/
| false | false | 84 |
{'enabled': False, 'images': [{'id': 'Jzpx9kE0BnP22uW77VGhFpqpblAyIs692y2CoFCZESA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=108&crop=smart&auto=webp&s=39ccab06aa504c6068cccc369da6ac0c23b68318', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=216&crop=smart&auto=webp&s=b499c4676c991b534f887bf49b3c5d5a1325b912', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=320&crop=smart&auto=webp&s=8f20803a2e122e7eaf50a6de8c50f70df7f71fe3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=640&crop=smart&auto=webp&s=ec634c613e8198b0042641f2379340ebeaa69fed', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=960&crop=smart&auto=webp&s=73296e78949735f2a11e26e13acc7353ad468a1b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?width=1080&crop=smart&auto=webp&s=12c3018f99a6bca5fe06023fc8ac48a3f4698201', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rPD-NS_6RUp46fIC2O-IJ7e34NrinaxGoKGL7y3tTAM.jpg?auto=webp&s=7a5c646bb00d961374493494fb556454e043f94e', 'width': 1200}, 'variants': {}}]}
|
||
Inspired by the spinning heptagon test I created the forest fire simulation test (prompt in comments)
| 201 | 2025-04-17T21:00:19 |
https://v.redd.it/eni76fldkgve1
|
jd_3d
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1nle9
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/eni76fldkgve1/DASHPlaylist.mpd?a=1747515641%2COTQ3NjM3YTg2YTQ0MjVlYTVjODI4MGNiNzI2OWIxMWFkMGViMjQwNjYyMTk1YjM2M2MyOTQxNmI0MmUzN2IzNA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/eni76fldkgve1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/eni76fldkgve1/HLSPlaylist.m3u8?a=1747515641%2CODA0OWM4MzdhOWE0YWY2OTA3ZmRhMzA0YWRjODQ4MmRiMjZmYWVmYzQ5MDE0ODJlOTVlMDc1ZDg2NzdjNTVkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eni76fldkgve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1k1nle9
|
/r/LocalLLaMA/comments/1k1nle9/inspired_by_the_spinning_heptagon_test_i_created/
| false | false | 201 |
{'enabled': False, 'images': [{'id': 'Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=108&crop=smart&format=pjpg&auto=webp&s=6fe71db791a2e3c8e9b99cdebd4fb721c7edb3f6', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=216&crop=smart&format=pjpg&auto=webp&s=d541d6224083eb06847d7f4c678c054f79723630', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=320&crop=smart&format=pjpg&auto=webp&s=f36b8d6469a71937f0a6f0f173d5b3da404f91d9', 'width': 320}, {'height': 240, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=640&crop=smart&format=pjpg&auto=webp&s=1beb20f2b87e3da61cbe74dc70e4361c817aee57', 'width': 640}, {'height': 360, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=960&crop=smart&format=pjpg&auto=webp&s=be2cc1e5c35596f0677cae5462a0795f47ee6db7', 'width': 960}, {'height': 405, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3001c6ef2c0d1f8a336f316b9d102a21bcdc8c6a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z2JsMWRwbGRrZ3ZlMYv6snaPu9pbXPiAyfaNKBLULaPCIm0pnygnovycxPje.png?format=pjpg&auto=webp&s=edfedc91bc0ca15216592b824b6a99be924cafcc', 'width': 1920}, 'variants': {}}]}
|
||
Gemini 2.5 Flash is here!
| 1 |
[removed]
| 2025-04-17T21:07:32 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1nrl3
| false | null |
t3_1k1nrl3
|
/r/LocalLLaMA/comments/1k1nrl3/gemini_25_flash_is_here/
| false | false |
default
| 1 | null |
||
Gemini 2.5 is here!
| 1 | 2025-04-17T21:08:18 |
Fluffy_Sheepherder76
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ns9f
| false | null |
t3_1k1ns9f
|
/r/LocalLLaMA/comments/1k1ns9f/gemini_25_is_here/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'xx-hkjWX9iaBa752osv9kpWJAWXK-h2ft0AvRiSKJeA', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=108&crop=smart&auto=webp&s=b304f6420789014c9e47a0d0af77d94766d7ecb5', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=216&crop=smart&auto=webp&s=d08bb1f7ef65d130b8c1deab9ea89d7df916bf32', 'width': 216}, {'height': 470, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=320&crop=smart&auto=webp&s=3f5af4b45a6823db592eefac6fdfae3a6ab24e28', 'width': 320}, {'height': 940, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=640&crop=smart&auto=webp&s=be9e06c748951852d52cbad305b865deaf41b61c', 'width': 640}, {'height': 1410, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=960&crop=smart&auto=webp&s=2f793be347d84e08984f9a22559376f4c09e11a3', 'width': 960}, {'height': 1586, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?width=1080&crop=smart&auto=webp&s=1831753ddf6320377555ca6eef745be71c0052f7', 'width': 1080}], 'source': {'height': 1999, 'url': 'https://preview.redd.it/qwg1ne87ngve1.png?auto=webp&s=7665decfb830e2b5e04bf2478b7b1ee9e36c9296', 'width': 1361}, 'variants': {}}]}
|
|||
Installing QAT version of Gemma 12B on ollama
| 1 |
I've downloaded the GGUF file and went through the tutorial to install it, but I cannot run it because ollama doesn't find a manifest. How can I fix it?
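If the issue is a bare GGUF download, the manifest ollama complains about isn't shipped with the file - it's generated by `ollama create` from a Modelfile. A minimal sketch (the GGUF filename and model name below are placeholders for whatever you downloaded):

```shell
# Point a Modelfile at the downloaded GGUF; the filename is a placeholder.
cat > Modelfile <<'EOF'
FROM ./gemma-3-12b-it-qat-q4_0.gguf
EOF

# Register the model (this step builds the missing manifest), then run it:
#   ollama create gemma3-qat -f Modelfile
#   ollama run gemma3-qat
```

After `ollama create`, the model shows up in `ollama list` like any pulled model.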
| 2025-04-17T21:09:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1ntn9/installing_qat_version_of_gemma_12b_on_ollama/
|
No-Report-1805
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ntn9
| false | null |
t3_1k1ntn9
|
/r/LocalLLaMA/comments/1k1ntn9/installing_qat_version_of_gemma_12b_on_ollama/
| false | false |
self
| 1 | null |
Not every single input needs reasoning...right?
| 0 | 2025-04-17T21:21:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1o3gi/not_every_single_input_needs_reasoningright/
|
The-Silvervein
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1o3gi
| false | null |
t3_1k1o3gi
|
/r/LocalLLaMA/comments/1k1o3gi/not_every_single_input_needs_reasoningright/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'VGR5H4jqXQd8E29JKZ6K9R94EXHzhWQKpL_yRuvY1bE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=108&crop=smart&auto=webp&s=e7661939780923c6dccce91d77c6d8b9d4f6194f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=216&crop=smart&auto=webp&s=2657e149f3d4af842a2ed2131069a013fd9165f8', 'width': 216}], 'source': {'height': 250, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?auto=webp&s=afcb8698c026f3c616a11c4009e33a06154310cb', 'width': 250}, 'variants': {}}]}
|
||
SecondMe/Mindverse - stay away
| 56 |
Just a heads up - Mindverse/SecondMe are lowkey scamming to funnel people to their product.
How do I know? I received the email above, seemingly an invitation to proceed with my application to their AI startup. But here's the thing:
- I only use this email address on GitHub - so I know it was sourced from there
- I never applied to any jobs from Mindverse, I'm happily employed
This is the same entity that was promoting SecondMe here and on other LLM subs a week or so ago - their posts were questionable, but nothing out of the ordinary for LLM/AI projects. However, the email above is at best misleading and at worst an outright scam - so be aware and stay away.
| 2025-04-17T21:26:07 |
Everlier
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1o71i
| false | null |
t3_1k1o71i
|
/r/LocalLLaMA/comments/1k1o71i/secondmemindverse_stay_away/
| false | false | 56 |
{'enabled': True, 'images': [{'id': 'EKcZVtFrmS_vtsVp_gyjW9X_DBUCKKPEwcxWxlTKd7Y', 'resolutions': [{'height': 150, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=108&crop=smart&auto=webp&s=50d0d4c0edde9fa3e377b3d78dad35336412b22c', 'width': 108}, {'height': 300, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=216&crop=smart&auto=webp&s=313f5d42ab293b51e9b336fa11278352be750859', 'width': 216}, {'height': 445, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=320&crop=smart&auto=webp&s=b064e53bf8d732504b06253911b2644802e27531', 'width': 320}, {'height': 891, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=640&crop=smart&auto=webp&s=5a7d0d6e3cf5517e18cc0d17a997e82ee9ab5ee0', 'width': 640}, {'height': 1336, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=960&crop=smart&auto=webp&s=82a69ee2acfbdb9eb14e6d311809141425b2e297', 'width': 960}, {'height': 1504, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?width=1080&crop=smart&auto=webp&s=6b572c59c11fd70c604d2422a3dce38a07334122', 'width': 1080}], 'source': {'height': 1504, 'url': 'https://preview.redd.it/0fu8ie5eqgve1.png?auto=webp&s=c4df5078cf9b0ed173318815d099f8a5f0241283', 'width': 1080}, 'variants': {}}]}
|
||
What's the current, most affordable cloud GPU option for 16-32ish vram that is on demand for 1-10 minute usages at a time?
| 2 |
Hey all,
So what's the best on-demand cloud GPU solution out there at this time on lower-end/consumer gear?
I need something where I can issue an API call to spin it up, push some Linux commands, and then access something like a ComfyUI API endpoint, and then issue another API call to destroy it, with the spinup mounting a disk image. So the instance would be alive a few minutes and then off. But it must work right away with no deployment delays.
What's the most affordable and best solution as of this moment? I've heard of RunPod, but there are grave security concerns, as you're effectively running on Joe Schmoe's computer in a garage, so the security and confidentiality of your data are far, far from assured.
What do you suggest?
| 2025-04-17T21:31:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k1obsd/whats_the_current_most_affordable_cloud_gpu/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1obsd
| false | null |
t3_1k1obsd
|
/r/LocalLLaMA/comments/1k1obsd/whats_the_current_most_affordable_cloud_gpu/
| false | false |
self
| 2 | null |
The Second Half
| 0 | 2025-04-17T21:32:41 |
https://ysymyth.github.io/The-Second-Half/
|
Individual-Garlic888
|
ysymyth.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1ocfb
| false | null |
t3_1k1ocfb
|
/r/LocalLLaMA/comments/1k1ocfb/the_second_half/
| false | false |
default
| 0 | null |
|
Llama 4 smells bad
| 0 |
Here is the story of Llama 4 so far, including the LM Arena drama.
| 2025-04-17T21:40:09 |
https://fastml.com/llama-4-smells-bad/
|
Foxtr0t
|
fastml.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k1oif9
| false | null |
t3_1k1oif9
|
/r/LocalLLaMA/comments/1k1oif9/llama_4_smells_bad/
| false | false |
default
| 0 | null |