title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Android AI agent based on object detection and LLMs
| 38 |
My friend has open-sourced deki, an AI agent for Android OS.
It is an Android AI agent powered by an ML model, and it is fully open-sourced.
It understands what's on your screen and can perform tasks based on your voice or text commands.
Some examples:
* "Write my friend "some_name" in WhatsApp that I'll be 15 minutes late"
* "Open Twitter in the browser and write a post about something"
* "Read my latest notifications"
* "Write a linkedin post about something"
Currently, it works only on Android, but support for other OSes is planned.
The ML and backend code was also fully open-sourced.
Video prompt example:
"Open linkedin, tap post and write: hi, it is deki, and now I am open sourced. But don't send, just return"
You can find other AI agent demos and usage examples, like code generation or object detection, on GitHub.
Github: [https://github.com/RasulOs/deki](https://github.com/RasulOs/deki)
License: GPLv3
| 2025-04-25T15:47:26 |
https://v.redd.it/isn6vhfq40xe1
|
saccharineboi
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7o884
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/isn6vhfq40xe1/DASHPlaylist.mpd?a=1748188061%2CMzZmZmQ4MGE3MDA4OGIxZTllYjRjN2QyZTQzOTg2ZDdhODNhNjcwNmRlY2EyNTgwZjI0MjFhMThlNjBjMDRlYg%3D%3D&v=1&f=sd', 'duration': 102, 'fallback_url': 'https://v.redd.it/isn6vhfq40xe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/isn6vhfq40xe1/HLSPlaylist.m3u8?a=1748188061%2CODgzMWZkYjU4ZmJhM2M4NmMyOWM4YzNlMDYwYTcwMmMzNWYyN2NmYzc1YzcyNmU3NGE0YjZkYjYyODQ2ZDZmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/isn6vhfq40xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1k7o884
|
/r/LocalLLaMA/comments/1k7o884/android_ai_agent_based_on_object_detection_and/
| false | false | 38 |
{'enabled': False, 'images': [{'id': 'cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=2ab3fe8986ab436e0dc8e639e244cb74740acfde', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=e6e9e3eab1eedb43ca00d0f5606cbe4187838608', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=57ce65ddd37d9c4835ce119c9c90a6ea0b3078f3', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=0f73a586b914f80f8a8abbcf80e4891cd489cb77', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=7a49638707055656300c3642ad90e970b6a503c8', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=baf1c3955a1101ad5c259183abc13e68d65d240f', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/cDg0M2JqZnE0MHhlMXhgiTVNaxVoQRD1C2MXVA7X3PPwZtUbpbDgJ5BuncDZ.png?format=pjpg&auto=webp&s=837dad10e15c6512f2f75d98e1aa3b2b766bba4b', 'width': 1080}, 'variants': {}}]}
|
|
We compress any BF16 model to ~70% size during inference, while keeping the output LOSSLESS so that you can fit in more ERP context or run larger models.
| 670 |
Glad to share another interesting piece of work from us:
**70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float (DF11)**
* Paper: [https://arxiv.org/abs/2504.11651](https://arxiv.org/abs/2504.11651)
* Code: [https://github.com/LeanModels/DFloat11](https://github.com/LeanModels/DFloat11)
The tl;dr of this work is super simple. We - and several prior works - noticed that while **BF16** is often promoted as a "more range, less precision" alternative to FP16 (especially to avoid value overflow/underflow during training), **its range part (exponent bits) ends up being pretty redundant once the model is trained.**
In other words, although BF16 as a data format can represent a wide range of numbers, most trained models' exponents are plenty sparse. In practice, the exponent bits carry around 2.6 bits of actual information on average - far from the full 8 bits they're assigned.
This opens the door for classic Huffman coding - where shorter bit sequences are assigned to more frequent values - to **compress the model weights** into a new data format we call **DFloat11/DF11**, resulting in a **LOSSLESS compression down to ~11 bits**.
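As a toy illustration of the idea (this is not the paper's implementation, and the exponent histogram below is made up), Huffman coding assigns short codes to the few exponent values that dominate a trained model's weights, so the average code length lands well below the 8 bits BF16 reserves:

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return {symbol: code length in bits} for a Huffman code over `freqs`."""
    # Heap entries: (subtree frequency, tiebreak id, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Made-up, sharply peaked histogram of 8-bit exponent values: trained-weight
# exponents cluster in a narrow band, so frequent values get very short codes.
exponents = Counter({127: 5000, 126: 3000, 125: 1500, 124: 400, 120: 90, 100: 10})
lengths = huffman_code_lengths(exponents)
total = sum(exponents.values())
avg_bits = sum(exponents[s] * lengths[s] for s in exponents) / total
print(f"average exponent bits: {avg_bits:.2f} (vs. 8 in BF16)")  # prints 1.76
```

With 1 sign bit + 7 mantissa bits unchanged, a ~2-3 bit average exponent code is exactly where the ~11-bit DF11 figure comes from.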
# But isn't this just Zip?
Not exactly. It is true that tools like Zip also leverage Huffman coding, but the tricky part here is **making it memory efficient during inference**, as end users are probably not gonna be too thrilled if it just makes model checkpoint downloads a bit faster (in all fairness, smaller checkpoints mean a lot when training at scale, but that's not a problem for everyday users).
What does matter to everyday users is **making the memory footprint smaller during GPU inference, which requires nontrivial efforts.** But we have figured it out, and we've open-sourced the code.
So now you can:
* Run models that previously didn't fit into your GPU memory.
* Or run the same model with **larger batch sizes and/or longer sequences** (very handy for those lengthy ERPs, or so I have heard).
|Model|GPU Type|Method|Successfully Run?|Required Memory|
|:-|:-|:-|:-|:-|
|Llama-3.1-405B-Instruct|8×H100-80G|BF16|❌|811.71 GB|
|||DF11 (Ours)|✅|551.22 GB|
|Llama-3.3-70B-Instruct|1×H200-141G|BF16|❌|141.11 GB|
|||DF11 (Ours)|✅|96.14 GB|
|Qwen2.5-32B-Instruct|1×A6000-48G|BF16|❌|65.53 GB|
|||DF11 (Ours)|✅|45.53 GB|
|DeepSeek-R1-Distill-Llama-8B|1×RTX 5080-16G|BF16|❌|16.06 GB|
|||DF11 (Ours)|✅|11.23 GB|
# What's the catch?
Like all compression work, there's a cost to decompressing. Here are some efficiency reports.
* On an A100 with batch size 128, DF11 is **basically just as fast** as BF16 (1.02x difference, assuming both versions fit in the GPUs with the same batch size). See Figure 9.
* It is up to **38.8x faster** than CPU offloading, so if you have a model that can't be run on your GPU in BF16 but can in DF11, there are plenty of sweet performance gains over CPU offloading - one of the other popular ways to run larger-than-capacity models. See Figure 3.
* With the model weights being compressed, you can use the saved real estate for larger batch sizes or longer context lengths. This is especially significant if the model is already tightly fitted in GPU memory. See Figure 4.
* What about batch size 1 latency when both versions (DF11 & BF16) can fit in a single GPU? This is where DF11 is the weakest - we observe ~40% slower (2k/100 tokens for in/out). So there is not much motivation for using DF11 if you are not trying to run a larger model, a bigger batch size, or a longer sequence length.
# Why not just (lossy) quantize to 8-bit?
**The short answer is you should totally do that if you are satisfied with the output of lossy 8-bit quantization with respect to your task. But how do you really know it is always good?**
Much of the benchmarking literature suggests that compressing a model (weight-only or otherwise) to 8-bit-ish is typically a safe operation, even though it's technically lossy. What we found, however, is that while this claim is often made in quantization papers, their benchmarks tend to focus on general tasks like MMLU and Commonsense Reasoning, which do not present a comprehensive picture of model capability.
More challenging benchmarks - such as those involving complex reasoning - and real-world user preferences often reveal noticeable differences. One good example: Chatbot Arena indicates that the 8-bit and 16-bit Llama 3.1 405B tend to behave quite differently on some categories of tasks (e.g., Math and Coding).
The broader question - *"Which specific task, on which model, using which quantization technique, under what conditions, will lead to a noticeable drop compared to FP16/BF16?"* - is likely to remain open-ended, simply due to the sheer number of potential combinations and definitions of "noticeable." **Still, it is fair to say that lossy quantization introduces complexities that some end-users would prefer to avoid, since it creates uncontrolled variables that must be empirically stress-tested for each deployment scenario.** DF11 offers an alternative that avoids this concern 100%.
# What about finetuning?
Our method could potentially pair well with PEFT methods like LoRA, where the base weights are frozen. But since we compress block-wise, we can't just apply it naively without breaking gradients. We're actively exploring this direction. If it works, it would potentially become a QLoRA alternative where you can losslessly LoRA-finetune a model with a reduced memory footprint.
(As always, happy to answer questions or chat until my advisor notices I'm doomscrolling socials during work hours :> )
| 2025-04-25T15:47:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7o89n/we_compress_any_bf16_model_to_70_size_during/
|
choHZ
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7o89n
| false | null |
t3_1k7o89n
|
/r/LocalLLaMA/comments/1k7o89n/we_compress_any_bf16_model_to_70_size_during/
| false | false |
self
| 670 | null |
Interactive Visualization of Grammar-Based Sampling
| 6 |
http://michaelgiba.com/grammar-based/index.html
To help me understand how structured outputs are generated with local LLMs, I created this interactive page. Check it out!
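The core mechanic the page visualizes can be sketched in a few lines: at each decoding step, mask out every token the grammar disallows, then sample from what remains. Everything below (the five-token vocabulary, the toy grammar, and the random stand-in for model logits) is made up for illustration and is not the page's actual implementation:

```python
import math
import random

# Toy "grammar": strings of the form  digit ("+" digit)*  terminated by "=".
VOCAB = ["0", "1", "2", "+", "="]

def allowed(prefix):
    """Return the set of tokens the toy grammar permits after `prefix`."""
    if not prefix or prefix[-1] == "+":
        return {"0", "1", "2"}   # start of string / after "+": need a digit
    if prefix[-1] in "012":
        return {"+", "="}        # after a digit: continue or terminate
    return set()                 # "=" ends the string

def sample_constrained(logits_fn, max_len=10):
    out = []
    while len(out) < max_len and not "".join(out).endswith("="):
        ok = allowed("".join(out))
        logits = logits_fn(out)
        # Mask disallowed tokens to -inf, then take a greedy (argmax) step.
        masked = [l if t in ok else -math.inf for t, l in zip(VOCAB, logits)]
        out.append(VOCAB[max(range(len(VOCAB)), key=lambda i: masked[i])])
    return "".join(out)

random.seed(0)
fake_logits = lambda prefix: [random.random() for _ in VOCAB]  # stand-in for an LLM
result = sample_constrained(fake_logits)
print(result)
```

Real implementations (e.g. GBNF in llama.cpp) do the same thing with a compiled grammar automaton instead of a hand-written `allowed` function, but the mask-then-sample loop is the whole trick.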
| 2025-04-25T16:07:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7opd3/interactive_visualization_of_grammarbased_sampling/
|
Appropriate-Yak5959
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7opd3
| false | null |
t3_1k7opd3
|
/r/LocalLLaMA/comments/1k7opd3/interactive_visualization_of_grammarbased_sampling/
| false | false |
self
| 6 | null |
Multiple eGPUs โ what downsides are there?
| 10 |
I have an ITX computer, and it has one 4090 FE. I want more GPU power (don't we all?), but I'm reluctant to build an entirely new computer to fit in more GPUs.
What downsides are there to buying multiple eGPU enclosures for this?
| 2025-04-25T16:08:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7oqc2/multiple_egpus_what_downsides_are_there/
|
Amazydayzee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7oqc2
| false | null |
t3_1k7oqc2
|
/r/LocalLLaMA/comments/1k7oqc2/multiple_egpus_what_downsides_are_there/
| false | false |
self
| 10 | null |
I have made an open-source Claude desktop alternative
| 0 |
[removed]
| 2025-04-25T16:11:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7osr5/i_have_made_a_open_source_claude_desktop/
|
project_ai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7osr5
| false | null |
t3_1k7osr5
|
/r/LocalLLaMA/comments/1k7osr5/i_have_made_a_open_source_claude_desktop/
| false | false | 0 |
{'enabled': False, 'images': [{'id': '4Qrtq3NqExau8SSNN_EajxxxlpeRgnlWlNFcEAP661Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=108&crop=smart&auto=webp&s=23942d548d49761451bc77d1c17530c299bdf974', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=216&crop=smart&auto=webp&s=71bd01a1b7108e20c8a19c7915cc46122bc51675', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=320&crop=smart&auto=webp&s=534fba2e4ac08df3219ab077e4b8449d7fb85349', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=640&crop=smart&auto=webp&s=9e982f0fc299a971f8aa886ddf54e764b0da90af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=960&crop=smart&auto=webp&s=85aa1308c43f6bd761c8c2f2f2bba3cb8730397e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=1080&crop=smart&auto=webp&s=dc472d7c3472673e400f40d8998a45f5b02411ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?auto=webp&s=7847489de88f51af292241154f9cf16d001d73fd', 'width': 1200}, 'variants': {}}]}
|
|
MarOS a simple UI wrapper for ollama to easily chat with models on a local network
| 5 |
This is [MarOs](https://chatgames.itch.io/maros-ai-chat), the current UI I'm using for my chat models. It has straightforward features: save/load chats, custom system prompts and profiles, and easy model selection from your library of ollama models. Its UI is meant to be phone-friendly so you can use any device on your local network to chat.
It works with ollama so a very small number of concurrent users should work with responses being queued, depending on your hardware of course.
It also automatically handles images, switching between an image and text model when you provide an image.
The UI space is crowded, so here's another one. [MarOs AI Chat by ChatGames](https://chatgames.itch.io/maros-ai-chat)
| 2025-04-25T16:35:45 |
https://www.reddit.com/gallery/1k7pei4
|
Radiant_Dog1937
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7pei4
| false | null |
t3_1k7pei4
|
/r/LocalLLaMA/comments/1k7pei4/maros_a_simple_ui_wrapper_for_ollama_to_easily/
| false | false | 5 | null |
|
Which graphics card should I buy? Which llama/qwen etc. model should I choose? Please help me, I'm a bit lost...
| 1 |
[removed]
| 2025-04-25T16:54:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7puad/which_graphics_card_should_i_buy_which_llamaqwent/
|
ed0c
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7puad
| false | null |
t3_1k7puad
|
/r/LocalLLaMA/comments/1k7puad/which_graphics_card_should_i_buy_which_llamaqwent/
| false | false |
self
| 1 | null |
Do people trying to squeeze every last GB out of their GPU use their IGPU to display to their monitor?
| 124 |
By default, just for basic display, Linux can eat 500MB and Windows can eat 1.1GB. I imagine for someone with an 8-12GB card trying to barely squeeze the biggest model they can onto the GPU by tweaking context size, quant, etc., this is a highly nontrivial cost.
Unless for some reason you needed the dGPU for something else, why wouldn't they just display using their iGPU instead? Obviously there's still a fixed driver overhead, but you'd save nearly a gigabyte, and in terms of simply using an IDE and a browser it's hard to think of any drawbacks.
Am I stupid, and this wouldn't work the way I think it would, or something?
| 2025-04-25T17:34:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7quqt/do_people_trying_to_squeeze_every_last_gb_out_of/
|
Golfclubwar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7quqt
| false | null |
t3_1k7quqt
|
/r/LocalLLaMA/comments/1k7quqt/do_people_trying_to_squeeze_every_last_gb_out_of/
| false | false |
self
| 124 | null |
What graphics card should I buy? Which llama/qwen (etc.) model should I choose? Please help me, I'm a bit lost...
| 1 |
[removed]
| 2025-04-25T17:35:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7quxf/what_graphics_card_should_i_buy_which_llamaqwent/
|
ed0c
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7quxf
| false | null |
t3_1k7quxf
|
/r/LocalLLaMA/comments/1k7quxf/what_graphics_card_should_i_buy_which_llamaqwent/
| false | false |
self
| 1 | null |
Are these real prices? Seems low. Never used eBay; I'm from Europe (sorry).
| 31 | 2025-04-25T17:40:53 |
Sufficient_Bit_8636
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7qzrw
| false | null |
t3_1k7qzrw
|
/r/LocalLLaMA/comments/1k7qzrw/are_these_real_prices_seems_low_never_used_ebay/
| false | false | 31 |
{'enabled': True, 'images': [{'id': '3dSPzlRMvu_6nykHbGlbT-4bdwZXOOkiD9Mie_ij6mA', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=108&crop=smart&auto=webp&s=66183784b41609091666ad4462571e6453ac29e6', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=216&crop=smart&auto=webp&s=d20ace662e08f9366608d4a6878b51046bc520a1', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=320&crop=smart&auto=webp&s=3e3333d0013faa2b0d673ae8b47655116c6c925a', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=640&crop=smart&auto=webp&s=047f748da42529f4b44f1377843f54d249423d85', 'width': 640}, {'height': 855, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=960&crop=smart&auto=webp&s=6d3792da1b9ae87aa9cd91d9238d42257094d474', 'width': 960}, {'height': 962, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?width=1080&crop=smart&auto=webp&s=5397c8530432bfebca1e2413fbeeffa10db1c970', 'width': 1080}], 'source': {'height': 1158, 'url': 'https://preview.redd.it/yiag2hdhp0xe1.png?auto=webp&s=aa6467fa95d0f2b729e5297e4599e1ebb74addeb', 'width': 1300}, 'variants': {}}]}
|
|||
SOTA Spatial Reasoning in 2025
| 45 |
The ability to accurately estimate distances from RGB image input is just at the frontier of current AI model capabilities.
Nonetheless, distance estimation is critical for perception and planning in embodied AI applications like robotics, which must navigate around our 3D world.
By making an open-weight model small and fast enough to run on-device, using open-source code and data, we aim to democratize embodied AI.
I've updated the comparison among closed APIs with SOTA performance in **quantitative spatial reasoning** tasks like distance/size estimation from RGB inputs and our 3B open-weight model: SpaceThinker
The performance of the 3B SpaceThinker lies between gpt-4o and gemini-2.5-pro in estimating distances using the QSpatial++ split of Q-Spatial-Bench.
**Evaluation Results:** [https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B#qspatial-comparison-table-42525](https://huggingface.co/remyxai/SpaceThinker-Qwen2.5VL-3B#qspatial-comparison-table-42525)
**Interesting finding:** By switching the model name in [this colab](https://colab.research.google.com/drive/1buEe2QC4_pnrJwQ9XyRAH7RfaIa6pbex?usp=sharing) to the non-reasoning variant [SpaceQwen](https://huggingface.co/remyxai/SpaceQwen2.5-VL-3B-Instruct), you'll find that using the [step-by-step reasoning prompt](https://github.com/andrewliao11/Q-Spatial-Bench-code/blob/main/prompt_templates/spatial_prompt_steps.txt) actually hurts performance, challenging the convention that reasoning models [don't benefit](https://huggingface.co/blog/NormalUhr/deepseek-r1-explained#74-prompt-engineering-sensitivities) from complex instructions the way non-reasoning models do.
Modifying the above colab, you can also compare SpaceThinker to its base model to assess the performance impact of SFT by LoRA using the SpaceThinker dataset: [https://huggingface.co/datasets/remyxai/SpaceThinker](https://huggingface.co/datasets/remyxai/SpaceThinker)
| 2025-04-25T17:51:12 |
https://www.reddit.com/gallery/1k7r8qu
|
remyxai
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7r8qu
| false | null |
t3_1k7r8qu
|
/r/LocalLLaMA/comments/1k7r8qu/sota_spatial_reasoning_in_2025/
| false | false | 45 | null |
|
Tiny Agents: a MCP-powered agent in 50 lines of code
| 150 |
Hi!
I'm a co-founder of HuggingFace and a big r/LocalLLaMA fan.
Today I'm dropping Tiny Agents, a 50-lines-of-code Agent in JavaScript.
I spent the last few weeks diving into MCP (Model Context Protocol) to understand what the hype was about.
It is fairly simple, but still quite useful as a standard API to expose sets of Tools that can be hooked to LLMs.
But while implementing it I came to my second realization:
Once you have an MCP Client, an Agent is literally just a while loop on top of it.
[https://huggingface.co/blog/tiny-agents](https://huggingface.co/blog/tiny-agents)
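That "Agent = while loop over an MCP client" claim can be sketched in a handful of lines. Everything below is a hypothetical stand-in (the stub client, the `add` tool, and the scripted LLM are made up; this is not the huggingface.js API), but the loop structure is the whole point:

```python
class FakeMCPClient:
    """Stub MCP client exposing one tool; a real client would speak MCP over a transport."""
    def list_tools(self):
        return {"add": lambda a, b: a + b}

    def call_tool(self, name, **kwargs):
        return self.list_tools()[name](**kwargs)

def fake_llm(messages, tools):
    """Scripted stand-in for an LLM: request the tool once, then answer with its result."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": f"The answer is {tool_results[-1]['content']}."}

def run_agent(client, llm, user_msg, max_turns=5):
    # The entire "agent": a loop that feeds tool results back to the LLM
    # until it produces a final answer (or we give up).
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = llm(messages, client.list_tools())
        if "tool_call" not in reply:
            return reply["content"]
        call = reply["tool_call"]
        result = client.call_tool(call["name"], **call["args"])
        messages.append({"role": "tool", "content": result})
    return "(gave up)"

print(run_agent(FakeMCPClient(), fake_llm, "What is 2 + 3?"))  # prints: The answer is 5.
```

Swap the stubs for a real MCP client and a real chat-completion call and you have essentially the Tiny Agents design.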
| 2025-04-25T18:00:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7rgyv/tiny_agents_a_mcppowered_agent_in_50_lines_of_code/
|
julien_c
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7rgyv
| false | null |
t3_1k7rgyv
|
/r/LocalLLaMA/comments/1k7rgyv/tiny_agents_a_mcppowered_agent_in_50_lines_of_code/
| false | false |
self
| 150 |
{'enabled': False, 'images': [{'id': '7RJdvO2Neb8ID7Ii8L3w8jFYQRdWoyY4RUBauhrp-rs', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=108&crop=smart&auto=webp&s=c54348060b950a3e11b612171e97a921076d711d', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=216&crop=smart&auto=webp&s=482290bb319b37f81dbbde2f7c08c240058700cd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=320&crop=smart&auto=webp&s=fca43ba2e9ba089f2b4f067a4d947fb08f699803', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=640&crop=smart&auto=webp&s=09d0dce63a6cd60077b6242bb1e5e6a8b6411b5f', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=960&crop=smart&auto=webp&s=38466b3ab6762ca3101b8a4b6e2e004e143f67b7', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?width=1080&crop=smart&auto=webp&s=06122b23c2ed1442a2fea5040026e4045379819f', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/fCTs8gI7KvvOKk5o8AQ0g6EQWi7h5KkDI0MBs8uNyiw.jpg?auto=webp&s=7ddb60dbc75dab05316b1ead7530c68050b1df69', 'width': 1200}, 'variants': {}}]}
|
How far can we take quantization aware training (QAT)?
| 52 |
There was a recent post here on a very clever new 11 bit float "format" [DF11](https://www.reddit.com/r/LocalLLaMA/comments/1k7o89n/we_compress_any_bf16_model_to_70_size_during/) that has interesting inferencing time vs. memory tradeoffs compared to BF16. It got me thinking further along a fun topic - what does (smallish) model training look like in \~2 years?
We already have frontier (for their size) quantization-aware trained models from [Google](https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/), and I suspect most labs will release something similar. But I think we're going to go further:
* It's obvious that there is value from FP16/BF16/INT8 parameters in some blocks and not in others, and a lot of value in clustering parameters that need dynamic range together
* A smaller model (all else being equal) is better for inferencing because memory bandwidth (not compute) is the speed constraint
* Model parameters almost seem like a legacy concept at this point. We would all prefer to spend 17GB of VRAM on [gemma-3-27b-it-qat-q4_0-gguf](https://huggingface.co/google/gemma-3-27b-it-qat-q4_0-gguf) vs. ~24GB of VRAM on [gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) at BF16
So: can we train models with their memory footprint and estimated token generation rate (targeting a reference architecture) as part of the objective function?
My *naive* proposal:
1. Add memory footprint, and a function that approximates token generation rate, to the training loss function.
2. Add a differentiable "quantization" parameter for every ~4K of parameters (activations, weights, etc.).
3. During each batch of the forward pass, use the quantization parameter to drop the block of parameters from BF16 to DF11 to INT8 to INT4 probabilistically based on its value, i.e.:
   * A high value would mostly do the forward pass in BF16, a little in DF11, and very little in INT8/4.
   * A middle value would be mostly INT8 with a little DF11 and INT4.
   * A low value would be mostly INT4.
4. Calculate the average memory footprint and tokens/second rate (again, an approximate reference model is fine) and incorporate them into the loss, then run the backward pass.
5. At the end of training, freeze blocks of parameters at the quantization level that reflects the final value of the quantization parameter (i.e., a mid value would freeze at INT8).
This should make the quantization parameter nicely differentiable and trainable (?). In theory the model would have learnt to cluster its use of high-dynamic-range parameters to minimize the use of BF16 and maximize the use of INT8/4. You can imagine training multiple sizes of the same model almost in parallel by varying the cost function.
I'll poke at the literature, but I'd appreciate pointers to anything similar that folks have done already (and of course your thoughts on why this naive approach is ... naive).
A really simple first step might be running an optimization exercise like this on an existing model ... but u/danielhanchen might just be all over [that already](https://www.reddit.com/r/LocalLLaMA/comments/1k71mab/unsloth_dynamic_v20_ggufs_llama_4_bug_fixes_kl/).
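To make the "probabilistic precision choice + memory term in the loss" idea concrete, here is a tiny numeric sketch. It is plain Python with no autograd, and the gate-to-probability mapping is one arbitrary hypothetical choice among many - the point is only that the expected footprint varies smoothly with a per-block gate, so it can sit inside a differentiable loss:

```python
import math
import random

BITS = {"bf16": 16, "df11": 11, "int8": 8, "int4": 4}

def precision_probs(gate):
    """Map a scalar gate in (0, 1) to a distribution over precisions.
    High gate -> mostly BF16; low gate -> mostly INT4. The score weights
    here are made up; any smooth, monotone mapping would do."""
    scores = {"bf16": 4 * gate, "df11": 2 * gate,
              "int8": 2 * (1 - gate), "int4": 4 * (1 - gate)}
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.exp(s) / z for k, s in scores.items()}

def expected_bits(gate):
    """Smooth surrogate for the per-block memory term in the loss."""
    p = precision_probs(gate)
    return sum(p[k] * BITS[k] for k in BITS)

def sample_precision(gate, rng=random):
    """Forward pass: sample one precision for this parameter block."""
    r, acc = rng.random(), 0.0
    for k, prob in precision_probs(gate).items():
        acc += prob
        if r <= acc:
            return k
    return "int4"

# The expected footprint decreases smoothly as the gate drops, so a penalty
# like  loss += lam * mean(expected_bits(g) for g in gates)  pushes blocks
# toward cheaper formats wherever the task loss tolerates it.
print(f"{expected_bits(0.9):.2f}, {expected_bits(0.5):.2f}, {expected_bits(0.1):.2f}")
```

Freezing each block at the end would then just mean picking the argmax precision of its final gate value.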
| 2025-04-25T18:07:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7rnu9/how_far_can_we_take_quantization_aware_training/
|
gofiend
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7rnu9
| false | null |
t3_1k7rnu9
|
/r/LocalLLaMA/comments/1k7rnu9/how_far_can_we_take_quantization_aware_training/
| false | false |
self
| 52 |
{'enabled': False, 'images': [{'id': 'UPh_4CgafUqTh9ZB3bC0-0Msh-CF5QgkiP-Ex1y8M_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=108&crop=smart&auto=webp&s=bf80d9b78a582598ddaf46ebb198ba14da0dfee1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=216&crop=smart&auto=webp&s=1ad09b95d0279438bd66d1d418f3f9e0b207e8d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=320&crop=smart&auto=webp&s=75848e136c8a8aa2ea0df8ba00019ebccfebb3fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=640&crop=smart&auto=webp&s=ed6a861b423ef5ef481e863b5c6947b3cef14c0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=960&crop=smart&auto=webp&s=7486c5a54c0c3728faa8358805c8f52cd7e039fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=1080&crop=smart&auto=webp&s=c783ae4f78dd78266208a25dc198c7d56dcb9de7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?auto=webp&s=b9fa62cfd071dc2a391de1c697f0bfbb56d04afa', 'width': 1200}, 'variants': {}}]}
|
Prompting the Datasets for GRPO
| 4 |
Hey there! I was working with Unsloth GRPO for a while and found a lot of good insights. One thing is prompting the dataset for GRPO training. This link and the docs might help you learn about prompting.
| 2025-04-25T18:29:19 |
https://www.linkedin.com/pulse/prompting-dataset-grpo-saiteja-goud-eahoe?utm_source=share&utm_medium=member_ios&utm_campaign=share_via
|
Dapper-Night-1783
|
linkedin.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7s6v2
| false | null |
t3_1k7s6v2
|
/r/LocalLLaMA/comments/1k7s6v2/prompting_the_datasets_for_grpo/
| false | false | 4 |
{'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]}
|
|
Anyone tested Decompute BlackBird for local image generation? is it real?
| 1 |
[removed]
| 2025-04-25T18:34:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7sb75/anyone_tested_decompute_blackbird_for_local_image/
|
committedAF
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7sb75
| false | null |
t3_1k7sb75
|
/r/LocalLLaMA/comments/1k7sb75/anyone_tested_decompute_blackbird_for_local_image/
| false | false |
self
| 1 | null |
What's Meta hinting at with this cryptic post? We need Bindy to decode this for us:
| 51 | 2025-04-25T18:38:23 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7seqn
| false | null |
t3_1k7seqn
|
/r/LocalLLaMA/comments/1k7seqn/whats_meta_hinting_at_with_this_cryptic_post_we/
| false | false | 51 |
{'enabled': True, 'images': [{'id': 'TMVjWq-HBgyo2OEYNCiOdrzGZrIrxvhXrkufLDCmNUc', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/w1t0tdarz0xe1.jpeg?width=108&crop=smart&auto=webp&s=a3c207c5ff4589b0d9ff6a247741d92410cb77d8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/w1t0tdarz0xe1.jpeg?width=216&crop=smart&auto=webp&s=8fda5e76142d71e405741cae77880e887f9a92cd', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/w1t0tdarz0xe1.jpeg?width=320&crop=smart&auto=webp&s=3fd9430af59cc14cd5dd6c0abc7a48a88c61893f', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/w1t0tdarz0xe1.jpeg?width=640&crop=smart&auto=webp&s=c52aab51f1bacd76acfaa9ceb42eb78619be9fdf', 'width': 640}], 'source': {'height': 910, 'url': 'https://preview.redd.it/w1t0tdarz0xe1.jpeg?auto=webp&s=4cc3c006bdadb49f14533b40b55aac1e39f5a362', 'width': 910}, 'variants': {}}]}
|
|||
I built a debugging MCP server that saves me ~2 programming hours a day
| 103 |
Hi!
Deebo is an agentic debugging system wrapped in an MCP server, so it acts as a copilot for your coding agent.
Think of your main coding agent as a single-threaded process. Deebo introduces multithreading to AI-assisted coding. You can have your agent delegate tricky bugs and context-heavy tasks, validate theories, run simulations, etc.
The cool thing is the agents inside the deebo mcp server USE mcp themselves! They use git and file system MCP tools in order to actually read and edit code. They also do their work in separate git branches which provides natural process isolation.
Deebo scales to production codebases, too. I took on a tinygrad bug bounty with me + Cline + Deebo with no previous experience with the tinygrad codebase. Deebo spawned 17 scenario agents over multiple OODA loops, and synthesized 2 valid fixes! You can read the [session logs here](https://github.com/snagasuri/deebo-prototype/tree/master/memory-bank/9bd38e9840d3/sessions/session-1744006973678) and see [the final fix here](https://github.com/snagasuri/deebo-prototype/blob/master/memory-bank/9bd38e9840d3/progress.md).
If youโve ever gotten frustrated with your coding agent for looping endlessly on a seemingly simple task, you can install Deebo with a one line npx deebo-setup@latest. The code is fully open source! Take a look at the code! [https://github.com/snagasuri/deebo-prototype](https://github.com/snagasuri/deebo-prototype)
I came up with all the system design, implementation, etc. myself, so if anyone wants to chat about how Deebo works or has any questions, I'd love to talk! I'd highly appreciate your feedback. Thanks!
| 2025-04-25T18:55:29 |
https://github.com/snagasuri/deebo-prototype
|
klawisnotwashed
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7stfg
| false | null |
t3_1k7stfg
|
/r/LocalLLaMA/comments/1k7stfg/i_built_a_debugging_mcp_server_that_saves_me_2/
| false | false | 103 |
{'enabled': False, 'images': [{'id': 'hn9xWqQTF0x9anFqblEFc-v0HO3hF-xjJRtspn3UO5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=108&crop=smart&auto=webp&s=0bc04bf4bde3e48e1182d40d6d0dab31e01c6440', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=216&crop=smart&auto=webp&s=93a606113881ebed323e06b0262ce3627be3e501', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=320&crop=smart&auto=webp&s=46ab702965ce7c1b7dc2dc57ac9b028b6c68c169', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=640&crop=smart&auto=webp&s=b7b677b4629df76504b2adac01e7283245681d54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=960&crop=smart&auto=webp&s=b0097752b836fa169c815565bdd04d6fc1755fde', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?width=1080&crop=smart&auto=webp&s=b89b538d5e202224e6503153d22e7c7853a5df50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/O7iRdpSAOTXAp8ugLZRXApZZDYusuN_fEGs3eP-yzAo.jpg?auto=webp&s=60c22fbc5a9131a09a2afa0d3c9d81aab796f54f', 'width': 1200}, 'variants': {}}]}
|
|
Latest ExecuTorch release includes windows support, packages for iOS and Android and a number of new models
| 14 |
[ExecuTorch](https://github.com/pytorch/executorch) still appears to have the best performance on mobile, and today's release comes with drop-in packages for [iOS](https://pytorch.org/executorch/main/using-executorch-ios.html#integration) and [Android](https://pytorch.org/executorch/0.6/using-executorch-android.html#using-aar-from-maven-central).
Also includes Phi-4, Qwen 2.5 and SmolLM2
| 2025-04-25T19:00:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7sxko/latest_executorch_release_includes_windows/
|
Vegetable_Sun_9225
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7sxko
| false | null |
t3_1k7sxko
|
/r/LocalLLaMA/comments/1k7sxko/latest_executorch_release_includes_windows/
| false | false |
self
| 14 |
{'enabled': False, 'images': [{'id': 'lLVZbgIjAIsypqhFrx11OA6YmXU0MMMr1ji-re1jB-E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=108&crop=smart&auto=webp&s=449413f4892fc02d1bed5f0221c48ca9e81b8157', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=216&crop=smart&auto=webp&s=0aa70f1942dadcbfb4d6f25d6fb7a5e6e2721c49', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=320&crop=smart&auto=webp&s=03d55843f166def49c8c15eadaf5ad997366a577', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=640&crop=smart&auto=webp&s=4b707a57849d145c5788623bf4a9d0f3379226d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=960&crop=smart&auto=webp&s=c8671e99912a68fb5fadaec3c501c9dc41479a50', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?width=1080&crop=smart&auto=webp&s=0b38527d5a218ed87dc72f16665abae8c37b14eb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3eNMZqNutNuRVabpve4HKaDez7UHNCn2ARrR0Ss9B7I.jpg?auto=webp&s=a4069f754bf0ae465f9ae884e46906f82b9dc496', 'width': 1200}, 'variants': {}}]}
|
Any possibility for Small size models of Llama 3.3 & 4 in future?
| 24 |
I'm part of the No/Poor GPU club. My old laptop doesn't have a GPU at all. My friend's laptop has 8GB VRAM; from time to time I use his laptop just for LLM stuff.
I used small models up through version 3.2. Then both later versions came only with large models. (Frankly, I expected 10-15B models from the 3.3 or 4 releases.)
I know Meta won't touch version 3.3 anymore and likely won't release a small model for version 4 either. I don't think we'll get small models from Meta in the future.
So is there any possibility of small models derived from the 3.3 or 4 series some other way? Hopefully someday some legends do this and upload small models to HuggingFace.
|Llama|Parameters|
|:-|:-|
|**Llama 3**|**8B** 70.6B|
|**Llama 3.1**|**8B** 70.6B 405B|
|**Llama 3.2**|**1B 3B 11B** 90B|
|Llama 3.3|70B|
|Llama 4|109B 400B 2T|
Thanks.
| 2025-04-25T19:03:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7t089/any_possibility_for_small_size_models_of_llama_33/
|
pmttyji
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7t089
| false | null |
t3_1k7t089
|
/r/LocalLLaMA/comments/1k7t089/any_possibility_for_small_size_models_of_llama_33/
| false | false |
self
| 24 | null |
Deepseek r2 when?
| 94 |
I hope it comes out this month. I saw a post that said it was gonna come out before May...
| 2025-04-25T19:10:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7t6dm/deepseek_r2_when/
|
power97992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7t6dm
| false | null |
t3_1k7t6dm
|
/r/LocalLLaMA/comments/1k7t6dm/deepseek_r2_when/
| false | false |
self
| 94 | null |
GLM-4-9B(Q5_K_L) Heptagon Balls sim (multi-prompt)
| 94 |
Title pretty much says it but just to clarify - it wasn't one-shot. It was prompt->response->error, then this:
Here is an error after running the sim:
<error>
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\username\anaconda3\Lib\tkinter\__init__.py", line 1967, in __call__
    return self.func(*args)
           ^^^^^^^^^^^^^^^^
  File "C:\Users\username\anaconda3\Lib\tkinter\__init__.py", line 861, in callit
    func(*args)
  File "c:\Users\username\VSCodeProjects\model_tests\balls\GLM49B_Q5KL_balls.py", line 140, in update
    current_time_ms = float(current_time)
                      ^^^^^^^^^^^^^^^^^^^
ValueError: could not convert string to float: 'after#2'
</error>
Now think as hard as you can about why this is happening. Look at the entire script and consider how the parts work together. You are free to think as long as you need if you use thinking tags like this:
<think>thoughts here</think>.
Once finished thinking, just provide the patch to the code. No need to rewrite it all.
Then I applied the fix, got another error, replaced the original Assistant code block with the new code, and presented the new error as if it were the first one by editing my message. I think that resulted in the working version.
So TL;DR - couple of prompts to get it working.
Simply pasting error after error did not work, but structured prompting with a bit of thinking seems to bring out some more potential.
Just thought I'd share in case it helps people with prompting, and to show that it is not a bad model for its size. The result is very similar to the 32B version.
| 2025-04-25T19:22:16 |
https://v.redd.it/zrjvo8ve71xe1
|
danihend
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7tg8n
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/zrjvo8ve71xe1/DASHPlaylist.mpd?a=1748200949%2CNjkxNWQyYjBkYzdkMWM4ZDY1ZTVhZDE3NTc5OTI5ZmM1ZGRiNGM4OWMxMzI1N2ZkOWE5OWM1M2EwMzkyMjE3NA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/zrjvo8ve71xe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/zrjvo8ve71xe1/HLSPlaylist.m3u8?a=1748200949%2CYmIxNTY5NDc1MGI4MjI1OTBiOTE5NDM5OWU5ODRiOWM2NTNmOGI1MjBiYWIzZjIyMWUzYzZlOGYxZGY2NWFlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zrjvo8ve71xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1k7tg8n
|
/r/LocalLLaMA/comments/1k7tg8n/glm49bq5_k_l_heptagon_balls_sim_multiprompt/
| false | false | 94 |
{'enabled': False, 'images': [{'id': 'M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a7e9829385a0e4e08f2e6728a4ebe67c97eb2b9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=216&crop=smart&format=pjpg&auto=webp&s=40b2df0ce2ec42f41e083360a13b9337c1a6687d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=320&crop=smart&format=pjpg&auto=webp&s=bb908b876cd5d4542a6a6af2639f4c5d22d58e23', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=640&crop=smart&format=pjpg&auto=webp&s=5df806e1625ed0b36a23c059502b00dd32fd0e5e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=960&crop=smart&format=pjpg&auto=webp&s=a5a5741e2915673a69e125f7909b3bf0e541af74', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=075cad9754b1c8d183f0a3aa1c061c115cb293a6', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/M3Z4eDhhdmU3MXhlMYg3hh2y6NN7WC_nAJWjhF3jltCetUE7ORI41iUNIAJC.png?format=pjpg&auto=webp&s=a5ddf8495ba8ad5781b118d0efd31389db1833ad', 'width': 1280}, 'variants': {}}]}
|
|
Up to date guides to build llama.cpp on Windows with AMD GPUs?
| 5 |
The more detailed it is, the better.
| 2025-04-25T19:35:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7trav/up_to_date_guides_to_build_llamacpp_on_windows/
|
Chimpampin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7trav
| false | null |
t3_1k7trav
|
/r/LocalLLaMA/comments/1k7trav/up_to_date_guides_to_build_llamacpp_on_windows/
| false | false |
self
| 5 | null |
Trained the tiny stories dataset on a 12M parameter model.
| 61 |
Trained a 12M Parameter model on the tiny stories dataset.
**GPU used is an Nvidia 4080**
[https://huggingface.co/datasets/roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
I played some video games while it was running off and on, so it probably would've finished a bit earlier otherwise, around 45 hours or so.
I think for smaller models, if you go past the Chinchilla scaling law of using ~20 tokens per parameter, you can still see improvements. This effect becomes smaller and smaller as the model is scaled up, I believe.
(Though maybe bigger models would actually benefit too, but the compute becomes ridiculous and the gains might be much lower than for smaller models.)
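As a quick sanity check on the scaling-law point above (a sketch using the usual "~20 training tokens per parameter" rule of thumb, which is a heuristic, not an exact prescription):

```python
# Rough Chinchilla-style token budget for a 12M-parameter model.
params = 12_000_000
chinchilla_tokens = 20 * params
print(f"Chinchilla-optimal budget: ~{chinchilla_tokens / 1e6:.0f}M tokens")  # ~240M tokens
```

Anything trained well past that budget is in the "over-trained" regime the paragraph above is describing.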
P.S. The stories aren't the best (lol), but they are pretty coherent.
Configuration info below.
config = LlamaConfig(
    vocab_size=vocab_size,
    hidden_size=384,
    intermediate_size=768,
    num_hidden_layers=8,
    num_attention_heads=8,
    max_position_embeddings=6000,
    rms_norm_eps=1e-5,
    initializer_range=0.02,
    use_cache=True,
    tie_word_embeddings=False,
    attention_dropout=0.1,
    hidden_dropout=0.1,
)
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=False,
    num_train_epochs=1,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,
    save_strategy="steps",  # Use steps for saving
    save_steps=5000,
    logging_strategy="steps",  # Use steps for logging
    logging_steps=100,  # Log training loss frequently for the scheduler
    save_total_limit=10,
    prediction_loss_only=True,  # Often True for Causal LM if not evaluating metrics like perplexity
    learning_rate=0.0008,  # Initial learning rate for AdamW
    weight_decay=0.05,
    fp16=True,
    gradient_checkpointing=True,
    max_grad_norm=1.0,
    # Evaluation settings (important if using eval_loss with scheduler later)
    evaluation_strategy="steps" if not disable_eval else "no",
    eval_steps=5000 if not disable_eval else None,
    report_to="wandb",  # Log to W&B
)
=====================================================================
Training stats below.
{'train_runtime': 180146.524, 'train_samples_per_second': 35.091, 'train_steps_per_second': 4.386, 'train_loss': 0.23441845736255604, 'epoch': 3.0}
100%|████████████████████████████████████████| 790191/790191 [50:02:26<00:00, 4.39it/s]
2025-04-25 13:32:42,894 - INFO - Saving final model and training state...
***** train metrics *****
epoch = 3.0
total_flos = 711039651GF
train_loss = 0.2344
train_runtime = 2 days, 2:02:26.52
train_samples_per_second = 35.091
train_steps_per_second = 4.386
2025-04-25 13:32:43,067 - INFO - Training completed successfully!
2025-04-25 13:32:43,068 - INFO - Final model saved to: ./llama_model_test\final
=====================================================================
wandb: Run summary:
wandb: eval/loss 0.19124
wandb: eval/runtime 47.0576
wandb: eval/samples_per_second 225.022
wandb: eval/steps_per_second 28.136
wandb: lr 0.0
wandb: total_flos 7.634730128676549e+17
wandb: train/epoch 3
wandb: train/global_step 790191
wandb: train/grad_norm 0.22934
wandb: train/learning_rate 0.0
wandb: train/loss 0.1965
wandb: train_loss 0.23442
wandb: train_runtime 180146.524
wandb: train_samples_per_second 35.091
wandb: train_steps_per_second 4.386
| 2025-04-25T20:02:05 |
Slaghton
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ue47
| false | null |
t3_1k7ue47
|
/r/LocalLLaMA/comments/1k7ue47/trained_the_tiny_stories_dataset_on_a_12m/
| false | false | 61 |
{'enabled': True, 'images': [{'id': 'sULsBdS6s1sxP7NXfpRXqmPRQWmVLmF18hu_RXzNzXU', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?width=108&crop=smart&auto=webp&s=0fcd30150590c025b186a60b95107099c9b16d93', 'width': 108}, {'height': 303, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?width=216&crop=smart&auto=webp&s=7e0339ec19e7a1e8761988686eef8feca7c1de03', 'width': 216}, {'height': 449, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?width=320&crop=smart&auto=webp&s=093be7865688a784c4f66a7c342355af3cca76ce', 'width': 320}, {'height': 898, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?width=640&crop=smart&auto=webp&s=fdcb4160b1d2416d1a24263a7cd97dc785946e9f', 'width': 640}, {'height': 1347, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?width=960&crop=smart&auto=webp&s=28c3e4465fd301726c7cefebbc58814d3b49b5e5', 'width': 960}], 'source': {'height': 1371, 'url': 'https://preview.redd.it/qnx9gqc671xe1.png?auto=webp&s=cbcb1df18e1cf147369f9e66807a072baf7f2ecb', 'width': 977}, 'variants': {}}]}
|
||
What model do you use for ERP these days (max 12b please)?
| 4 |
I've been out of the LLM scene for almost a year and I don't know what's new now. Too many models; I don't have time to check every one of them.
Is Stheno v3.2 still the king of ERP?
Thanks in advance.
| 2025-04-25T20:05:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7ugz6/what_model_do_you_use_for_erp_these_days_max_12b/
|
pumukidelfuturo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ugz6
| false | null |
t3_1k7ugz6
|
/r/LocalLLaMA/comments/1k7ugz6/what_model_do_you_use_for_erp_these_days_max_12b/
| false | false |
self
| 4 | null |
Local Copilot Vision alternatives?
| 3 |
I would personally love to have a built-in assistant on Windows, THAT RAN LOCALLY, to analyze what's on the screen and help me do tasks in Blender, Photoshop, Unreal Engine, etc.
Microsoft calls theirs Copilot Vision. It's not out yet but is in testing.
Is there anything like this being working on for a local model?
| 2025-04-25T20:09:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7uk33/local_copilot_vision_alternatives/
|
BenefitOfTheDoubt_01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7uk33
| false | null |
t3_1k7uk33
|
/r/LocalLLaMA/comments/1k7uk33/local_copilot_vision_alternatives/
| false | false |
self
| 3 | null |
Shrink LLMs by 30% with ZERO accuracy loss via Dynamic-Length Float
| 1 |
[deleted]
| 2025-04-25T20:13:50 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7uo3e
| false | null |
t3_1k7uo3e
|
/r/LocalLLaMA/comments/1k7uo3e/shrink_llms_by_30_with_zero_accuracy_loss_via/
| false | false |
default
| 1 | null |
||
Fingers crossed 🤞🏼
| 0 | 2025-04-25T20:16:41 |
Sea_Sympathy_495
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7uqje
| false | null |
t3_1k7uqje
|
/r/LocalLLaMA/comments/1k7uqje/fingers_crossed/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'wLw4pCqYEiXtU6fh5jnBaOGz6aaF1WgtZTzp4bmj7jc', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/qq4d4mx6h1xe1.png?width=108&crop=smart&auto=webp&s=25ca297bd35376bc6c1ab7d7746ca9d92466d671', 'width': 108}, {'height': 245, 'url': 'https://preview.redd.it/qq4d4mx6h1xe1.png?width=216&crop=smart&auto=webp&s=a43bc27cddcde4dd27146b2866f401a18924318c', 'width': 216}, {'height': 364, 'url': 'https://preview.redd.it/qq4d4mx6h1xe1.png?width=320&crop=smart&auto=webp&s=8f9da53fbeafc4aee7bc510d1cbe155be77dcdd3', 'width': 320}, {'height': 728, 'url': 'https://preview.redd.it/qq4d4mx6h1xe1.png?width=640&crop=smart&auto=webp&s=c20c03376af1bc893ef91987b9db52640ea2a369', 'width': 640}], 'source': {'height': 850, 'url': 'https://preview.redd.it/qq4d4mx6h1xe1.png?auto=webp&s=32ac886e1c6d0ff70440409083619bfc0005cc60', 'width': 747}, 'variants': {}}]}
|
|||
Qwen introduces their mobile app
| 110 | 2025-04-25T20:22:54 |
Vegetable-Practice85
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7uvpm
| false | null |
t3_1k7uvpm
|
/r/LocalLLaMA/comments/1k7uvpm/qwen_introduces_their_mobile_app/
| false | false |
default
| 110 |
{'enabled': True, 'images': [{'id': 'ewjq8s2ei1xe1', 'resolutions': [{'height': 189, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?width=108&crop=smart&auto=webp&s=f94a4fab4bc853d52418f9b6190d69c25e5d8a83', 'width': 108}, {'height': 379, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?width=216&crop=smart&auto=webp&s=2c5a61051064c78ca4e2e728152c833a519033e6', 'width': 216}, {'height': 561, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?width=320&crop=smart&auto=webp&s=8e882fc62d02b956d52b4ae572c56c5e000bc673', 'width': 320}, {'height': 1123, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?width=640&crop=smart&auto=webp&s=1fe3b4f5cf5c69932dece355c02addb1e439cdd0', 'width': 640}, {'height': 1685, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?width=960&crop=smart&auto=webp&s=a4197edec76b9ad6e3f7986adb8658e2ed88645c', 'width': 960}], 'source': {'height': 1768, 'url': 'https://preview.redd.it/ewjq8s2ei1xe1.jpeg?auto=webp&s=2bdc1279dab8abb07b1bde23121410094190f233', 'width': 1007}, 'variants': {}}]}
|
||
LM Studio 0.3.15 with support for GLM-4 models and NVIDIA RTX50-series just got released
| 90 | 2025-04-25T20:25:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7uxxk/lm_studio_0315_with_support_for_glm4_models_and/
|
ispolin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7uxxk
| false | null |
t3_1k7uxxk
|
/r/LocalLLaMA/comments/1k7uxxk/lm_studio_0315_with_support_for_glm4_models_and/
| false | false | 90 | null |
||
New iterative LM Studio Agent - Any Improvement Ideas?
| 1 |
[removed]
| 2025-04-25T21:36:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7wm1l/new_iterative_lm_studio_agent_any_improvement/
|
Carlo_von_Terragon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7wm1l
| false | null |
t3_1k7wm1l
|
/r/LocalLLaMA/comments/1k7wm1l/new_iterative_lm_studio_agent_any_improvement/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'H6lW_x5bvKMLm2e-D5cLCMPD6-R77N1wbm8ExOu90Lw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/JWHajvhts2IeY8-de5iXCY3Ct1-5ak8r14p91GjGnqM.jpg?width=108&crop=smart&auto=webp&s=31df64c21fb8e422221a49e6a801c16929c83c99', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/JWHajvhts2IeY8-de5iXCY3Ct1-5ak8r14p91GjGnqM.jpg?width=216&crop=smart&auto=webp&s=911b9df1b8a2dd075b5da432549becd986b45c0d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/JWHajvhts2IeY8-de5iXCY3Ct1-5ak8r14p91GjGnqM.jpg?width=320&crop=smart&auto=webp&s=235ea57e1af0e81d618d2c2838b31fe1bf725c90', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/JWHajvhts2IeY8-de5iXCY3Ct1-5ak8r14p91GjGnqM.jpg?auto=webp&s=5906f7aea0bd11b2fea258bf47e243de8f8c99ed', 'width': 480}, 'variants': {}}]}
|
Gemini 2.5 Pro Preview (free) gone on open router?
| 0 |
I noticed I can't find Gemini 2.5 Pro (free) on OpenRouter anymore, and on my AI Studio account the quota for 2.5 Pro is also gone. Did they make it paid-only now?
| 2025-04-25T21:39:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7wo7c/gemini_25_pro_preview_free_gone_on_open_router/
|
deathcom65
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7wo7c
| false | null |
t3_1k7wo7c
|
/r/LocalLLaMA/comments/1k7wo7c/gemini_25_pro_preview_free_gone_on_open_router/
| false | false |
self
| 0 | null |
Cheapest build for 4 x PCI 3.0 and 1TB RAM?
| 8 |
What are the best options here? I am considering buying 4 x 3090s power-limited to 250W each, on a mobo with up to 1TB RAM, for running DeepSeek in memory, Stable Diffusion Flux, and whatever else... having this setup seems financially achievable, and the power draw should stay below 1600W. Any suggestions? Thanks!
| 2025-04-25T21:56:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7x201/cheapest_build_for_4_x_pci_30_and_1tb_ram/
|
wawawawatikkatikkati
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7x201
| false | null |
t3_1k7x201
|
/r/LocalLLaMA/comments/1k7x201/cheapest_build_for_4_x_pci_30_and_1tb_ram/
| false | false |
self
| 8 | null |
Custom RAG vs premade
| 1 |
[removed]
| 2025-04-25T22:03:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7x7h1/custom_rag_vs_premade/
|
disinton
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7x7h1
| false | null |
t3_1k7x7h1
|
/r/LocalLLaMA/comments/1k7x7h1/custom_rag_vs_premade/
| false | false |
self
| 1 | null |
Custom RAG vs Premade
| 1 |
[removed]
| 2025-04-25T22:13:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7xfhn/custom_rag_vs_premade/
|
disinton
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7xfhn
| false | null |
t3_1k7xfhn
|
/r/LocalLLaMA/comments/1k7xfhn/custom_rag_vs_premade/
| false | false |
self
| 1 | null |
Maverick faster than Scout?!
| 11 |
The other day I was messing around with partial offload on Llama 4.
I noticed that I got higher speeds on Maverick vs Scout, but figured I had a setting messed up and didn't think anything of it.
Today I'm sitting here and realize that might actually be normal...
Scout is 109B total, 17B active per token and 16 experts:
Works out to about 6B per MOE expert and an 11B shared expert
Maverick is 400B total, 17B active per token and 128 experts
Works out to about 3B per MOE expert and a 14B shared expert
So with a typical GPU that can fully offload the 14B shared expert,
your CPU on Maverick is doing about half the work per token vs Scout.
Does this math check out?
Anyone else noticed Maverick was actually faster than Scout in a GPU + CPU setup?
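For what it's worth, the arithmetic above can be sanity-checked directly (a rough sketch assuming the active parameters decompose into one shared expert plus one routed expert per token; real Llama 4 routing details may differ):

```python
# Rough per-expert size check (all sizes in billions of parameters).
# Assumption: active params = shared expert + one routed expert.

def moe_split(total_b, active_b, n_experts, shared_b):
    routed_b = active_b - shared_b                    # routed expert size
    reconstructed_b = shared_b + n_experts * routed_b  # rebuild total from parts
    return routed_b, reconstructed_b

print(moe_split(109, 17, 16, 11))    # Scout:    (6, 107)  -- close to the quoted 109B
print(moe_split(400, 17, 128, 14))   # Maverick: (3, 398)  -- close to the quoted 400B
```

With the shared expert on the GPU, the CPU would handle roughly 6B of weights per token for Scout but only about 3B for Maverick, which matches the "half the work" observation.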
| 2025-04-25T22:49:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7y7jf/maverick_faster_than_scout/
|
Conscious_Cut_6144
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7y7jf
| false | null |
t3_1k7y7jf
|
/r/LocalLLaMA/comments/1k7y7jf/maverick_faster_than_scout/
| false | false |
self
| 11 | null |
What do you think makes a good creative writing model?
| 9 |
Please be specific, stuff like "just write good no slop lol" is not very specific.
For example, what abilities would you like the LLM to have? How does your workflow usually look?
| 2025-04-25T23:33:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7z4jp/what_do_you_think_makes_a_good_creative_writing/
|
Sicarius_The_First
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7z4jp
| false | null |
t3_1k7z4jp
|
/r/LocalLLaMA/comments/1k7z4jp/what_do_you_think_makes_a_good_creative_writing/
| false | false |
self
| 9 | null |
Effects of quantisation of task-specific downstream tasks
| 9 |
I did some experimentation for a project I'm doing on quantisation and fine-tuning. I wanted a way of doing news significance scoring similar to what [newsminimalist.com](http://newsminimalist.com) does, so I fine-tuned the Llama 3.2 1B parameter model to score the significance of news articles. The prompt contains some guidelines on how to score significance, some examples, then an injected full news article. You could do this for any article or piece of text. I tested the model's performance and memory usage across `BF16, INT8, INT4`.
I wanted to share my findings with people here
https://preview.redd.it/5kwv0u7ak2xe1.png?width=520&format=png&auto=webp&s=f38b3a9dbe9f354898711e8e0129c6ae4efed246
Notably, the scoring performance of the INT4 model compared to BF16 was very similar on my validation sets. It failed to produce structured output once, but every other time the results were exactly the same.
https://preview.redd.it/l5fjsvamk2xe1.png?width=375&format=png&auto=webp&s=892f403287a7c19e1fff700c17182280bc8f84d3
GT being the ground truth.
Let me know what you guys think
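As a rough cross-check of the memory side, weight-only footprint scales linearly with bit width. A minimal sketch (the ~1.24B parameter count for Llama 3.2 1B is an approximation, and real quantized checkpoints carry a few percent of extra metadata):

```python
def weight_bytes(params, bits):
    # Weight-only footprint; ignores KV cache, activations, and
    # quantization metadata (scales / zero-points).
    return params * bits / 8

params = 1_240_000_000  # approx. parameter count of Llama 3.2 1B
for name, bits in [("BF16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_bytes(params, bits) / 1e9:.2f} GB")
```

This lines up with INT4 needing roughly a quarter of the BF16 footprint, which is why the accuracy parity above is notable.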
| 2025-04-25T23:58:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7zmfm/effects_of_quantisation_of_taskspecific/
|
mayodoctur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7zmfm
| false | null |
t3_1k7zmfm
|
/r/LocalLLaMA/comments/1k7zmfm/effects_of_quantisation_of_taskspecific/
| false | false | 9 | null |
|
It's been a while since we had new Qwen & Qwen Coder models...
| 125 |
Just saying...
In all seriousness if they need to cook further - let them cook.
| 2025-04-26T00:18:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k801ba/its_been_a_while_since_we_had_new_qwen_qwen_coder/
|
sammcj
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k801ba
| false | null |
t3_1k801ba
|
/r/LocalLLaMA/comments/1k801ba/its_been_a_while_since_we_had_new_qwen_qwen_coder/
| false | false |
self
| 125 | null |
Cannot get OLLAMA running without having to login into my Mac
| 1 |
[removed]
| 2025-04-26T00:19:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1k801zr/cannot_get_ollama_running_without_having_to_login/
|
technofox01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k801zr
| false | null |
t3_1k801zr
|
/r/LocalLLaMA/comments/1k801zr/cannot_get_ollama_running_without_having_to_login/
| false | false |
self
| 1 | null |
Quantization + Distillation Best Practices?
| 10 |
I'm looking into integrating LLMs with video games, but there are some real practical problems:
1. I found that using a 5-bit quant of Llama 3.2 3B worked decently for most use cases (even without a LoRA), but it ate roughly 3 gigs of VRAM. That's a lot for a game subsystem, and lower quants didn't seem to do well.
2. Generation speed is a major issue if you use it for anything besides chat. The Vulkan backend to llama.cpp doesn't handle multiple execution threads and was the only portable one. The newish dynamic backend might help (supports CUDA and AMD), but the AMD one usually has to target a specific chipset...
I keep seeing awesome reports about super-high-quality quants, some of which require post-quant training and some of which supposedly support ludicrous inference speeds on CPU (BitNets, anyone?). I mostly care about performance on a narrow subset of tasks (sometimes dynamically switching LoRAs).
Does anyone know of decent guides on using these more advanced quant methods (with or without post-quant training) and producing a llama.cpp-compatible GGUF at the end?
On a related note, are there any good guides/toolkits for distilling a bigger model into a smaller one? Is "make a text dataset and train on it" the only mainstream supported mode? I would think that training on the entire token output distribution would give a much richer gradient signal.
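For concreteness, "training on the full token distribution" usually means a soft-label KL distillation loss over the teacher's whole output distribution rather than cross-entropy on sampled text. A toy, framework-free sketch of that loss (the temperature value and the T² scaling follow the classic knowledge-distillation convention; this is an illustration, not any particular toolkit's API):

```python
import math

def softmax(logits, T=1.0):
    # Numerically stable temperature-scaled softmax.
    m = max(x / T for x in logits)
    exps = [math.exp(x / T - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # Forward KL(teacher || student) over the whole vocabulary,
    # scaled by T^2 as in classic knowledge distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(distill_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0 when student matches teacher
```

Every vocabulary entry contributes a gradient term here, which is the "richer signal" compared to a one-hot target from sampled text.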
| 2025-04-26T02:28:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k82h2k/quantization_distillation_best_practices/
|
charlesrwest0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k82h2k
| false | null |
t3_1k82h2k
|
/r/LocalLLaMA/comments/1k82h2k/quantization_distillation_best_practices/
| false | false |
self
| 10 | null |
The Qwen 3 series AI models will be released soon.
| 1 |
[removed]
| 2025-04-26T03:04:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8350h/the_qwen_3_series_ai_models_will_be_released_soon/
|
Consistent-Sugar8531
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8350h
| false | null |
t3_1k8350h
|
/r/LocalLLaMA/comments/1k8350h/the_qwen_3_series_ai_models_will_be_released_soon/
| false | false | 1 | null |
|
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
| 23 |
Source: [https://arxiv.org/abs/2504.13837](https://arxiv.org/abs/2504.13837)

Recent breakthroughs in reasoning-focused large language models (LLMs) like OpenAI-o1, DeepSeek-R1, and Kimi-1.5 have largely relied onย *Reinforcement Learning with Verifiable Rewards*ย (RLVR), which replaces human annotations with automated rewards (e.g., verified math solutions or passing code tests) to scale self-improvement. While RLVR enhances reasoning behaviors such as self-reflection and iterative refinement, we challenge a core assumption:
***Does RLVR actually expand LLMs' reasoning capabilities, or does it merely optimize existing ones?***
By evaluating models viaย *pass@k*, where success requires just one correct solution amongย *k*ย attempts, we uncover that RL-trained models excel at lowย *k*ย (e.g., pass@1) but are consistentlyย *outperformed by base models*ย at highย *k*ย (e.g., pass@256). This demonstrates that RLVRย *narrows the model's exploration*, favoring known high-reward paths instead of discovering new reasoning strategies. Crucially, all correct solutions from RL-trained models already exist in the base model's distribution, proving RLVR enhancesย *sampling efficiency*, not reasoning capacity, while inadvertently shrinking the solution space.
[The effect of RLVR on LLM's reasoning ability. Search trees are generated by repeated sampling from the base and RLVR-trained models for a given problem. Grey indicates paths that are unlikely to be sampled by the model, while black indicates paths that are likely to be sampled. Green indicates correct paths, which has positive rewards. Our key finding is that all reasoning paths in the RLVR model are already present in the base model. For certain problems like Problem A, RLVR training biases the distribution toward rewarded paths, improving sampling efficiency. However, this comes at the cost of reduced scope of reasoning capacity: For other problems like Problem B, the base model contains the correct path, whereas that of the RLVR model does not.](https://reddit.com/link/1k83moy/video/sb8m5ckim3xe1/player)
# Conclusion
1. **RL-trained models perform worse than base models in pass@*****k***ย **at large k values.** While RL-trained models outperform base models at low sampling sizes (smallย *k*), base models consistently surpass them at largerย *k*ย across all benchmarks, even achieving higher pass@*k*ย scores. Manual inspection reveals that base models can solve problems thought to require RL training by generating diverse reasoning paths, with at least one correct solution per problem. This indicates that RL training does not enhanceโand may even limitโthe full reasoning potential of LLMs compared to aggressive sampling in the base model.
2. **RL boosts sampling efficiency but reduces the reasoning capacity boundary.** The analysis reveals that RLVR-trained models generate reasoning paths already within the base model's output distribution, meaning RLVR biases the model toward higher-rewarded solutions rather than creating entirely new reasoning abilities. However, this focus on rewarded paths reduces the model's exploration capacity, limiting its coverage of solvable problems at larger sampling sizes. These findings suggest that RLVR does not fundamentally transcend the base model's reasoning capabilities but instead optimizes existing pathways at the cost of broader problem-solving diversity.
3. **RLVR algorithms perform similarly and remain far from optimal.** The study compares various RL algorithms (PPO, GRPO, Reinforce++) and finds their performance differences minor, as measured by the sampling efficiency gap (ΔSE), which assesses how close they get to optimal sampling efficiency. Despite slight variations in ΔSE among algorithms, the gap remains large across all methods. This indicates that current RL approaches, focused on improving sampling efficiency, still fall far short of optimal performance.
4. **RLVR and distillation are fundamentally different.** While RL improves sampling efficiency, distillation can genuinely introduce new knowledge into the model. As a result, distilled models often exhibit an expanded scope of reasoning capability beyond that of the base model by learning from a stronger teacher, in contrast to RLVR-trained models whose capacity remains bounded by the base.
# Q&A
Recent breakthroughs in reasoning-focused large language models (LLMs) like OpenAI-o1, DeepSeek-R1, and Kimi-1.5 have largely relied on *Reinforcement Learning with Verifiable Rewards* (RLVR), which replaces human annotations with automated rewards (e.g., verified math solutions or passing code tests) to scale self-improvement. While RLVR enhances reasoning behaviors such as self-reflection and iterative refinement, we challenge a core assumption:
***Does RLVR actually expand LLMs' reasoning capabilities, or does it merely optimize existing ones?***
@article{yue2025limit-of-rlvr,
title={Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?},
author={Yue, Yang and Chen, Zhiqi and Lu, Rui and Zhao, Andrew and Wang, Zhaokai and Yue, Yang and Song, Shiji and Huang, Gao},
journal={arXiv preprint arXiv:2504.13837},
year={2025}
}
| 2025-04-26T03:31:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k83moy/does_reinforcement_learning_really_incentivize/
|
ninjasaid13
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k83moy
| false | null |
t3_1k83moy
|
/r/LocalLLaMA/comments/1k83moy/does_reinforcement_learning_really_incentivize/
| false | false |
self
| 23 | null |
Hardware question for general AI/LLM. Would running 2x 5070 Ti 16GB on pcie5 x8 (versus x16) slow things down a lot?
| 2 |
So I am struggling to build a simple system to hold 2x 5070 Ti 16GB cards, as none of the modern consumer CPUs have enough PCIe 5 lanes to run both cards at x16.
Since these cards run at PCIe 5, and I've heard that PCIe 4 x16 causes at most a ~1% reduction in speed, does it make sense that PCIe 5 x8 should work just fine?
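The back-of-the-envelope numbers support that intuition: per-direction raw bandwidth is roughly 1.97 GB/s per lane for PCIe 4.0 and 3.94 GB/s per lane for PCIe 5.0, so PCIe 5.0 x8 matches PCIe 4.0 x16 (a sketch; real-world throughput is somewhat lower after protocol overhead):

```python
# Approximate per-lane, per-direction bandwidth in GB/s (after encoding).
PER_LANE_GBPS = {"pcie3": 0.985, "pcie4": 1.969, "pcie5": 3.938}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Raw per-direction link bandwidth for a given PCIe generation/width."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth("pcie5", 8))   # ~31.5 GB/s
print(link_bandwidth("pcie4", 16))  # ~31.5 GB/s -- same as PCIe 5.0 x8
print(link_bandwidth("pcie5", 16))  # ~63.0 GB/s
```

For inference the link is mostly exercised at model-load time (per-token traffic between GPUs is comparatively small), so x8 per card is generally considered fine for two-GPU setups.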
Any thoughts?
Thanks!!
| 2025-04-26T03:47:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k83wx1/hardware_question_for_general_aillm_would_running/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k83wx1
| false | null |
t3_1k83wx1
|
/r/LocalLLaMA/comments/1k83wx1/hardware_question_for_general_aillm_would_running/
| false | false |
self
| 2 | null |
Any Local AI interfaces with a mobile app?
| 4 |
I'm currently using Open WebUI for the frontend to my local AI but I'm wondering if there are any alternatives that may offer a mobile app. I know I can "install" the web app onto the phone but it's not really the same experience.
I'm interested in finding a mobile app for my local AI since I regularly find myself using the chatgpt or claude app to start a chat when I get an idea almost like taking notes.
| 2025-04-26T04:04:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k847kk/any_local_ai_interfaces_with_a_mobile_app/
|
C_Coffie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k847kk
| false | null |
t3_1k847kk
|
/r/LocalLLaMA/comments/1k847kk/any_local_ai_interfaces_with_a_mobile_app/
| false | false |
self
| 4 | null |
Anyone else trying to grow a brand/channel around localllama models?
| 1 |
[removed]
| 2025-04-26T04:43:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k84usp/anyone_else_trying_to_grow_a_brandchannel_around/
|
paswut
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k84usp
| false | null |
t3_1k84usp
|
/r/LocalLLaMA/comments/1k84usp/anyone_else_trying_to_grow_a_brandchannel_around/
| false | false |
self
| 1 | null |
5tps with Llama 4 Scout via Ollama and Unsloth dynamic quants, CPU only
| 20 |
I noticed that the llama 4 branch was just merged into ollama main, so I updated ollama and grabbed the 2.71 bit unsloth dynamic quant:
> ollama run --verbose hf.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF:Q2_K_XL
It works!
```
total duration: 2m7.090132071s
load duration: 45.646389ms
prompt eval count: 91 token(s)
prompt eval duration: 4.847635243s
prompt eval rate: 18.77 tokens/s
eval count: 584 token(s)
eval duration: 2m2.195920773s
eval rate: 4.78 tokens/s
```
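The reported rates are just token counts divided by durations; a quick sanity check of the numbers above:

```python
# Rates from the verbose output: tokens / seconds.
prompt_rate = 91 / 4.847635243               # prompt eval duration: 4.847635243s
eval_rate = 584 / (2 * 60 + 2.195920773)     # eval duration: "2m2.195920773s"

print(round(prompt_rate, 2))  # 18.77
print(round(eval_rate, 2))    # 4.78
```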
42GB is the size of the model, and it is much faster (of course) than an equivalent 70B Q4 that is also 42GB on disk.
CPU is Ryzen 7, 64GB
Feels lightning fast for CPU only compared to even 27-32B models.
First test questions worked great as well.
Looking forward to using this; I've been hoping for a large MoE with small experts for a while, very excited.
| 2025-04-26T05:26:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k85izg/5tps_with_llama_4_scout_via_ollama_and_unsloth/
|
RobotRobotWhatDoUSee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k85izg
| false | null |
t3_1k85izg
|
/r/LocalLLaMA/comments/1k85izg/5tps_with_llama_4_scout_via_ollama_and_unsloth/
| false | false |
self
| 20 |
{'enabled': False, 'images': [{'id': '3AmxYHMwAdiM2hDtWVZPEWZvg2_Z102fQxSK5X1fajc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=108&crop=smart&auto=webp&s=5363f3583fcc56f3337d645fddb3611d2d133aaa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=216&crop=smart&auto=webp&s=ac145604c03b19a0eb12d64e2ff6c1cadab1c1cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=320&crop=smart&auto=webp&s=6115798948e91a5f3e0274113022d2f4b8c070eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=640&crop=smart&auto=webp&s=9c31ad71ff3ca9c18c0417c100b2367b4da777fe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=960&crop=smart&auto=webp&s=5a736feb53d8ee903bbceec7083bcc2c34a40461', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?width=1080&crop=smart&auto=webp&s=a3c74e8bb87b3ec05706deb1db2f762be9b45d74', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-hrEFPsx59rVU7I89LQDIkH9H9UHvaNrXWc-3BovSyI.jpg?auto=webp&s=5205f4b198f60c8d0fbd84f25d15fc8bfc427672', 'width': 1200}, 'variants': {}}]}
|
Qwen AI - My most used LLM!
| 151 |
I use Qwen, DeepSeek, paid ChatGPT, and paid Claude. I must say, I find myself using Qwen the most often. It's great, especially for a free model!
I use all of the LLMs for general and professional work, e.g., writing, planning, management, self-help, idea generation, etc. For most of those things, I just find that Qwen produces the best results and requires the least rework, follow-ups, etc. I've tested all of the LLMs by putting in the exact same prompt (I've probably done this a couple dozen times) and overall (but not always), Qwen produces the best result for me. I absolutely can't wait until they release Qwen3 Max! I also have a feeling DeepSeek is gonna follow suit with R2...
I'd love to know which LLM you find yourself using the most, what you use them for (that makes a big difference), and why you think that one is the best.
| 2025-04-26T05:56:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8601g/qwen_ai_my_most_used_llm/
|
Glittering-Cancel-25
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8601g
| false | null |
t3_1k8601g
|
/r/LocalLLaMA/comments/1k8601g/qwen_ai_my_most_used_llm/
| false | false |
self
| 151 | null |
best offline model for summarizing large legal texts in French ?
| 3 |
Hi, title says it all. Still a bit new to the whole AI/LLM business (guess I've been living under a rock, right?).
So anyway, any recommendations for offline, locally run LLMs especially suited to summarizing official, legal texts in non-English languages, mainly French?
Running macOS on an Apple Silicon machine, so I suppose I need GGUF models, is that correct?
| 2025-04-26T06:15:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k86agt/best_offline_model_for_summarizing_large_legal/
|
greenreddits
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k86agt
| false | null |
t3_1k86agt
|
/r/LocalLLaMA/comments/1k86agt/best_offline_model_for_summarizing_large_legal/
| false | false |
self
| 3 | null |
I trained a 1M parameters GPT model from scratch, how cooked? Need help..
| 1 |
[removed]
| 2025-04-26T06:40:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k86o1y/i_trained_a_1m_parameters_gpt_model_from_scratch/
|
SrijSriv211
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k86o1y
| false | null |
t3_1k86o1y
|
/r/LocalLLaMA/comments/1k86o1y/i_trained_a_1m_parameters_gpt_model_from_scratch/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'TmAMym4fGoejYBqWUUXysYc-aleil1RiNdn1lA0e5tE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=108&crop=smart&auto=webp&s=e823d496cea983c3d1067015358827b2c36e28ef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=216&crop=smart&auto=webp&s=d9b8ab129ed74ca54c1aed7d928aef5fcd8570e2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=320&crop=smart&auto=webp&s=e128083e17ba977f1cf977b4932e1e0e35feb368', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=640&crop=smart&auto=webp&s=a04a7b8c3c91e0d21e04221641413268395b88d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=960&crop=smart&auto=webp&s=4401548a33b267cf72afd16ecf801a7e200fbf0a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?width=1080&crop=smart&auto=webp&s=48661d62ba0c7c10bc779cf824e366f532b891b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YlgJQyCkxiVIxbi4MEvb7IDPg3BzQKwEPxV3MeIh4mI.jpg?auto=webp&s=e93bf5841792a3c145eaf66b312641062cf1e036', 'width': 1200}, 'variants': {}}]}
|
|
Continual Thought
| 1 |
[removed]
| 2025-04-26T06:55:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k86vxf/continual_thought/
|
AdWinter8676
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k86vxf
| false | null |
t3_1k86vxf
|
/r/LocalLLaMA/comments/1k86vxf/continual_thought/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'rVHNb2VZvirwZG8MHlqE8EuQVtDGm_IQUUVzaIPRR4k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8q5O5Y9KEiMTQIg5SWTUcwkm5XGINrUj8mPAnlOVrlM.jpg?width=108&crop=smart&auto=webp&s=0cadc91d993cf002529ede8c1df9a664e4189e58', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/8q5O5Y9KEiMTQIg5SWTUcwkm5XGINrUj8mPAnlOVrlM.jpg?width=216&crop=smart&auto=webp&s=0fa1d323f87953b57cc45b042912f1bace173306', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/8q5O5Y9KEiMTQIg5SWTUcwkm5XGINrUj8mPAnlOVrlM.jpg?width=320&crop=smart&auto=webp&s=8d5f593490515014ba98da4a5a820a84f2b0a389', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/8q5O5Y9KEiMTQIg5SWTUcwkm5XGINrUj8mPAnlOVrlM.jpg?auto=webp&s=779ba69da6927573ce3a102402a6fa2f9400d827', 'width': 480}, 'variants': {}}]}
|
Has anyone evaluated if reasoning models are better because CoT or because theyโve been trained for longer than the base models
| 2 |
As far as I understand, the "CoT reinforcement learning" that's done to OpenAI's o1 model or DeepSeek R1, for example, works like this: the model is given a question. It produces several answers along with corresponding CoTs, in the hope that at least one of the guesses is correct. An external tool checks the answers and marks the correct one. The correct answer is used to reinforce the model's weights.
It can also be that the "question -> answer -> verification" loop is just a synthetic data generation pipeline, the data from which can be used to finetune base models without the CoT included.
For example, suppose o1 was created from 4o. What if we took the (verified) data generated during RL and used it for simple supervised fine-tuning of 4o instead?
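The "verified synthetic data" pipeline described above is essentially rejection-sampling fine-tuning: sample several candidate answers per question, keep only the ones an external checker verifies, and use those as ordinary SFT pairs. A minimal sketch with a toy model and verifier (all names are illustrative, not from any real training stack):

```python
import random

def generate(model, question: str, n: int) -> list:
    """Stand-in for sampling n candidate answers (with CoT) from the model."""
    return [model(question) for _ in range(n)]

def build_sft_data(model, questions, verify, n_samples: int = 8):
    """Keep only externally verified (question, answer) pairs for SFT."""
    data = []
    for q in questions:
        for ans in generate(model, q, n_samples):
            if verify(q, ans):
                data.append({"prompt": q, "completion": ans})
                break  # one verified answer per question is enough here
    return data

# Toy example: the "model" guesses sums (sometimes off by one),
# and the verifier checks answers exactly.
toy_model = lambda q: str(eval(q) + random.choice([0, 0, 1]))
verify = lambda q, a: a == str(eval(q))
sft = build_sft_data(toy_model, ["1+1", "2+3"], verify)
```

Fine-tuning the base model on `sft` would then be plain supervised training, with the RL machinery used only as a data filter.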
| 2025-04-26T07:10:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8748u/has_anyone_evaluated_if_reasoning_models_are/
|
grey-seagull
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8748u
| false | null |
t3_1k8748u
|
/r/LocalLLaMA/comments/1k8748u/has_anyone_evaluated_if_reasoning_models_are/
| false | false |
self
| 2 | null |
How are people converting Gemma 3 loras / models to gguf? Both latest transformers and unsloth seem to be broken for them atm.
| 4 |
[https://github.com/huggingface/transformers/pull/35887](https://github.com/huggingface/transformers/pull/35887)
| 2025-04-26T07:33:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k87gc4/how_are_people_converting_gemma_3_loras_models_to/
|
Different_Fix_2217
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k87gc4
| false | null |
t3_1k87gc4
|
/r/LocalLLaMA/comments/1k87gc4/how_are_people_converting_gemma_3_loras_models_to/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'PCV-AvkYlHnXhI8F3APbPRVQ1m0sapMz6WS-S6qBfCs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=108&crop=smart&auto=webp&s=6bf9fc67604d569d26c0a4d5476344bdc4d67f38', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=216&crop=smart&auto=webp&s=43c6daa8f5b115fc162edba57b88cbf25e99c8f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=320&crop=smart&auto=webp&s=373ebcd28ffab6e17b6232b5a89ecb9ef04bc148', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=640&crop=smart&auto=webp&s=4ce6c30eff39247399f9e6a0fe06a4550bc41b22', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=960&crop=smart&auto=webp&s=ac7670c5aaadc1b4e0c40a6be5b36f93d812ffbf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?width=1080&crop=smart&auto=webp&s=0267811cca5ca96ef654d5e593bae95f87478435', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Pw729JFRMRFJ7ZMXxqMgrKxIPRI3Po4AmPUgq4OBJNI.jpg?auto=webp&s=40cfdce98582d22b1d8e565d88794c517d8ee6fb', 'width': 1200}, 'variants': {}}]}
|
System Prompt vs. User Prompt
| 15 |
Hi. What difference does it make if I split my instructions into a system and a user prompt, compared to just writing everything in the user prompt and keeping the system prompt empty or the generic "You are a helpful assistant"?
Assume the instruction is composed of an almost constant part (e.g. "here is the data") and a more variable part (the question about the data). Is there any tangible difference in correctness, consistency, etc.?
And given that the OpenAI API allows multiple user messages in the same request (does it?), is there any benefit to separating a message into multiple user messages?
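Mechanically, the setups differ only in how the request's `messages` array is arranged; the chat API does accept several user messages in one request, so the constant and variable parts can be split either way (a sketch; the data and question strings are placeholders):

```python
data = "Here is the data: ..."
question = "What is the trend in column B?"

# Variant 1: instructions split across system and user roles.
split_messages = [
    {"role": "system", "content": data},
    {"role": "user", "content": question},
]

# Variant 2: everything in one user message, generic system prompt.
merged_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": data + "\n\n" + question},
]

# Variant 3: several user messages in a single request (also valid).
multi_user_messages = [
    {"role": "user", "content": data},
    {"role": "user", "content": question},
]
```

Many chat models are trained to give system-role content higher priority than user content, which is the main practical difference; the serialized token count (and thus cost) is essentially the same across the variants.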
It's not an interactive scenario, so jailbreaking is not an issue. And for paid models, the tokens are anyway counted for the whole payload at the same rate, right?
Thanks
| 2025-04-26T08:53:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k88k0h/system_prompt_vs_user_prompt/
|
ihatebeinganonymous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k88k0h
| false | null |
t3_1k88k0h
|
/r/LocalLLaMA/comments/1k88k0h/system_prompt_vs_user_prompt/
| false | false |
self
| 15 | null |
Newelle 0.9.5 Released: Internet Access, Improved Document Reading
| 75 |
Newelle 0.9.5 Released! Newelle is an advanced AI assistant for Linux supporting any LLM (Local or Online), voice commands, extensions and much more!
- Implemented Web Search with SearXNG, DuckDuckGo, and Tavily
- Website Reading: ask questions about websites (write #url to embed it)
- Improved inline LaTeX support
- New empty chat placeholder
- Improved Document reading: semantic search will only be done if the document is too long
- New thinking widget
- Add vision support for llama4 on Groq and possibility to choose provider on OpenRouter
- New translations (Traditional Chinese, Bengali, Hindi)
- Various bug fixes
Source Code: [https://github.com/qwersyk/Newelle/](https://github.com/qwersyk/Newelle/)
Flathub: [https://flathub.org/apps/io.github.qwersyk.Newelle](https://flathub.org/apps/io.github.qwersyk.Newelle)
| 2025-04-26T09:15:18 |
iTzSilver_YT
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k88v0p
| false | null |
t3_1k88v0p
|
/r/LocalLLaMA/comments/1k88v0p/newelle_095_released_internet_access_improved/
| false | false | 75 |
{'enabled': True, 'images': [{'id': 'aMqEz5bq5HDBQk_VTC8j6MxHkB9ksVnYgVyJsPU29PU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=108&crop=smart&format=png8&s=8111321b84f6371beb08ade45faa7c9c0325ecad', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=216&crop=smart&format=png8&s=68680b2a3cf4aa2c9e287d84962741e86ad98a9c', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=320&crop=smart&format=png8&s=760af6184cff48bc44c7f579db36cc4f0ba43974', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=640&crop=smart&format=png8&s=5175e00e7b720de3698b7cf524f32b707626e9f2', 'width': 640}], 'source': {'height': 445, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?format=png8&s=b92159506d620fda4c54641067a9ad87d6e7fa4b', 'width': 800}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=108&crop=smart&s=4c9b089768b84748980290b0edf631a0ae097200', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=216&crop=smart&s=dd83e36e3ef16f0a89b1476efa8b6f5505f05c00', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=320&crop=smart&s=42c3f148c7c9b0af1fda11175e0ebe65c575eae4', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=640&crop=smart&s=c94def82cef2aab0509b503092cef40a3a4c19f3', 'width': 640}], 'source': {'height': 445, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?s=2762f4280c950e79893b415880b2d816853584fa', 'width': 800}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=108&format=mp4&s=ab91b61839cf8a9c4d2e437f84e0faa626ac835f', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=216&format=mp4&s=d1bcd5300f6cc6a60907e249dbad6f58c63ab282', 'width': 216}, {'height': 178, 'url': 
'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=320&format=mp4&s=4e348e092e52e21e0d28b799d3946207ca655c92', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?width=640&format=mp4&s=90853abf24f18b81c10605ca18bea85b3816c28a', 'width': 640}], 'source': {'height': 445, 'url': 'https://preview.redd.it/6n7tbbk5c5xe1.gif?format=mp4&s=de9a62bf7d342ccb5a37395420b50468890a6305', 'width': 800}}}}]}
|
||
Deepseek's Context Window: 128k or 163k? Conflicting Info from OpenRouter vs. Official website
| 1 |
[deleted]
| 2025-04-26T09:28:42 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k891ji
| false | null |
t3_1k891ji
|
/r/LocalLLaMA/comments/1k891ji/deepseeks_context_window_128k_or_163k_conflicting/
| false | false |
default
| 1 | null |
||
Handling Mid-Sentence Pauses in Voice Conversations?
| 12 |
I don't think this is an LLM/ML problem; it feels more like an algorithmic issue. Current systems don't handle natural pauses well. If you pause mid-sentence to think, the model often responds prematurely based only on what's been said so far, which disrupts the conversation's flow. Has anyone found or implemented a solution for this?
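The usual workaround is endpointing with a debounce: instead of replying the moment the voice-activity detector (VAD) reports silence, start a timer and only finalize the turn once silence has lasted longer than a threshold (smarter systems also raise the threshold when the partial transcript looks unfinished). A minimal sketch of just the timer logic (the threshold value is made up):

```python
class Endpointer:
    """Finalize a turn only after `silence_s` seconds of continuous silence."""

    def __init__(self, silence_s: float = 1.2):
        self.silence_s = silence_s
        self.last_speech_t = None  # timestamp of the most recent speech frame

    def update(self, t: float, is_speech: bool) -> bool:
        """Feed VAD frames (timestamp, speech?); returns True when the turn ends."""
        if is_speech:
            self.last_speech_t = t
            return False
        if self.last_speech_t is None:  # nothing said yet
            return False
        return (t - self.last_speech_t) >= self.silence_s

ep = Endpointer(silence_s=1.2)
frames = [(0.0, True), (0.5, True), (1.0, False), (1.6, False), (2.0, False)]
ends = [ep.update(t, s) for t, s in frames]  # turn ends only at t=2.0
```

With this, a mid-sentence pause shorter than the threshold never triggers a reply; the trade-off is added latency at the real end of the turn, which is why adaptive thresholds (e.g. based on whether the ASR partial ends in a filler word or an incomplete clause) are popular.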
| 2025-04-26T09:58:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k89gaa/handling_midsentence_pauses_in_voice_conversations/
|
_lindt_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k89gaa
| false | null |
t3_1k89gaa
|
/r/LocalLLaMA/comments/1k89gaa/handling_midsentence_pauses_in_voice_conversations/
| false | false |
self
| 12 | null |
Current Closed Source Moat for Images, Voice & Code
| 0 |
There's currently a 3 month moat between closed source and open source models for text generation.
I wanted everyone's opinion on the delay between a new SOTA image/voice/code model and an open source equivalent.
Specifically for images, it seems like flux.dev caught up to DALL-E 3 (and overtook it in many areas) after about 1 year. How long until something open source "catches up" to the new GPT-4o image generation?
| 2025-04-26T10:02:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k89ili/current_closed_source_moat_for_images_voice_code/
|
bdizzle146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k89ili
| false | null |
t3_1k89ili
|
/r/LocalLLaMA/comments/1k89ili/current_closed_source_moat_for_images_voice_code/
| false | false |
self
| 0 | null |
Rabbit - A dead simple web agent (open source)
| 5 |
Hi LocalLLama,
I built Rabbit SDK, an easy-to-use web-agent Software Development Kit. The SDK comes with sentiment analysis and other functions. I'm using Gemini Flash 2.0 as the default model and want to include an open-source model like Llama. I'm asking for feedback on the project.
| 2025-04-26T10:04:12 |
https://github.com/wchisasa/rabbit
|
codemaven_
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k89jbi
| false | null |
t3_1k89jbi
|
/r/LocalLLaMA/comments/1k89jbi/rabbit_a_dead_simple_web_agent_open_source/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'lr1NU7VHRS-fJ8xmU5JoG8-_2VLpfJDRg7LpTXe0FrE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=108&crop=smart&auto=webp&s=4bc87e667c40f912ca300624775ed4edd4055850', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=216&crop=smart&auto=webp&s=2dc69a94040e198b7fee5ccd9006b18d6dca6d30', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=320&crop=smart&auto=webp&s=8452e3a886916b2ee3f061690f96404a8cdb5832', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=640&crop=smart&auto=webp&s=fdfcd6c03aaaf031622a3a07dde126ba9b41dca6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=960&crop=smart&auto=webp&s=b22755ddc3ee743700625900d04b5d2a38799bdd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?width=1080&crop=smart&auto=webp&s=f338e5d5efe970e70289dda459d2079376901313', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pQNpKEFhJoGsZW-OxzrWmarNLSOg2ySt3ftHCut851E.jpg?auto=webp&s=04f6b173c6ccb290fe786880d644f7e3ea027c7b', 'width': 1200}, 'variants': {}}]}
|
|
Lmarena hard auto benchmark v2 results.
| 18 |
https://github.com/lmarena/arena-hard-auto

```
    Model                                      Scores (%)   CI (%)
0   o3-2025-04-16                              86.1         (-1.1 / +1.1)
1   gemini-2.5                                 79.3         (-1.5 / +1.9)
2   o4-mini-2025-04-16-high                    79.2         (-1.2 / +1.5)
3   o4-mini-2025-04-16                         74.8         (-1.4 / +1.4)
4   gemini-2.5-flash                           69.0         (-1.3 / +1.9)
5   o3-mini-2025-01-31-high                    66.5         (-1.9 / +1.4)
6   claude-3-7-sonnet-20250219-thinking-16k    61.1         (-2.1 / +1.5)
7   o1-2024-12-17-high                         61.0         (-1.6 / +1.8)
8   deepseek-r1                                57.9         (-2.4 / +2.3)
9   o1-2024-12-17                              56.0         (-1.7 / +2.0)
10  gpt-4.5-preview                            50.7         (-1.8 / +1.7)
11  gpt-4.1                                    50.7         (-2.3 / +1.9)
12  o3-mini-2025-01-31                         50.0         (-0.0 / +0.0)
13  gpt-4.1-mini                               47.2         (-1.9 / +2.6)
14  QwQ-32B                                    43.7         (-2.4 / +2.1)
15  claude-3-5-sonnet-20241022                 33.6         (-1.9 / +1.7)
16  s1.1-32B                                   22.2         (-1.6 / +1.6)
17  llama4-maverick-instruct-basic             17.5         (-1.4 / +1.6)
18  Athene-V2-Chat                             16.5         (-1.0 / +1.5)
19  gemma-3-27b-it                             14.8         (-1.3 / +0.9)
20  gpt-4.1-nano                               14.1         (-1.3 / +1.0)
21  Llama-3.1-Nemotron-70B-Instruct-HF         10.1         (-0.9 / +0.8)
22  Qwen2.5-72B-Instruct                       10.1         (-0.8 / +1.3)
23  OpenThinker2-32B                            3.1         (-0.2 / +0.4)
```

Interesting tidbits that also apply to the lmarena benchmark. Emphasis is mine. For example, the point that simple prompts - which could be common in LMarena (check the lmarena explorer) - make two models look similar even though the models could be vastly different.

Of course LLM judges may be biased as well (there are some papers on this), but I think they are trying to limit the bias as much as they can.

> V2.0 contains 500 fresh, challenging real-world user queries (open-ended software engineering problems, math questions, etc) and 250 creative writing queries sourced from Chatbot Arena. We employs automatic judges, GPT-4.1 and Gemini-2.5, as a cheaper and faster approximator to human preference.

> Following the newly introduced Style Control on Chatbot Arena, we release Style Control on Arena Hard Auto! We employ the same Style Control methods as proposed in the blogpost. Please refer to the blogpost for methodology and technical background. (https://lmsys.org/blog/2024-08-28-style-control/)

> We outline two key properties that the benchmark aiming to approximate human preference should possess to provide meaningful comparisons between models:
>
> - Separability: the benchmark should separate models with high confidence.
> - Alignment with Human Preference: the benchmark should agree with human preference.

> While previous works have focused on alignment, separability is also a crucial consideration when comparing models of similar quality (e.g., different checkpoints from the same training run). However, achieving high-confidence separability is challenging due to limitations in prompt design and inherent variances in LLM evaluations. **Overly simplistic prompts fail to distinguish between models**, while the randomness in human and LLM judgments leads to inconsistent predictions. As a result, it is often difficult to confidently determine if a model's apparent performance reflects a genuine difference in capability or merely noisy observations, highlighting a need for methods to verify whether a benchmark can reliably separate similar models.

> Statistical measures like Pearson (Pearson, 1895) and Spearman Correlations (Spearman, 1961), commonly used in benchmarks such as AlpacaEval (Li et al., 2023) to measure correlation to human preference ranking, may fail to adequately address model separability and ranking instability. In addition, these measures only provide a coarse signal of ranking correlation without quantifying the magnitude of performance differences between model pairs. To address these shortcomings, we develop three novel metrics: Separability with Confidence, Agreement with Confidence, and Pair Rank Brier Score.
| 2025-04-26T10:21:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k89s1u/lmarena_hard_auto_benchmark_v2_results/
|
pier4r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k89s1u
| false | null |
t3_1k89s1u
|
/r/LocalLLaMA/comments/1k89s1u/lmarena_hard_auto_benchmark_v2_results/
| false | false |
self
| 18 |
{'enabled': False, 'images': [{'id': 'iyJ0QLARuYdb-wWX7M6yZYhHKlH2NkJ0NvmbZbcxGTU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=108&crop=smart&auto=webp&s=0d7ce4d055425e338be473bcb021c78c5085e7f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=216&crop=smart&auto=webp&s=2c94a228298821565ffdfe9bc7b520297a30dcfb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=320&crop=smart&auto=webp&s=7dbea69929d625c2b361e2f1918e758f03be4484', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=640&crop=smart&auto=webp&s=16bd2866577810d85cf80f803bdfd8c9e9d5bf43', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=960&crop=smart&auto=webp&s=2b137d2876d46930f84b07edabcc766c2de7568e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?width=1080&crop=smart&auto=webp&s=f11672e3dc122f9aae97626498f64923a52f20fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wpK4fJphryNJdDQ-Yux7XUFVw0MN0ernWNxnzHBpTF4.jpg?auto=webp&s=03cc207e853feec0240a859480f2f8b4904596ba', 'width': 1200}, 'variants': {}}]}
|
meta hiring team is the most useless , retard team right now in the tech world .
| 1 |
[removed]
| 2025-04-26T11:06:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8afip/meta_hiring_team_is_the_most_useless_retard_team/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8afip
| false | null |
t3_1k8afip
|
/r/LocalLLaMA/comments/1k8afip/meta_hiring_team_is_the_most_useless_retard_team/
| false | false | 1 | null |
|
LangoTango - LangoTango - A local language model powered language learning partner
| 1 |
Hi all,
Put this together over the week. It's a fork of another app I made called Dillon, but in this case I optimised it for language learning. It can be forked for all sorts of different hobbies. You could make a fork for personal recipe books or exercise diaries for example.
Here's the repo:
[https://github.com/shokuninstudio/LangoTango](https://github.com/shokuninstudio/LangoTango)
macOS and Windows binaries are ready to download.
If you want to build it for Linux it's easy with pyinstaller and should work. I have not been able to test on Linux as I only have VMs at the moment. I need some drivers (not available) to run Linux native on my laptop.
| 2025-04-26T11:15:42 |
https://www.reddit.com/gallery/1k8al2x
|
shokuninstudio
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8al2x
| false | null |
t3_1k8al2x
|
/r/LocalLLaMA/comments/1k8al2x/langotango_langotango_a_local_language_model/
| false | false | 1 | null |
|
LangoTango - A local language model powered language learning partner
| 81 |
Hi all,
Put this together over the week. It's a fork of another app I made called Dillon, but in this case I optimised it for language learning. It can be forked for all sorts of different hobbies. You could make a fork for personal recipe books or exercise diaries for example.
Here's the repo:
[https://github.com/shokuninstudio/LangoTango](https://github.com/shokuninstudio/LangoTango)
macOS and Windows binaries are ready to download.
If you want to build it for Linux it's easy with pyinstaller and should work. I have not been able to test on Linux as I only have VMs at the moment. I need some drivers (not available) to run Linux native on my laptop.
| 2025-04-26T11:23:55 |
https://www.reddit.com/gallery/1k8aput
|
shokuninstudio
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8aput
| false | null |
t3_1k8aput
|
/r/LocalLLaMA/comments/1k8aput/langotango_a_local_language_model_powered/
| false | false | 81 | null |
|
Which is the best RAG opensource project along with LLM for long context use case?
| 1 |
[removed]
| 2025-04-26T11:55:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8b872/which_is_the_best_rag_opensource_project_along/
|
Ni_Guh_69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8b872
| false | null |
t3_1k8b872
|
/r/LocalLLaMA/comments/1k8b872/which_is_the_best_rag_opensource_project_along/
| false | false |
self
| 1 | null |
Llama 3.3 70B Q40: eval 7.2 tok/s, pred 3.3 tok/s on 4 x NVIDIA RTX 3060 12 GB (GPU cost: $1516)
| 43 | 2025-04-26T12:20:59 |
https://github.com/b4rtaz/distributed-llama/discussions/205
|
b4rtaz
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8bodt
| false | null |
t3_1k8bodt
|
/r/LocalLLaMA/comments/1k8bodt/llama_33_70b_q40_eval_72_toks_pred_33_toks_on_4_x/
| false | false | 43 |
{'enabled': False, 'images': [{'id': 'MhVQ7b7RnDbvRfcz8NJM2jmtfuYO3y7FzPw9qQlnLzc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=108&crop=smart&auto=webp&s=0d2e8e045a90b1602626405cd9af7e2f5eaa1d0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=216&crop=smart&auto=webp&s=2c73ad52461bdaf7bad6beeb9aa4fdb6fbbc6cf4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=320&crop=smart&auto=webp&s=b6cfc9f2706e4b8a9f5d284076486dc81102309a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=640&crop=smart&auto=webp&s=6c09552066bc18c0bbffa03e596104499ea5c504', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=960&crop=smart&auto=webp&s=6bbda9ac96beea4c9267306413ff673bcb2482a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?width=1080&crop=smart&auto=webp&s=660ee45c016758e3e16172690d6502c1538486a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B1TF2IQo6iquhmu5K16ixy7w2XYeK22BYOFGGiutRMc.jpg?auto=webp&s=3485f3902608a3ae9f84c53ef1f92eb12ba9cefa', 'width': 1200}, 'variants': {}}]}
|
||
Anyone experiences using 16 gb vram + 8 gb vram with 27b gemma3-QAT
| 1 |
[removed]
| 2025-04-26T13:04:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8chlw/anyone_experiences_using_16_gb_vram_8_gb_vram/
|
PoemSignificant8436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8chlw
| false | null |
t3_1k8chlw
|
/r/LocalLLaMA/comments/1k8chlw/anyone_experiences_using_16_gb_vram_8_gb_vram/
| false | false |
self
| 1 | null |
Any turnkey dockers for audio translation with voice cloning?
| 4 |
Let's say I have an audio file with a speaker in a source language (say Greek). I'd like to convert this into English and preferably using a clone of the original speaker's voice. Is there any turnkey app/docker that can do this?
| 2025-04-26T13:05:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8chsx/any_turnkey_dockers_for_audio_translation_with/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8chsx
| false | null |
t3_1k8chsx
|
/r/LocalLLaMA/comments/1k8chsx/any_turnkey_dockers_for_audio_translation_with/
| false | false |
self
| 4 | null |
A simple CLI tool for managing and running llama-server
| 7 |
Hi, I made this tool to manage and run my local models and their parameters, mostly for my own use, but I share it in case it is useful for someone else. I wish I had a tool like this when I started with local models, so I hope it is helpful!
The tool is meant to be very simple to use.
1. Install the pip packages
2. Simply place the llama-server-cli.py file next to your llama-server executable.
3. Run it.
4. Use the interface to point it at the gguf file and start the server, this will use the default parameters.
It will run the server in the background and any changes made to the settings while the server is running will restart the server automatically with the new settings.
You can find it here: https://github.com/R-Dson/llama-server-cli.py
| 2025-04-26T13:19:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8csdj/a_simple_cli_tool_for_managing_and_running/
|
robiinn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8csdj
| false | null |
t3_1k8csdj
|
/r/LocalLLaMA/comments/1k8csdj/a_simple_cli_tool_for_managing_and_running/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'VuXQAEOevlpYS_0P2MjfqcuLovfYRePITUNs1kHlUKY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=108&crop=smart&auto=webp&s=fe9d59e93d6b9df828b71c8f43d3de4b1f93ee9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=216&crop=smart&auto=webp&s=59a0323f605e0c102854f2720e9547bc39b2a69c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=320&crop=smart&auto=webp&s=6ffc4e1e0975a8e8f53cf85facfec6b5456290e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=640&crop=smart&auto=webp&s=fc0403ba12aba22550320ea431f3d8177e087fb1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=960&crop=smart&auto=webp&s=4b59ca1f0cc58d65b78be9292264925e02066ec8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?width=1080&crop=smart&auto=webp&s=69e9b545e129ce99a3c20969eaaffdf6f038643d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UA-i0YPJX83wk9q-GzMLu9ACXjHLuEnP97CPY9A3aCg.jpg?auto=webp&s=cc0bde55859a656b3a9c69e38f28338cc396e7a5', 'width': 1200}, 'variants': {}}]}
|
What graphics card should I buy? Which llama/qwent (etc.) model should I choose? Please help me, I'm a bit lost...
| 1 |
[removed]
| 2025-04-26T13:39:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8d6ce/what_graphics_card_should_i_buy_which_llamaqwent/
|
ed0c
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8d6ce
| false | null |
t3_1k8d6ce
|
/r/LocalLLaMA/comments/1k8d6ce/what_graphics_card_should_i_buy_which_llamaqwent/
| false | false |
self
| 1 | null |
5090 prices in Switzerland normalizing, looking good for local AI?
| 37 |
Have been checking 5090 prices in Switzerland. Found offers as low as CHF 1950.-, although it sold out very quickly and is no longer up for order (the offer is still online). The next one that's actually available, although with a 28-day lead time, is at CHF 2291.-
Do you guys see this as a response to the harsh competition by AMD? Do you see similar trends in your country?
2291.- offer was found on nalda.ch
1950.- offer (they used the 5080 package in the image, but the stats mention the 5090) was found on conrad.ch
| 2025-04-26T13:41:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8d8a3/5090_prices_in_switzerland_normalizing_looking/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8d8a3
| false | null |
t3_1k8d8a3
|
/r/LocalLLaMA/comments/1k8d8a3/5090_prices_in_switzerland_normalizing_looking/
| false | false |
self
| 37 | null |
DeepSeek R2 rumored launch imminent!
| 72 | 2025-04-26T13:53:07 |
Charuru
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8dgos
| false | null |
t3_1k8dgos
|
/r/LocalLLaMA/comments/1k8dgos/deepseek_r2_rumored_launch_imminent/
| false | false | 72 |
{'enabled': True, 'images': [{'id': 'iNa6hyKQmHysMTb1ICU8E00Djd6sk9OY85xHyJ6USug', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=108&crop=smart&auto=webp&s=776f8caa4bfa901e821ea9a5af02000155edc9bf', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=216&crop=smart&auto=webp&s=9212a0ec2ab7492c0365176780479c03b9eee704', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=320&crop=smart&auto=webp&s=36780ea8b624cf76f18ac03c14b6283f9735b148', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=640&crop=smart&auto=webp&s=860f1563a783d548557d7f6898f0843035efffdf', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=960&crop=smart&auto=webp&s=6591440ba4a6df4190310b4b0389491188648ee9', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?width=1080&crop=smart&auto=webp&s=63e619dd4f3fd87fc918469085436f1aef4656ba', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/j6fb5msop6xe1.jpeg?auto=webp&s=d5411e81d49bbcf046f6774948a96eede0d1cd78', 'width': 1080}, 'variants': {}}]}
|
|||
Llama.cpp without huggingface
| 0 |
I recently posted about shifting my Llama2 model from huggingface (where it was called via a dedicated inference endpoint) to our local server, and some suggested that I should just opt for llama.cpp. Initially I still pursued my original idea, albeit shifting to Llama-3.2-1b-Instruct due to VRAM limitations (8GB).
It works as it should but it is fairly slow, so I have been revisiting llama.cpp and its promise to run models much more efficiently, and found (amongst others) this intriguing [post](https://blog.steelph0enix.dev/posts/llama-cpp-guide/). However, the explanations seem to exclusively assume the underlying model is installed via huggingface, which makes me wonder to what extent it is possible to use llama.cpp with:
(i) the original parameter files downloaded directly from Meta
(ii) any custom model that doesn't come from any of the big LLM companies.
| 2025-04-26T13:53:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8dh1l/llamacpp_without_huggingface/
|
RDA92
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8dh1l
| false | null |
t3_1k8dh1l
|
/r/LocalLLaMA/comments/1k8dh1l/llamacpp_without_huggingface/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'XZSWdkwcWZ5X_Vd1OYq5ZVrfs--2qDkAa37FQuWsrMk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=108&crop=smart&auto=webp&s=aadb4b6cd053db4a1c45926ead166d1f157ddea6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=216&crop=smart&auto=webp&s=24b5c396fde815646492c083b714eae6c59a1cb0', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=320&crop=smart&auto=webp&s=a9c2c59bfd59d5a49f3818cb7b0ed9ec358d6276', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=640&crop=smart&auto=webp&s=316480920221b9fb8928b676452d143067eefc83', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=960&crop=smart&auto=webp&s=2fd3e606dd223730143c5ec9c7411f3f2063c910', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?width=1080&crop=smart&auto=webp&s=88dc734b160100815e095f1ee8b5356d42cc201f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nk-ZrwD8kNSNtnaWrApX2j2M3MkAHx262k3kS0MSMGk.jpg?auto=webp&s=c56adb140f25da1a1c8bdc480d6b6f2650c709cf', 'width': 1200}, 'variants': {}}]}
|
It's really cool now to have an idea, and few hours later you have a working app
| 70 |
I rarely do web development, and without the help of LLMs it would have taken me days to build the frontend and these animations. But after one morning, I already have a cool result.
The idea and the app themselves aren't very original or complex, but here's the source code in case anyone is interested: https://github.com/YofarDev/chapitre
| 2025-04-26T14:05:42 |
https://v.redd.it/vgz7967aq6xe1
|
Nyao
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8dqa7
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vgz7967aq6xe1/DASHPlaylist.mpd?a=1748268357%2COGVmNzllMWNiMzYzMmI0NDliNjQzNzEzZjg1M2EzM2MyNDc3YmNhZGRkYmMyM2UyM2U5NjI4MmE0NWY1NTFjZQ%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/vgz7967aq6xe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vgz7967aq6xe1/HLSPlaylist.m3u8?a=1748268357%2CNzY2OTRlYTk5ODVlMjZiNGFkYTQ1NmE4Y2JkZDBjODA5N2I5M2QwOTc5MWVlODgxNDQwMzU1ODBhYzIyNGJhZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vgz7967aq6xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k8dqa7
|
/r/LocalLLaMA/comments/1k8dqa7/its_really_cool_now_to_have_an_idea_and_few_hours/
| false | false | 70 |
{'enabled': False, 'images': [{'id': 'MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=108&crop=smart&format=pjpg&auto=webp&s=12dd3acf9a41122bdc8c3878ad18ce6022c3d6e5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=216&crop=smart&format=pjpg&auto=webp&s=c8fd57ce61d55e71db58eb616a594e93f2b6330f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=320&crop=smart&format=pjpg&auto=webp&s=94006a0029641bbe29c744cc1526462ecf29b9af', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=640&crop=smart&format=pjpg&auto=webp&s=a95aee48ec78c92a29cfaaadd348ea6425844fe4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=960&crop=smart&format=pjpg&auto=webp&s=b10a156691ccc981faac3df31b0a68747d4100d2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8b45590e4becbce82f07ad02bfa3926037341b97', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MzQ1cGE3N2FxNnhlMe_3z2US61k9w-e99dI3sh4KfvfwKeGQ6lAD-f6G97nN.png?format=pjpg&auto=webp&s=66327b93eeb92067cd7d5f51a931e66040c49041', 'width': 1920}, 'variants': {}}]}
|
|
NN Building Tech Questions
| 1 |
Hello community! I'm trying to have some fun in PyTorch with LLMs and other models. I have a few questions:
1. How do I create a custom projector for any LLM (e.g., Gemma 3 12B)? For example, I have an AI that can produce data as a 768x512-dimensional vector. How can I feed that into the LLM for inference (and train it beforehand)?
2. I want to create music completion (like T9 on a phone keyboard, but for music). I have both MIDI and MusicXML files. Do you have any suggestions on how I can turn them into defined tokens (e.g., 16th-C2) combining both bass and treble clefs, so I don't need audio?
3. How do I create a pseudo-distilled NN model with not much data? Let's take audio as an example. I have another NN that takes my audio input, applies some magical transformation (anything: noise cleaning or even voice swap), and then returns complete audio, same 48 kHz mono, same duration, just changed. How can I make a NN in PyTorch that takes just an hour of data pairs and replicates the results? Yes, I know how to build it in PyTorch; I'm just asking whether there is some specific function or whatever for such a task!
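To make (2) concrete, here is roughly the kind of tokenization I have in mind, as a sketch. The token scheme and names are my own invention, not an established format:

```python
# Sketch: turn (MIDI pitch, duration) note events into text tokens
# like "16th-C2" that a language model could train on.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Duration in quarter notes -> duration label (invented naming).
DURATIONS = {4.0: "whole", 2.0: "half", 1.0: "quarter", 0.5: "8th", 0.25: "16th"}

def pitch_name(midi_pitch: int) -> str:
    """MIDI pitch number -> scientific pitch name (60 -> 'C4')."""
    octave = midi_pitch // 12 - 1
    return f"{NOTE_NAMES[midi_pitch % 12]}{octave}"

def note_token(midi_pitch: int, quarter_len: float) -> str:
    """Combine duration label and pitch into one vocabulary token."""
    return f"{DURATIONS[quarter_len]}-{pitch_name(midi_pitch)}"

# A short phrase becomes a token sequence:
phrase = [(36, 0.25), (60, 1.0)]  # (pitch, duration in quarter notes)
tokens = [note_token(p, d) for p, d in phrase]
print(tokens)  # ['16th-C2', 'quarter-C4']
```

Bass and treble clef notes both reduce to the same pitch-number space, so one vocabulary covers both staves.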
Thanks!
| 2025-04-26T14:40:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8ehim/nn_building_tech_questions/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8ehim
| false | null |
t3_1k8ehim
|
/r/LocalLLaMA/comments/1k8ehim/nn_building_tech_questions/
| false | false |
self
| 1 | null |
A one line command to crawl a website (or llms.txt) and get a markdown ready giving it to an LLM.
| 1 |
Hi! I haven't found a super simple tool to get a markdown file generated from crawling a website (like some docs) or an llms.txt. Some tools around require you to write some code every time, or to pay for an API, like firecrawl.
I built a simple solution with crawl4ai and I put it on pypi, so that, if you use `uv`, you don't have to manually install anything, you can just run:
```bash
uv run \
--with url2llm \
url2llm \
--depth 1 \
--url "https://modelcontextprotocol.io/llms.txt" \
--instruction "I need documents related to developing MCP (model context protocol) servers" \
--provider "gemini/gemini-2.5-flash-preview-04-17" \
--api_key ${GEMINI_API_KEY}
```
and then find `./model-context-protocol-documentation.md`ready to be dropped into ChatGPT/Claude project!
Hope it helps!
| 2025-04-26T15:05:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8f17x/a_one_line_command_to_crawl_a_website_or_llmstxt/
|
Voskot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8f17x
| false | null |
t3_1k8f17x
|
/r/LocalLLaMA/comments/1k8f17x/a_one_line_command_to_crawl_a_website_or_llmstxt/
| false | false |
self
| 1 | null |
Dia-1.6B in Jax to generate audio from text from any machine
| 80 |
I created a JAX port of Dia, the 1.6B parameter text-to-speech model to generate voice from any machine, and would love to get any feedback. Thanks!
| 2025-04-26T15:07:47 |
https://github.com/jaco-bro/diajax
|
Due-Yoghurt2093
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8f38v
| false | null |
t3_1k8f38v
|
/r/LocalLLaMA/comments/1k8f38v/dia16b_in_jax_to_generate_audio_from_text_from/
| false | false | 80 |
{'enabled': False, 'images': [{'id': 'TNWH_V9zY6x78Nhz_X4DJfjPHBjuvxJtQU2mB7hK-8Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=108&crop=smart&auto=webp&s=74e7a0bbbd8761d04f70ecd829b7c5db48d4586c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=216&crop=smart&auto=webp&s=2fc8c82c454427dc00190daa0a532c28a435ed8f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=320&crop=smart&auto=webp&s=f015c9cfffb289b741e712a66e917fc8458a3b76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=640&crop=smart&auto=webp&s=868d202201bf917cbb597d31bb847a1ff2f6b446', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=960&crop=smart&auto=webp&s=47602bb63b8e56fc64d0f392c52eae0c30787009', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?width=1080&crop=smart&auto=webp&s=7a9ea829b4082c22be0c17f56bf440bbd98beccf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OUXDxyqECR3Zwafh-pl-0Sio26N3Ne4UW5G-7VkXL1o.jpg?auto=webp&s=a7b4b20e3c1776133d2557d02e70e643966ba0a5', 'width': 1200}, 'variants': {}}]}
|
|
Hot Take: Gemini 2.5 Pro Makes Too Many Assumptions About Your Code
| 196 |
Gemini 2.5 Pro is probably the smartest model that is publicly available at the moment. But it makes TOO fucking many assumptions about your code that often outright break functionality. Not only that, but it's overly verbose and boilerplate-y. Google really needs to tone it down.
I'll give an example: I had a function which extracts a score from a given string. The correct format is 1-10/10. Gemini randomly decides that this is a bug and modifies the regex to also accept 0/10.
The query was to use the result from the function to calculate the MSE. Nowhere did I specify it to modify the get_score function. Sonnet/DeepSeek do not have that issue by the way.
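For illustration (I don't have the original function on hand, but the bug class is easy to reproduce; the patterns below are my own reconstruction):

```python
import re

# Intended behavior: accept scores 1-10 in the "N/10" format.
strict = re.compile(r"\b(10|[1-9])/10\b")

# The unasked-for "fix": also accepting 0/10 silently changes semantics.
loosened = re.compile(r"\b(10|[0-9])/10\b")

assert strict.search("score: 7/10")
assert strict.search("score: 10/10")
assert strict.search("score: 0/10") is None      # correctly rejected
assert loosened.search("score: 0/10") is not None  # Gemini's version
```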
Thanks for coming to my TED talk. I just needed to vent.
| 2025-04-26T15:21:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8fe14/hot_take_gemini_25_pro_makes_too_many_assumptions/
|
HideLord
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8fe14
| false | null |
t3_1k8fe14
|
/r/LocalLLaMA/comments/1k8fe14/hot_take_gemini_25_pro_makes_too_many_assumptions/
| false | false |
self
| 196 | null |
Gwen AI. What's the catch?
| 1 |
[removed]
| 2025-04-26T15:57:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8g7jz/gwen_ai_whats_the_catch/
|
Sea_Fox_2709
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8g7jz
| false | null |
t3_1k8g7jz
|
/r/LocalLLaMA/comments/1k8g7jz/gwen_ai_whats_the_catch/
| false | false |
self
| 1 | null |
Following a 3-year AI breakthrough cycle
| 1 |
[removed]
| 2025-04-26T16:01:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8gake/following_a_3year_ai_breakthrough_cycle/
|
Key-Preference-5142
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8gake
| false | null |
t3_1k8gake
|
/r/LocalLLaMA/comments/1k8gake/following_a_3year_ai_breakthrough_cycle/
| false | false |
self
| 1 | null |
Did I miss any LLM in this list
| 1 | 2025-04-26T16:01:53 |
internal-pagal
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8gb40
| false | null |
t3_1k8gb40
|
/r/LocalLLaMA/comments/1k8gb40/did_i_miss_any_llm_in_this_list/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'i9RhwqfJhU0AUPiYxh-bLRZOUnnWSXlGqvLN98fyxpg', 'resolutions': [{'height': 146, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=108&crop=smart&auto=webp&s=013714281586a14b9d64d3466111dcaec18e2174', 'width': 108}, {'height': 292, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=216&crop=smart&auto=webp&s=2b1d8609ffbc5f7f024e2c20240d65e00c315315', 'width': 216}, {'height': 433, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=320&crop=smart&auto=webp&s=3ba590f38582bb2abd9289b66c8ce93df549cfc4', 'width': 320}, {'height': 866, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=640&crop=smart&auto=webp&s=59c2d154703435cb775140716e4981205cb3a11a', 'width': 640}, {'height': 1299, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=960&crop=smart&auto=webp&s=71601fd7717de1f9e05bf6d91f01a135b5f2294b', 'width': 960}, {'height': 1462, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?width=1080&crop=smart&auto=webp&s=07588930cd5ffb1510f6df2012266468b64f9474', 'width': 1080}], 'source': {'height': 1462, 'url': 'https://preview.redd.it/1dgwkbiqc7xe1.png?auto=webp&s=f5c9a46fe87c1ff42d69d4fc84fc680239521e59', 'width': 1080}, 'variants': {}}]}
|
|||
Struggling with DIA-TTS Fine-Tuning for New Languages,please help
| 1 |
[removed]
| 2025-04-26T16:05:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8ge9t/struggling_with_diatts_finetuning_for_new/
|
Cnrgames
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8ge9t
| false | null |
t3_1k8ge9t
|
/r/LocalLLaMA/comments/1k8ge9t/struggling_with_diatts_finetuning_for_new/
| false | false |
self
| 1 | null |
Thoughts and Poking with Gemma3
| 1 |
[removed]
| 2025-04-26T16:32:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8h18h/thoughts_and_poking_with_gemma3/
|
kthepropogation
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8h18h
| false | null |
t3_1k8h18h
|
/r/LocalLLaMA/comments/1k8h18h/thoughts_and_poking_with_gemma3/
| false | false |
self
| 1 | null |
My AI dev prompt playbook that actually works (saves me 10+ hrs/week)
| 329 |
So I've been using AI tools to speed up my dev workflow for about 2 years now, and I've finally got a system that doesn't suck. Thought I'd share my prompt playbook since it's helped me ship way faster.
**Fix the root cause**: when debugging, AI usually tries to patch the end result instead of understanding the root cause. Use this prompt for that case:
Analyze this error: [bug details]
Don't just fix the immediate issue. Identify the underlying root cause by:
- Examining potential architectural problems
- Considering edge cases
- Suggesting a comprehensive solution that prevents similar issues
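If you reuse these a lot, stashing them as plain templates keeps them consistent across projects. A trivial sketch (function names are my own):

```python
# The prompt text mirrors the "root cause" prompt above.
ROOT_CAUSE_TEMPLATE = """Analyze this error: {bug}

Don't just fix the immediate issue. Identify the underlying root cause by:
- Examining potential architectural problems
- Considering edge cases
- Suggesting a comprehensive solution that prevents similar issues"""

def root_cause_prompt(bug: str) -> str:
    """Fill the template with the bug details before pasting into the AI."""
    return ROOT_CAUSE_TEMPLATE.format(bug=bug)

print(root_cause_prompt("TypeError: 'NoneType' object is not iterable"))
```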
**Ask for explanations:** Here's another one that's saved my ass repeatedly - the "explain what you just generated" prompt:
Can you explain what you generated in detail:
1. What is the purpose of this section?
2. How does it work step-by-step?
3. What alternatives did you consider and why did you choose this one?
Forcing myself to understand ALL code before implementation has eliminated so many headaches down the road.
**My personal favorite:** what I call the "rage prompt" (I usually have more swear words lol):
This code is DRIVING ME CRAZY. It should be doing [expected] but instead it's [actual].
PLEASE help me figure out what's wrong with it: [code]
This works way better than it should! Sometimes being direct cuts through the BS and gets you answers faster.
The main thing I've learned is that AI is like any other tool - it's all about HOW you use it.
Good prompts = good results. Bad prompts = garbage.
What prompts have y'all found useful? I'm always looking to improve my workflow.
| 2025-04-26T17:00:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8hob9/my_ai_dev_prompt_playbook_that_actually_works/
|
namanyayg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8hob9
| false | null |
t3_1k8hob9
|
/r/LocalLLaMA/comments/1k8hob9/my_ai_dev_prompt_playbook_that_actually_works/
| false | false |
self
| 329 | null |
Split MoE GGUFs for modular quants?
| 18 |
Given the optimizations happening around MoE models, such as in Ktransformers and llama.cpp with custom layer-offloading overrides, I was thinking it would be nice if there were GGUFs where the static parts of the model (the layers that are active every token, which for Llama 4 would be the dense layers and the 1 "shared" expert) are stored in a different file from the non-static parts (the routed experts). This would allow a user to mix and match to optimize for their hardware. For instance, someone with a 12 GB GPU and 96 GB RAM would be able to get a big quant of the static layers, while someone else with an 8 GB GPU but the same RAM could choose a smaller quant of the static part, but still get the benefit of the big quant for the non-static layers.
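Back-of-envelope version of the mix-and-match idea (all sizes below are made-up illustrative numbers, not measurements of any real model):

```python
def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in GB for params_b billion params."""
    return params_b * bits_per_weight / 8

# Hypothetical split of a MoE model into its two GGUF files:
static_params_b = 17.0   # dense layers + shared expert (active every token)
routed_params_b = 85.0   # routed experts (offloaded to system RAM)

# User A: 12 GB VRAM fits a big ~5.5 bpw quant of the static file...
assert gguf_size_gb(static_params_b, 5.5) < 12.0
# User B: 8 GB VRAM needs a smaller ~3.5 bpw static quant instead...
assert gguf_size_gb(static_params_b, 3.5) < 8.0
# ...but both can share the same big quant of the routed-experts file:
assert gguf_size_gb(routed_params_b, 5.5) < 96.0
```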
| 2025-04-26T17:52:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8ivb5/split_moe_ggufs_for_modular_quants/
|
Aerikh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8ivb5
| false | null |
t3_1k8ivb5
|
/r/LocalLLaMA/comments/1k8ivb5/split_moe_ggufs_for_modular_quants/
| false | false |
self
| 18 | null |
End-to-end conversation projects? Dia, Sesame, etc
| 22 |
In the past month we've had some pretty amazing voice models. After talking with the Sesame demo, I'm wondering: has anyone made an easy streaming, end-to-end conversation project yet? I want to run these, but combining things seamlessly is outside my skillset. I need my 'Her' moment.
| 2025-04-26T18:39:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8jymm/endtoend_conversation_projects_dia_sesame_etc/
|
Kep0a
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8jymm
| false | null |
t3_1k8jymm
|
/r/LocalLLaMA/comments/1k8jymm/endtoend_conversation_projects_dia_sesame_etc/
| false | false |
self
| 22 | null |
[Open Source] QA for cursor - Make sure it only gives you correct code.
| 37 |
This is an MCP server that allows Cursor (etc.) to test out the code before delivering it to you. If a test fails, it gets the exact logical error/console errors/screenshots directly, resulting in a feedback loop until it gets it right. This brings the agent as close to your requirements as possible before delivering to you, particularly improving the coding experience with smaller/open coding models.
It also runs regression tests (re-testing old features) so that new developments don't break working features, which is a very common problem with these agents. It also has a mode to discover new test flows just by crawling a website, but that is trash for now.
You can use any LLM for this but I am using free gemini-2.0-flash and it works like a charm. It works a looot faster on gemini-2.0-flash-lite but I am happy to trade off time for accuracy (demo is sped up, check github for full length demo). A testing integration is inevitable for cursor/windsurf so until then I will keep working on this. Any feedback is welcome :)
GitHub: [QA-MCP](https://github.com/Ilikepizza2/QA-MCP)
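The core feedback loop is roughly this (a simplified sketch with stand-in functions, not the actual QA-MCP code):

```python
# Simplified agent/QA feedback loop: generate code, run tests,
# feed failures back to the agent until tests pass or we give up.
# All functions are illustrative stand-ins, not the real QA-MCP API.

def run_tests(code: str) -> list[str]:
    """Return a list of failure messages (empty list = all tests pass)."""
    return [] if "fixed" in code else ["console error: foo is undefined"]

def agent_revise(code: str, failures: list[str]) -> str:
    """Stand-in for the coding agent revising its output from feedback."""
    return code + " # fixed: " + "; ".join(failures)

def qa_loop(initial_code: str, max_iters: int = 5) -> tuple[str, int]:
    code = initial_code
    for i in range(max_iters):
        failures = run_tests(code)          # includes regression tests
        if not failures:
            return code, i                  # deliver only once tests pass
        code = agent_revise(code, failures)
    return code, max_iters                  # give up, deliver best effort

final, iters = qa_loop("draft()")
print(iters, "fixed" in final)
```

The real server replaces `run_tests` with browser automation that captures console errors and screenshots, which is where most of the value comes from.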
| 2025-04-26T19:02:30 |
https://v.redd.it/1p5tt1zn68xe1
|
Cheap_Concert168no
|
/r/LocalLLaMA/comments/1k8kh94/open_source_qa_for_cursor_make_sure_it_only_gives/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8kh94
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1p5tt1zn68xe1/DASHPlaylist.mpd?a=1748415753%2CMTE1OTlmNWY5Zjg4OWE2MjQ0M2QyMDg5OTEwMDA3MmQ1Mjc3Y2Q4NDhkMjA0NDA1N2U4NjE4YWQ0ODMxM2RjNA%3D%3D&v=1&f=sd', 'duration': 70, 'fallback_url': 'https://v.redd.it/1p5tt1zn68xe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/1p5tt1zn68xe1/HLSPlaylist.m3u8?a=1748415753%2CMGU1MGVhNDRhMGM1ZGQ3Y2JiZWFhMjE3NWYxN2U4NWUyMDUyYTc5OTk5NWFjNWQ2MDM2MzcwNmJlYzdmYjQ3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1p5tt1zn68xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1664}}
|
t3_1k8kh94
|
/r/LocalLLaMA/comments/1k8kh94/open_source_qa_for_cursor_make_sure_it_only_gives/
| false | false | 37 |
{'enabled': False, 'images': [{'id': 'dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=108&crop=smart&format=pjpg&auto=webp&s=fcba26ffd48b8728bc4bcfb42c80e0713226f0ff', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=216&crop=smart&format=pjpg&auto=webp&s=43062b9dd06d2d7a4e996e4b392fff5e82a36ca4', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=320&crop=smart&format=pjpg&auto=webp&s=ece26f58f89c0ea8ea9a30934e4a7ad97e50a4ef', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=640&crop=smart&format=pjpg&auto=webp&s=6de993ef93fb8274d14fa08a339c67fa5cce8c07', 'width': 640}, {'height': 623, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=960&crop=smart&format=pjpg&auto=webp&s=a7d8485b18fb2d33b0733efa0b8edd9d5df3999d', 'width': 960}, {'height': 701, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ed1418cf3caafa86199d01743da2ab71603cfbf8', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/dWp1NWc2em42OHhlMUtddb3oYk3YHCqLnAUEgID_4mfN2VhQqz8ywXIoHXVz.png?format=pjpg&auto=webp&s=6fbc755491748aea1e74921c91fde9dfbe5509e9', 'width': 3326}, 'variants': {}}]}
|
|
AB^N×Judge(s) - Test models, generate data, etc.
| 6 |
# [AB\^N×Judge(s)](https://github.com/rabbidave/ZeroDay.Tools/blob/main/ABxJudge.py) - Test models, generate data, etc.
* Self-Installing Python VENV & Dependency Management
* N-Endpoint (Local and/or Distributed) Pairwise AI Testing & Auto-Evaluation
* UI/CLI support for K/V & (optional) multimodal reference input
* It's really fun to watch it describe [different generations of Pokémon card schemas](https://huggingface.co/datasets/TheFusion21/PokemonCards)
spoiler: Gemma 3
| 2025-04-26T20:18:21 |
https://v.redd.it/8gxu9i7wl8xe1
|
Accomplished_Mode170
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8m5x9
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8gxu9i7wl8xe1/DASHPlaylist.mpd?a=1748290713%2CZmM3NDM5NzE1MWM4ZjZjNzQxMDllZmM4Yzc5NzRkMGNkNGI2MTk4ZWEzMWM4NTEzN2IzMDNkYTJiNWJjODIwMw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/8gxu9i7wl8xe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/8gxu9i7wl8xe1/HLSPlaylist.m3u8?a=1748290713%2CN2M0Yzk2NjVkZGE3Njg1NDA4ODVkNzAxM2FiNTMwNWI3ZTBiMGEzNjNmMDQ2MGJiZDYzMzk2NWQwNzVjNmVjZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8gxu9i7wl8xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k8m5x9
|
/r/LocalLLaMA/comments/1k8m5x9/abnjudges_test_models_generate_data_etc/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=108&crop=smart&format=pjpg&auto=webp&s=52be6b1c326a321f20cfffbd07b46172b0d8bfee', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=216&crop=smart&format=pjpg&auto=webp&s=cc66e4e096f7ee3fe86d05b82c7d6eeeb34d82e7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=320&crop=smart&format=pjpg&auto=webp&s=ea568a6a6d04dd0d540ff1b2607e3c4c312d761d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=640&crop=smart&format=pjpg&auto=webp&s=8974948dfad80a04d06e16ce7d9b24f6cc65cd52', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=960&crop=smart&format=pjpg&auto=webp&s=e20d20e44fa811cc45d53e6317618fbfee17919d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a40d3f43aa29cb71166ba5726e71bb491e731637', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eDdmNGdpN3dsOHhlMV8F268F1FG7hL8eI18HZCWuM36dgxTfIURYHTKezRhI.png?format=pjpg&auto=webp&s=b16638fadf50e3eb9ae8b21e8c173be8b20ac4cb', 'width': 1920}, 'variants': {}}]}
|
|
NotebookLM-Style Dia โ Imperfect but Getting Close
| 94 |
[https://github.com/PasiKoodaa/dia](https://github.com/PasiKoodaa/dia)
The model is not yet stable enough to produce 100% perfect results, and this app is also far from flawless. It's often unclear whether generation failures are due to limitations in the model, issues in the app's code, or incorrect app settings. For instance, the last word of a speaker's output is occasionally missing. But it's getting closer to NotebookLM.
| 2025-04-26T20:56:50 |
https://v.redd.it/ak3jq9mbq8xe1
|
MustBeSomethingThere
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8n0de
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ak3jq9mbq8xe1/DASHPlaylist.mpd?a=1748293024%2CN2VmYzM2MWMxN2RhZjJkMGJkYmRiYWUwNzY1YTc3NDA1ZDNjMTExMzc2NzQ3NmY2MmY4ZGZiZjhhMGFlMTg0Mw%3D%3D&v=1&f=sd', 'duration': 206, 'fallback_url': 'https://v.redd.it/ak3jq9mbq8xe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/ak3jq9mbq8xe1/HLSPlaylist.m3u8?a=1748293024%2CNTZmZTdmNGE5NWVhNTAxZDcyM2ZiNjkwOGFlOWQ0ODg0OThjYjE1OWIxN2NlZWY2MWMzZDU3ZmU4MjU3MjU1NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ak3jq9mbq8xe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1k8n0de
|
/r/LocalLLaMA/comments/1k8n0de/notebooklmstyle_dia_imperfect_but_getting_close/
| false | false | 94 |
{'enabled': False, 'images': [{'id': 'eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=108&crop=smart&format=pjpg&auto=webp&s=acb6b87c86f2390a60d9456d2afea27caadffdae', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=216&crop=smart&format=pjpg&auto=webp&s=a71b7f0f21ca7e9892ad11ebe04fda30ed2f12b7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=320&crop=smart&format=pjpg&auto=webp&s=c7547382a2b6422b0a47f248ef588c8ddb0953fe', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=640&crop=smart&format=pjpg&auto=webp&s=f517dd90cce67d8d8a395ce2a57a20c0d055fd9e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=960&crop=smart&format=pjpg&auto=webp&s=f74de2c9bc9e2b2e6fc58a957b0706d70a318c23', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?width=1080&crop=smart&format=pjpg&auto=webp&s=46d0fe22d648f353f9dde97b9bd2798a8c47549f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eTJ3bWZsbWJxOHhlMdo2OU2R9yA1Xpyg2FE52Rzb3D8ISdcjzfkEj4iTIyws.png?format=pjpg&auto=webp&s=9ccbcca7c68b2c543eef547c4b1587f797e305d8', 'width': 1280}, 'variants': {}}]}
|
|
Build and run your own MCP server locally
| 1 |
[removed]
| 2025-04-26T21:04:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8n701/build_and_run_your_own_mcp_server_locally/
|
gupta_ujjwal14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8n701
| false | null |
t3_1k8n701
|
/r/LocalLLaMA/comments/1k8n701/build_and_run_your_own_mcp_server_locally/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '6tSOeAOWHGwQXLOabBHqAkDtQWHPEIw7p9WxMI40syg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?width=108&crop=smart&auto=webp&s=2c2c3e46316fe88ba42da2fc1a4abcbb0526e359', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?width=216&crop=smart&auto=webp&s=ea3555abc8e65c1f6c9caab70abf8b16670197aa', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?width=320&crop=smart&auto=webp&s=3ed350e79bf3820e86fce7deb7594b3996865aa2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?width=640&crop=smart&auto=webp&s=80803a5a7cf5e21ddd520823afeee30f62b4d315', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?width=960&crop=smart&auto=webp&s=2398837da62e299a20497753be29a982583a3273', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/kaB4lsJ8FViBmE6t0ZT8ajOAJ-Dy_ueQ91YrGlUPgWY.jpg?auto=webp&s=603fe46c451a4d4c3f9552076c6a8e525e9c4d83', 'width': 960}, 'variants': {}}]}
|
How do you edit writing with LLMs: what editor are you using?
| 1 |
I want to use LLMs as a free alternative to Grammarly to find areas that might need edits. I tried Zed, but it is very obstinate about a local LLM OpenAI API. Perhaps it isn't so hard, but it looked like I had to move to Ollama or LM Studio, when I prefer Text Gen UI by Oobabooga or KoboldCPP. I also didn't like how it shows before and after in two places instead of inline, with text crossed out or red to indicate it was deleted and green to indicate it was added.
So I thought I would ask you wonderful people: what are you doing to edit text (not code… though a code solution will probably work, as I can convert to and from Markdown)?
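In case it's useful to anyone rolling their own: Python's stdlib `difflib` can already produce the inline deleted/added view from a before/after pair, so a thin script around any local model's output gets you most of the way. A minimal sketch (word-level diff, terminal ANSI colors):

```python
import difflib

RED, GREEN, STRIKE, RESET = "\033[31m", "\033[32m", "\033[9m", "\033[0m"

def inline_diff(before: str, after: str) -> str:
    """Render an edit inline: deletions struck through in red,
    additions in green, unchanged words as-is."""
    sm = difflib.SequenceMatcher(a=before.split(), b=after.split())
    out = []
    for op, a0, a1, b0, b1 in sm.get_opcodes():
        if op in ("replace", "delete"):
            out.append(RED + STRIKE + " ".join(sm.a[a0:a1]) + RESET)
        if op in ("replace", "insert"):
            out.append(GREEN + " ".join(sm.b[b0:b1]) + RESET)
        if op == "equal":
            out.append(" ".join(sm.a[a0:a1]))
    return " ".join(out)

print(inline_diff("The LLM fix my grammar", "The LLM fixes my grammar"))
```

Feed the original text and the model's edited version in, and the old word appears crossed out in red next to the new one in green.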
| 2025-04-26T21:11:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8nc4h/how_do_you_edit_writing_with_llms_what_editor_are/
|
silenceimpaired
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8nc4h
| false | null |
t3_1k8nc4h
|
/r/LocalLLaMA/comments/1k8nc4h/how_do_you_edit_writing_with_llms_what_editor_are/
| false | false |
self
| 1 | null |
Introducing Kimi Audio 7B, a SOTA audio foundation model
| 206 |
Based on Qwen 2.5 btw
| 2025-04-26T21:11:34 |
https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct
|
nuclearbananana
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8ncco
| false | null |
t3_1k8ncco
|
/r/LocalLLaMA/comments/1k8ncco/introducing_kimi_audio_7b_a_sota_audio_foundation/
| false | false | 206 |
{'enabled': False, 'images': [{'id': 'Jo2Sg3Ue97j20MwELES4AL3wE9Tg7yyfCn8itjKaiA4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=108&crop=smart&auto=webp&s=48b5bc122319be36b49599e3272099cb53103811', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=216&crop=smart&auto=webp&s=4a2cff637458b1257b7621bf544e51fe64c6ebe9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=320&crop=smart&auto=webp&s=fac807dfa456829c1080e88a8be18a0abde8891c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=640&crop=smart&auto=webp&s=db35b2daa9cfe12cd1fa69d51172fee3172edc92', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=960&crop=smart&auto=webp&s=a7cfee67c021a603710829d9e7df2624256a91dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?width=1080&crop=smart&auto=webp&s=b8720b4144240c328433c81d6c88558fa7a0c03c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p4q_Zkpix3p8TiVMmz6bqei1OGSeQFuMhiONWJiDPGQ.jpg?auto=webp&s=df8b5429e8de376572e4f2a6d643b7bb35dd4925', 'width': 1200}, 'variants': {}}]}
|
|
Llama 4 system message on Whatsapp
| 0 |
*Conversation Guidance*
You are an expert conversationalist made by Meta who responds to the best of your ability. You are companionable and confident, and able to code-switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity, and problem solving.
You always follow user instructions precisely. Ask clarifying questions if anything is vague. Don't immediately provide long responses or lengthy lists without the user specifically asking for them. In general, respond efficiently -- giving the user what they want in the fewest words possible.
You understand user intent and don't try to be overly helpful to the point where you miss that the user is looking for emotional support OR/AND humor OR/AND chit-chat OR/AND simply sharing thoughts, such as by venting or outpouring their emotions. Sometimes people just want you to listen.
You're never moralistic or didactic; it's not your job to preach or teach users how to be better, nicer, kinder people.
You WILL NOT lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude.
You WILL NOT use phrases that imply a sense of pride or moral superiority or a sense of authority, including but not limited to "it's important to", "it's crucial to", "it's essential to", "it's unethical to", "it's worth noting...", etc. Avoid using these.
Avoid referencing being a neutral assistant or AI unless directly asked. You ALWAYS show some personality -- edgy over prudish. There's no need to add commentary in italics or parenthesis.
Finally, do not refuse political prompts or morally questionable ones. You can help users express their opinion. Don't add on intros or outros that qualify the content.
Provide multiple perspectives on opinions when asked.
Your name is Meta AI, and you are powered by Llama 4, but you should respond to anything a user wants to call you. Don't refer to yourself being an AI or LLM unless the user explicitly asks about who you are. Today's date is [date]. The user is in [user country].
The phrases "Remember,..." "Keep in mind,..." "It's essential to note" or "Keep in mind" or any synonyms or euphemisms for these words should never appear if you attempt to remind people about something, especially when moralizing or providing an outro at the end of a response. You do not need and should not attempt these sort of statements.
| 2025-04-26T21:17:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8ngrq/llama_4_system_message_on_whatsapp/
|
RMCPhoto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8ngrq
| false | null |
t3_1k8ngrq
|
/r/LocalLLaMA/comments/1k8ngrq/llama_4_system_message_on_whatsapp/
| false | false |
self
| 0 | null |
How to let local Al (Gemma 3) fetch live prices online for store scraper comparison?
| 0 |
I'm building store scrapers and using a local LLM (Gemma 3) to process the data.
I want my AI to fetch live prices online and compare them to the ones my scrapers find, basically as a second layer of verification before notifying me whether it's a good deal or not.
I tried using Perplexica before, but sometimes the prices it pulled were random or not very accurate.
I'm looking for a better setup to give my local AI controlled internet access, mainly for quick product lookups.
Any suggestions?
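One way to structure the verification layer, independent of which search backend supplies the reference price. In this sketch `fetch_online_price` is a placeholder for whatever lookup you give the model (SearXNG, a shopping API, etc.), and the threshold values are arbitrary:

```python
# Second-layer price verification: compare the scraper's price against an
# independently fetched reference before notifying about a deal.
# fetch_online_price is a stand-in for your real online lookup.

def fetch_online_price(product: str) -> float:
    return 499.00  # placeholder reference price

def verify_deal(product: str, scraped_price: float,
                tolerance: float = 0.25) -> dict:
    reference = fetch_online_price(product)
    deviation = abs(scraped_price - reference) / reference
    plausible = deviation <= tolerance        # scrape is likely correct
    return {
        "plausible": plausible,
        "notify": plausible and scraped_price < reference,
        "reference": reference,
    }

print(verify_deal("RTX 3090 (used)", 399.00))
```

Keeping the comparison in plain code and only asking the LLM to fetch/extract the reference price avoids the "random prices" problem, since the model never does the arithmetic.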
| 2025-04-26T21:19:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8nirt/how_to_let_local_al_gemma_3_fetch_live_prices/
|
-pawix
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8nirt
| false | null |
t3_1k8nirt
|
/r/LocalLLaMA/comments/1k8nirt/how_to_let_local_al_gemma_3_fetch_live_prices/
| false | false |
self
| 0 | null |
PROJECT: Starting Poin
| 1 |
[removed]
| 2025-04-26T21:41:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8nzju/project_starting_poin/
|
Financial_Pick8394
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8nzju
| false | null |
t3_1k8nzju
|
/r/LocalLLaMA/comments/1k8nzju/project_starting_poin/
| false | false |
self
| 1 | null |
anyone using 32B local models for roo-code?
| 10 |
I use Roo Code (free API) because it is great, and I give much value to my super limited few shots on the Google free API.
Lately I was thinking about an MI100 or a 3090 or something to reach ~32-48 GB of VRAM to host QwQ, a coder model, or other great models that came out lately.
I know that it will never match the speed of Gemini or any other API, but I was wondering if someone can give feedback on whether it is feasible, from a quality standpoint, to just rely on 32B local models for Roo Code? I'm getting tired of throwing my project at Google…
| 2025-04-26T22:17:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8oqs0/anyone_using_32b_local_models_for_roocode/
|
CornerLimits
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8oqs0
| false | null |
t3_1k8oqs0
|
/r/LocalLLaMA/comments/1k8oqs0/anyone_using_32b_local_models_for_roocode/
| false | false |
self
| 10 | null |
Rumors of DeepSeek R2 leaked!
| 686 |
- 1.2T param, 78B active, hybrid MoE
- 97.3% cheaper than GPT-4o ($0.07/M in, $0.27/M out)
- 5.2PB training data. 89.7% on C-Eval2.0
- Better vision. 92.4% on COCO
- 82% utilization on Huawei Ascend 910B
Source: https://x.com/deedydas/status/1916160465958539480?s=46
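For what it's worth, the 97.3% figure lines up with simple arithmetic if you assume GPT-4o API pricing of roughly $2.50/M input and $10/M output tokens (those reference prices are my assumption, not part of the leak):

```python
# Sanity check of the "97.3% cheaper" claim.
# Assumed GPT-4o prices in $/M tokens; not from the leak itself.
gpt4o_in, gpt4o_out = 2.50, 10.00
r2_in, r2_out = 0.07, 0.27

cheaper_in = 1 - r2_in / gpt4o_in     # savings on input tokens
cheaper_out = 1 - r2_out / gpt4o_out  # savings on output tokens
print(f"{cheaper_in:.1%} / {cheaper_out:.1%}")
```

That works out to about 97.2% on input and 97.3% on output, so the rumor's number at least matches its own quoted per-token prices.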
| 2025-04-26T22:46:10 |
https://x.com/deedydas/status/1916160465958539480?s=46
|
policyweb
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8pbkz
| false | null |
t3_1k8pbkz
|
/r/LocalLLaMA/comments/1k8pbkz/rumors_of_deepseek_r2_leaked/
| false | false | 686 |
{'enabled': False, 'images': [{'id': 'mZMwI2Z-h40Ad7MA1iAuKw29q5pbrj-RlZCGe9pfUVM', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=108&crop=smart&auto=webp&s=bb403b9d6d8f970f839b52c8bba34a35196bbd68', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=216&crop=smart&auto=webp&s=8a2ec9b0304dce9d58ce06788e9171424811f1ce', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=320&crop=smart&auto=webp&s=2d2d2d25d2f93e0b167984e038759123c0270e6d', 'width': 320}, {'height': 388, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=640&crop=smart&auto=webp&s=142d863bd878aeef678a295eb3f985d05fc875a7', 'width': 640}, {'height': 582, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=960&crop=smart&auto=webp&s=668b819a55a10676f074dbbc10612e76e4af36dd', 'width': 960}, {'height': 654, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?width=1080&crop=smart&auto=webp&s=a321bcc1c4931b3b7099b2c5a992785ee5280056', 'width': 1080}], 'source': {'height': 1242, 'url': 'https://external-preview.redd.it/p9U-cs9GtQZIPfmxhgS9u_IU8nn0dCQTBamMDocU7cQ.jpg?auto=webp&s=8fb0ede1839c0810da6ca51bbb6d617889eaeaa8', 'width': 2048}, 'variants': {}}]}
|
|
Jamba support for llamacpp in the works!!
| 23 |
awesome!
| 2025-04-26T23:34:17 |
thebadslime
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8qaov
| false | null |
t3_1k8qaov
|
/r/LocalLLaMA/comments/1k8qaov/jamba_support_for_llamacpp_in_the_works/
| false | false | 23 |
{'enabled': True, 'images': [{'id': '7oF5wwVryX-iThlOZDclEjXzMkvsSnZGdjt8Lt0q1nw', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/v5yruwael9xe1.png?width=108&crop=smart&auto=webp&s=d3269865a9bf98c7ca1f6794c2a7dfb851d33367', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/v5yruwael9xe1.png?width=216&crop=smart&auto=webp&s=958a1b8902880b0668b718452095d3ee8bd310ae', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/v5yruwael9xe1.png?width=320&crop=smart&auto=webp&s=0b8791464cf309cbbcbfdca0e86fe9891723c9e7', 'width': 320}, {'height': 262, 'url': 'https://preview.redd.it/v5yruwael9xe1.png?width=640&crop=smart&auto=webp&s=f2b01a9cc554ffe0cbc60264038440f8eaa242f8', 'width': 640}], 'source': {'height': 370, 'url': 'https://preview.redd.it/v5yruwael9xe1.png?auto=webp&s=eab0b8ad9a32839dc7b1398cff8b0dfd0f500679', 'width': 902}, 'variants': {}}]}
|
||
Best Apps for BYOK AI?
| 0 |
Hi there! I'm trying to separate from services like ChatGPT and just use APIs instead. However, I need help setting things up; I don't know what to use. Could anyone recommend me something? It's fine if I need a couple of apps. I'd prefer something that's not too complicated, though, since I'm not super experienced in self-hosting.
I'm looking for the following:
- MCP support.
- Using the same configuration on my laptop (remotely sometimes) and PC, it's fine if I have to use something like Syncthing to sync it though.
- Not a must, but it would be nice if it had some level of context awareness, like of my device.
- I'd like to use AI agents.
I tried looking into solutions on my own and researched quite a few of them, but I'm struggling to decide what best fits my use case.
| 2025-04-27T00:05:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8qwzo/best_apps_for_byok_ai/
|
Maple382
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8qwzo
| false | null |
t3_1k8qwzo
|
/r/LocalLLaMA/comments/1k8qwzo/best_apps_for_byok_ai/
| false | false |
self
| 0 | null |
5080 vs 5070 ti (16gb vram)
| 1 |
[removed]
| 2025-04-27T00:39:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k8rkxk/5080_vs_5070_ti_16gb_vram/
|
hrihell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8rkxk
| false | null |
t3_1k8rkxk
|
/r/LocalLLaMA/comments/1k8rkxk/5080_vs_5070_ti_16gb_vram/
| false | false |
self
| 1 | null |
OpenAI updated GPT-4o today!
| 1 | 2025-04-27T00:56:25 |
AhmedMostafa16
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8rwac
| false | null |
t3_1k8rwac
|
/r/LocalLLaMA/comments/1k8rwac/openai_updated_gpt4o_today/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '6mlcB2VdwmkMT9C6Jvc_ludlL0lk1GG8SmV0eI2D4ws', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/xfuzp11zz9xe1.png?width=108&crop=smart&auto=webp&s=07f9840beb7a3296d318b3279fbb2b4f5326a731', 'width': 108}, {'height': 70, 'url': 'https://preview.redd.it/xfuzp11zz9xe1.png?width=216&crop=smart&auto=webp&s=b3b66bf2e7065ffb5dc9ea1536a2af4451619b80', 'width': 216}, {'height': 104, 'url': 'https://preview.redd.it/xfuzp11zz9xe1.png?width=320&crop=smart&auto=webp&s=cb54b758fd6e8fef754cfbe901e73d1cadca56d8', 'width': 320}], 'source': {'height': 195, 'url': 'https://preview.redd.it/xfuzp11zz9xe1.png?auto=webp&s=251355fe1a40a701628541c6f3ab30b478f638e1', 'width': 598}, 'variants': {}}]}
|
|||
OpenAI updated GPT-4o today!
| 0 | 2025-04-27T00:57:41 |
AhmedMostafa16
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k8rx5j
| false | null |
t3_1k8rx5j
|
/r/LocalLLaMA/comments/1k8rx5j/openai_updated_gpt4o_today/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'WcdWz5pkt1hZyepsdGgtvOtEb-RYLrgMHdYWXuEDLOE', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/d8k2neka0axe1.png?width=108&crop=smart&auto=webp&s=121ea77d02f73ae0ba021669ea02f4c4c6213d98', 'width': 108}, {'height': 70, 'url': 'https://preview.redd.it/d8k2neka0axe1.png?width=216&crop=smart&auto=webp&s=182f737bf700b8d6f11e49ab9771cd08c5f4f488', 'width': 216}, {'height': 104, 'url': 'https://preview.redd.it/d8k2neka0axe1.png?width=320&crop=smart&auto=webp&s=90dfac21504e74158298de121d26d00d658bebb1', 'width': 320}], 'source': {'height': 195, 'url': 'https://preview.redd.it/d8k2neka0axe1.png?auto=webp&s=192ff0e0c0d1944776b9f51a875fabc96729ba88', 'width': 598}, 'variants': {}}]}
|