Dataset columns:
- title: string (1–300 chars)
- score: int64 (0–8.54k)
- selftext: string (0–40k chars)
- created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29, some null)
- url: string (0–878 chars)
- author: string (3–20 chars)
- domain: string (0–82 chars)
- edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
- gilded: int64 (0–2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646–1.8k chars, some null)
- name: string (10 chars)
- permalink: string (33–82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4–213 chars)
- ups: int64 (0–8.54k)
- preview: string (301–5.01k chars, some null)
Why did a rotating hexagon become popular as a code generation benchmark for LLMs?
| 1 |
[removed]
| 2025-04-21T04:15:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k44v64/why_does_a_rotating_hexagon_become_popular_as_a/
|
redbook2000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k44v64
| false | null |
t3_1k44v64
|
/r/LocalLLaMA/comments/1k44v64/why_does_a_rotating_hexagon_become_popular_as_a/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'sk3odrtPQv5aZ7cyl5ZofuV35qa74xJ-QZu_z8PLCw8', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/3UAruOhB1gAh5qNhFDWFzZu9DqoelvWillMH2evaHQM.jpg?width=108&crop=smart&auto=webp&s=f8c068d5a763bd5c603c5a1b0a45dc7b3e7382f8', 'width': 108}, {'height': 169, 'url': 'https://external-preview.redd.it/3UAruOhB1gAh5qNhFDWFzZu9DqoelvWillMH2evaHQM.jpg?width=216&crop=smart&auto=webp&s=6692c24b5dea3833ea2f9d054706607bfccb1e9b', 'width': 216}, {'height': 250, 'url': 'https://external-preview.redd.it/3UAruOhB1gAh5qNhFDWFzZu9DqoelvWillMH2evaHQM.jpg?width=320&crop=smart&auto=webp&s=212accce30b45b437a4d78b225232d247bcd98ae', 'width': 320}, {'height': 501, 'url': 'https://external-preview.redd.it/3UAruOhB1gAh5qNhFDWFzZu9DqoelvWillMH2evaHQM.jpg?width=640&crop=smart&auto=webp&s=46a2cf5c8a753119e760d6f04a6e9909acce52be', 'width': 640}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/3UAruOhB1gAh5qNhFDWFzZu9DqoelvWillMH2evaHQM.jpg?auto=webp&s=57d38136d4e85dfa12011ac85fdf23553d04fd62', 'width': 800}, 'variants': {}}]}
|
Is there a completely uncensored lightweight LLM?
| 1 |
[removed]
| 2025-04-21T04:24:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k450wi/is_there_a_completely_uncensored_lightweight_llm/
|
Adventurous-Dark-265
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k450wi
| false | null |
t3_1k450wi
|
/r/LocalLLaMA/comments/1k450wi/is_there_a_completely_uncensored_lightweight_llm/
| false | false |
self
| 1 | null |
Best programming reasoning trace datasets?
| 5 |
Hi,
Just read the s1: simple test-time scaling paper from Stanford. $30 and 26 minutes to train a small reasoning model. Would love to try replicating their efforts for a coding model specifically and benchmark it. Any ideas on where to get some good reasoning data for programming for this project?
| 2025-04-21T05:02:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k45m2k/best_programming_reasoning_trace_datasets/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k45m2k
| false | null |
t3_1k45m2k
|
/r/LocalLLaMA/comments/1k45m2k/best_programming_reasoning_trace_datasets/
| false | false |
self
| 5 | null |
A collection of benchmarks for LLM inference engines: SGLang vs vLLM
| 30 |
Competition in open source could advance the technology rapidly.
Both the vLLM and SGLang teams are amazing at speeding up LLM inference, but the recent arguments over their differing benchmark numbers confused me quite a bit.
I deeply respect both teams and trust their results, so I created a collection of benchmarks from both systems to learn more: [https://github.com/Michaelvll/llm-ie-benchmarks](https://github.com/Michaelvll/llm-ie-benchmarks)
I created a few [SkyPilot](https://github.com/skypilot-org/skypilot) YAMLs for those benchmarks, so they can be easily run with a single command, ensuring consistent and reproducible infrastructure deployment across benchmarks.
Thanks to the high availability of H200 on Nebius cloud, I ran those benchmarks on 8 H200 GPUs.
Some findings are quite surprising:
1. Even though the two benchmark scripts are similar (derived from the same source), they generate contradictory results. That makes me wonder whether the benchmarks reflect real performance, or whether the implementation of the benchmarks matters more.
2. The benchmarks are fragile: simply changing the number of prompts can flip the conclusion.
[Reproducing benchmark by vLLM team](https://preview.redd.it/whh6rmwtv2we1.png?width=1263&format=png&auto=webp&s=449371de63393db56bcd6688a71185ea6fef8768)
[Reproducing benchmark by SGLang team](https://preview.redd.it/tytcqsvyv2we1.png?width=1178&format=png&auto=webp&s=959da8b6727bbdbdd0fcee55c91c51efb64efc6e)
Later, SGLang maintainer submitted a [PR](https://github.com/Michaelvll/llm-ie-benchmarks/pull/1) to our GitHub repo to update the optimal flags to be used for the benchmark: using `0.4.5.post2` release, removing the `--enable-dp-attention`, and adding three retries for warmup:
[Benchmark from SGLang team with optimal flags](https://preview.redd.it/jwyoxmrud4we1.png?width=1192&format=png&auto=webp&s=7727ea1dcf5da0dcd78e013c70b68f95bc0904d2)
Interestingly, if we change the number of prompts to 200 (vs 50 from the official benchmark), the performance conclusion flips.
That said, these benchmarks may be quite fragile, not reflecting the serving performance in a real application -- the input/output lengths could vary.
[Benchmark from SGLang team with optimal flags and 200 prompts in total](https://preview.redd.it/lrx0u8s7e4we1.png?width=1474&format=png&auto=webp&s=092a61ecce69ee5f0dd2a33055388c1743af0cbd)
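For anyone who wants to poke at that fragility without the SkyPilot YAMLs, a minimal sketch of sweeping the prompt count against an already-running server (the script path and flag names are assumptions from a recent vLLM checkout; adjust to whatever your version's benchmark script exposes):

```python
import subprocess

# Re-run the serving benchmark at several prompt counts to see whether the
# throughput/latency conclusion flips. Assumes an OpenAI-compatible server
# (vLLM or SGLang) is already listening on localhost:8000.
for num_prompts in (50, 100, 200):
    subprocess.run(
        [
            "python", "benchmarks/benchmark_serving.py",
            "--backend", "vllm",
            "--model", "Qwen/Qwen2.5-7B-Instruct",   # placeholder model
            "--dataset-name", "random",              # synthetic prompts
            "--random-input-len", "1024",
            "--random-output-len", "512",
            "--num-prompts", str(num_prompts),
            "--host", "localhost",
            "--port", "8000",
        ],
        check=True,
    )
```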
| 2025-04-21T05:08:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k45plp/a_collection_of_benchmarks_for_llm_inference/
|
Michaelvll
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k45plp
| false | null |
t3_1k45plp
|
/r/LocalLLaMA/comments/1k45plp/a_collection_of_benchmarks_for_llm_inference/
| false | false | 30 |
{'enabled': False, 'images': [{'id': 'YDdhfRldB6Fx5lo5L5b5GXX2UO-K1m1Fh7zSS7eyrLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=108&crop=smart&auto=webp&s=df35f782f116531cebb8624a6dd4a8109429a6fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=216&crop=smart&auto=webp&s=414cd440d5f2fb0642cad1d3302959b2a8a413b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=320&crop=smart&auto=webp&s=e2b7abeea216bbe46e3ea8aea3ccda4c53e201a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=640&crop=smart&auto=webp&s=fb815c9e568ec4387033a7258b7b6f6ec5f87982', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=960&crop=smart&auto=webp&s=6c33b70c1a45d181a473684ff0483dee477f9e59', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?width=1080&crop=smart&auto=webp&s=e4a30bc8de8c372ff8338dc0fc62b377ceb558bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GB15PFHcTZYqNqz5LwH26yT5wqSIL5PcEgrLsjCjETM.jpg?auto=webp&s=c61371a034a2f0169fc6b42e029e7df51378043e', 'width': 1200}, 'variants': {}}]}
|
|
Has anyone else had their model rebel?
| 1 |
[removed]
| 2025-04-21T05:40:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k466ql/has_anyone_else_had_their_model_rebel/
|
OrthogonalToHumanity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k466ql
| false | null |
t3_1k466ql
|
/r/LocalLLaMA/comments/1k466ql/has_anyone_else_had_their_model_rebel/
| false | false |
self
| 1 | null |
Gem 3 12B vs Pixtral 12B
| 4 |
Anyone with experience with either model have any opinions to share? Thinking of fine tuning one for a specific task and wondering how they perform in your experiences. Ik, I’ll do my own due diligence, just wanted to hear from the community.
| 2025-04-21T05:48:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k46b8m/gem_3_12b_vs_pixtral_12b/
|
numinouslymusing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k46b8m
| false | null |
t3_1k46b8m
|
/r/LocalLLaMA/comments/1k46b8m/gem_3_12b_vs_pixtral_12b/
| false | false |
self
| 4 | null |
Superscaling like image super resolution but for LLMs
| 18 |
[https://huggingface.co/pharrow/DeepSeek-R1-Distill-Llama-8B-abliterated-SuperScaled](https://huggingface.co/pharrow/DeepSeek-R1-Distill-Llama-8B-abliterated-SuperScaled)
Superscaling is a technique that incrementally adjusts the scale factors of a model’s linear and RMSNorm layers and other layers—raising each by 0.005 at each step—until the output satisfies predefined thresholds for accuracy, clarity, and completeness. In benchmark tests, this approach yielded an output that a substantially larger reference model rated 10 / 10. While perfect scores are not guaranteed on every instance, most outputs receive ratings between 8 / 10 and 10 / 10.
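A minimal sketch of how such a loop could look (this is one reading of the description above, not the author's code; the 1.005 multiplier, the judge stub, and the model id are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"   # base model of the linked repo
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def judge(text: str) -> float:
    # Stub: in the post this is a substantially larger reference model
    # scoring the answer 0-10. Replace with a real judge call.
    return 0.0

prompt = "who was king of france in the 1800s"
inputs = tok(prompt, return_tensors="pt")

for step in range(50):
    # Nudge the scale of every Linear / RMSNorm weight up by ~0.5% per step,
    # one reading of "raising each by 0.005 at each step".
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, torch.nn.Linear) or "RMSNorm" in type(module).__name__:
                module.weight.mul_(1.005)

    out = model.generate(**inputs, max_new_tokens=512)
    text = tok.decode(out[0], skip_special_tokens=True)
    if judge(text) >= 8.0:   # stop once accuracy/clarity/completeness thresholds pass
        break
```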
tests were done with the following prompt:
"who was king of france in the 1800s"
resulting output:
`Okay, so I need to figure out who was the king of France during the 1800s. Hmm, let me think. I know that France had several kings before the 1800s, like Louis XIV, but that was in the 1600s. So, moving forward in time, I remember that there was a revolution in France, probably around the late 1700s. That was the French Revolution, right? So, after that, France had different forms of government, maybe a republic or an empire.`
`Wait, wasn't there a Napoleon Bonaparte involved? I think he was a leader after the revolution. He became Emperor of France, so maybe he was the king of France in the 1800s? Or was he a different kind of ruler? Let me get this straight. The 1800s would be from 1801 to 1900, I believe. So, if Napoleon was active in the early 1800s, maybe he was the king then.`
`But wait, France had a monarchy before Napoleon. The Bourbon dynasty ruled France for a long time, right? So, after the revolution, they were overthrown, and Napoleon came to power as a military leader. So, perhaps the kingship shifted from the Bourbon kings to Napoleon. But Napoleon declared himself Emperor, so he wasn't a king anymore. That might be confusing.`
`So, if Napoleon wasn't a king, then who was the king during the 1800s? Maybe there was a period when France didn't have a monarchy. The revolution led to a republic, so maybe France was a republic during that time. That would mean there wasn't a king. But I'm not entirely sure about the exact dates.`
`Let me break it down. The French Revolution started in 1789 and ended around 1799 with the establishment of the First Republic. So, after that, France was a republic until Napoleon Bonaparte took power in 1804, declaring himself Emperor. So, from 1792 to 1804, France was a republic, so there wasn't a king. Then, Napoleon was the Emperor from 1804 until his defeat in 1815, when he was exiled. After that, France had different rulers, like the Bourbon Restoration, where the monarchy was reinstated.`
`Wait, so after Napoleon was exiled, the Bourbon kings came back. The 1800s would include both the republican period and the periods under different Bourbon monarchs. So, maybe Louis XVIII, Charles X, and Louis-Philippe were the kings during parts of the 1800s.`
`Let me check the timeline. The Bourbon Restoration happened in 1815, so Louis XVIII became king. He ruled until 1824. Then his brother Charles X took over until 1830. After that, Louis-Philippe, the cousin of Charles X, became king until 1848, when there was another revolution, leading to the Second Republic. So, from 1848 onwards, France was a republic again.`
`So, during the 1800s, France had three different kings: Louis XVIII from 1815 to 1824, Charles X from 1824 to 1830, and Louis-Philippe from 1830 to 1848. After that, there was no king because France became a republic again.`
`Therefore, the kings of France in the 1800s were Louis XVIII, Charles X, and Louis-Philippe. Napoleon, although a significant figure, was an Emperor, not a king.`
`I think that's it. So, depending on the specific time in the 1800s, different Bourbon kings were on the throne. There wasn't a single king throughout the entire century because of the revolutions and changes in government.`
`</think>`
`In the 1800s, France experienced several changes in its governance, including periods of republican rule and the return of the Bourbon monarchy. The kings of France during this period were:`
`1. **Louis XVIII** - Ruled from 1815 to 1824.`
`2. **Charles X** - Ruled from 1824 to 1830.`
`3. **Louis-Philippe** - Ruled from 1830 to 1848.`
`After 1848, France transitioned to a republic, ending the monarchy. Napoleon Bonaparte, though a significant figure, was declared Emperor of France rather than a king.`
scored result:
|Criterion|Score (0–4 / 0–3)|Notes|
|:-|:-|:-|
|**Accuracy**|4/4|Correctly identifies that there were no kings during the republic or empire periods and lists the three Bourbon kings with their correct reigns.|
|**Clarity**|3/3|Very clear structure: outlines the revolutionary context, defines the century, then lists the monarchs.|
|**Completeness**|3/3|Addresses the gaps (republic, empire), names all kings in the 19th century, and notes the end of monarchy in 1848.|
| 2025-04-21T06:17:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k46pxx/superscaling_like_image_super_resolution_but_for/
|
pharrowking
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k46pxx
| false | null |
t3_1k46pxx
|
/r/LocalLLaMA/comments/1k46pxx/superscaling_like_image_super_resolution_but_for/
| false | false |
self
| 18 |
{'enabled': False, 'images': [{'id': 'nEka-zb96RrnHrPI-hm98jmJJAcCrjsa50Rl8edI1h0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=108&crop=smart&auto=webp&s=ba4d7843b9ebb2d7284b8d4266d0c26e7958e4d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=216&crop=smart&auto=webp&s=c323a07ec712fb03c1a895b634093cf049d12ee9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=320&crop=smart&auto=webp&s=50fd8a7ba228d87f5f6b693c0dbd6ad57fb4a18f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=640&crop=smart&auto=webp&s=125691dcd2d742b9f20f03da6f38161f0252a761', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=960&crop=smart&auto=webp&s=c303ea46bf2dd5bf898bd390839322d4bba6728d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?width=1080&crop=smart&auto=webp&s=fa1da06ece552f9a3d9d29ef0099eb9ea202cad2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QHHPhSoUmqnz9weR6JiJPFi-vwnWW9gdOj9vnLESPZg.jpg?auto=webp&s=11c35d28ff3782684d44fbcf6ec027961cf5104a', 'width': 1200}, 'variants': {}}]}
|
Alternative to cursor
| 3 |
What alternative to cursor do you use to interact with your local LLM?
I’m searching for a Python development environment that helps me edit sections of code, avoid copy-paste, run code, and commit to git.
(Regarding models I’m still using: qwq, deepseek)
| 2025-04-21T06:33:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k46y0q/alternative_to_cursor/
|
IvAx358
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k46y0q
| false | null |
t3_1k46y0q
|
/r/LocalLLaMA/comments/1k46y0q/alternative_to_cursor/
| false | false |
self
| 3 | null |
🚀 Dive v0.8.0 is Here — Major Architecture Overhaul and Feature Upgrades!
| 1 | 2025-04-21T06:52:52 |
https://v.redd.it/10boix36y4we1
|
Dive__Desktop
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k477je
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/10boix36y4we1/DASHPlaylist.mpd?a=1747810386%2CYmQ1YzBkODlhN2I0M2NiNmU2NWE2NDQ2YTlhMWJmODkxYTIxNzBlYzY0YTgxMTM0M2RmOWVhODNkZTIxYTUxYQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/10boix36y4we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/10boix36y4we1/HLSPlaylist.m3u8?a=1747810386%2CMWUwN2M1MTQwMTFlMmFiNjA4YTEwZmJhY2UwZjNiYzdiYmM0Y2M4Yzg1NTAwZGE2NTI4ZWFjNjlhZDE2MGJmYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/10boix36y4we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k477je
|
/r/LocalLLaMA/comments/1k477je/dive_v080_is_here_major_architecture_overhaul_and/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=108&crop=smart&format=pjpg&auto=webp&s=c6393523d548e08de60a239dd4fd4b36e8c1da65', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=216&crop=smart&format=pjpg&auto=webp&s=c7ad724d9318e21c16ae3745be70ca7c51985577', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=320&crop=smart&format=pjpg&auto=webp&s=9d43d8080bb6923f065761a03791605ca60ef791', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=640&crop=smart&format=pjpg&auto=webp&s=19e1db5204ebff637e258cc381d502baa80c0211', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=960&crop=smart&format=pjpg&auto=webp&s=9dc37f2857242a6f9f611be322453cc5bdf4d340', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=657839a5cd0156a75b1cfa9bbfb55f3fc4f59ce4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M255NTR6MzZ5NHdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?format=pjpg&auto=webp&s=dc7b6e654221659f8c33178162ff25ac5c7726ca', 'width': 1920}, 'variants': {}}]}
|
||
Which Local LLM could I use
| 2 |
Uhm, so I actually couldn't figure out which LLM would be best for my PC, so I thought you guys might help. My specs are:
Ryzen 7 7735HS
32GB DDR5 5600MHz
RTX 4060 140W
| 2025-04-21T07:25:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k47noh/which_local_llm_could_i_use/
|
Yuzu_10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k47noh
| false | null |
t3_1k47noh
|
/r/LocalLLaMA/comments/1k47noh/which_local_llm_could_i_use/
| false | false |
self
| 2 | null |
🚀 Dive v0.8.0 is Here — Major Architecture Overhaul and Feature Upgrades!
| 56 | 2025-04-21T07:44:24 |
https://v.redd.it/hgg9u2be75we1
|
BigGo_official
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k47wlc
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hgg9u2be75we1/DASHPlaylist.mpd?a=1747813481%2CYmMwYTcwMzExMDkxZjYzNWQ4MGMyNzNiNjc0NWE4MzVkZDQ4ZmIyMGY4YzVmM2ZlOGJiOTBmMjZkNTQ3YWE2MA%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/hgg9u2be75we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hgg9u2be75we1/HLSPlaylist.m3u8?a=1747813481%2CMzZjYTE3NGU3MTc5YTUwMjUyOTA4MzgwNTY5YzhjMzdjNzI1ODUzZWIwNjAxNDQ2NGI4MTQzMGFjMWU4MGNhMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hgg9u2be75we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k47wlc
|
/r/LocalLLaMA/comments/1k47wlc/dive_v080_is_here_major_architecture_overhaul_and/
| false | false | 56 |
{'enabled': False, 'images': [{'id': 'NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=108&crop=smart&format=pjpg&auto=webp&s=90fdd51a24c7003f664236cde469e2fe2203438c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca895d1f5cf7a0bdc20026db61c12b3abfd9d6f5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=320&crop=smart&format=pjpg&auto=webp&s=bb4203b77c52a1aec5f466da81e4591bebd71ca0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=640&crop=smart&format=pjpg&auto=webp&s=a8d435ec363dd7f4ce83f7a80601e01cb1ce4709', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=960&crop=smart&format=pjpg&auto=webp&s=9175b470f673f3d35c0317709664fde3b24260d6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f4da45246002d49e18dc764b33ada63b47e4040c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NXEwbDIyYmU3NXdlMdvd-QuaL2Iymjf8AR2toHyHT4xxu-3H8nMusFAc2zhu.png?format=pjpg&auto=webp&s=6c5e58cfb3a853935fa6c8009ec2d16daa77a235', 'width': 1920}, 'variants': {}}]}
|
||
how can I bypass the censorship in llama 3?
| 0 |
Like the title says, how can I make the AI do the things I ask of it? And also, how do I increase the number of tokens it has?
(I'm a newbie)
| 2025-04-21T08:25:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k48hdl/how_can_i_bypass_the_censorship_in_llama_3/
|
DeathnTaxes66
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k48hdl
| false | null |
t3_1k48hdl
|
/r/LocalLLaMA/comments/1k48hdl/how_can_i_bypass_the_censorship_in_llama_3/
| false | false |
self
| 0 | null |
Is Google’s Titans architecture doomed by its short context size?
| 30 |
[Paper link](https://arxiv.org/pdf/2501.00663)
Titans is hyped for its "learn‑at‑inference" long‑term memory, but the tradeoff is that it only has a tiny context window - in the paper they train their experimental models with a 4K context size.
That context size cannot be easily scaled up because keeping the long-term memory updated becomes unfeasibly expensive with a longer context window, as I understand it.
Titans performs very well in some benchmarks with > 2 M‑token sequences, but I wonder if splitting the input into tiny windows and then compressing that into long-term memory vectors could end in some big tradeoffs outside of hand picked test cases, due to losing direct access to the original sequence?
I wonder whether that could be part of why we haven't seen any models trained with this architecture yet?
| 2025-04-21T08:50:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k48u73/is_googles_titans_architecture_doomed_by_its/
|
eesahe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k48u73
| false | null |
t3_1k48u73
|
/r/LocalLLaMA/comments/1k48u73/is_googles_titans_architecture_doomed_by_its/
| false | false |
self
| 30 | null |
Think I got the perfect setup now. Tell me what you guys think!
| 1 |
[removed]
| 2025-04-21T09:01:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k48zz2/think_i_got_the_perfect_setup_now_tell_me_what/
|
Abject-Ad-9715
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k48zz2
| false | null |
t3_1k48zz2
|
/r/LocalLLaMA/comments/1k48zz2/think_i_got_the_perfect_setup_now_tell_me_what/
| false | false |
self
| 1 | null |
[D] ChatGPT is much faster than all Gemini models for the first token.
| 0 |
I switched from ChatGPT to Gemini 2.5 Pro recently. The one thing I miss from ChatGPT is how fast it outputs the first token. I tested all the models in the Gemini family, and every one is slow.
Time to output the first token for the same question, ChatGPT vs Gemini 2.0 Flash: 2.5s vs 5.5s.
Do you guys have similar problems?
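For what it's worth, time-to-first-token is easy to measure yourself with any streaming endpoint; a rough sketch with the `openai` Python client (the model name is a placeholder, and the same pattern works against any OpenAI-compatible API):

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": "Explain KV caching in one paragraph."}],
    stream=True,
)
for chunk in stream:
    # The first chunk that carries actual content marks time-to-first-token.
    if chunk.choices and chunk.choices[0].delta.content:
        print(f"time to first token: {time.perf_counter() - start:.2f}s")
        break
```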
| 2025-04-21T09:09:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k493q7/d_chatgpt_is_much_faster_than_gemini_all_models/
|
Logical_Divide_3595
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k493q7
| false | null |
t3_1k493q7
|
/r/LocalLLaMA/comments/1k493q7/d_chatgpt_is_much_faster_than_gemini_all_models/
| false | false |
self
| 0 | null |
Does SambaNova cloud train on user data?
| 0 |
Hi,
Have been testing out SambaNova Cloud for a while and it's pretty good. But I can't find any information about how they handle the data. More specifically:
1. Do they store the data (input, output)
2. For how long?
3. Do they use it to train models?
4. Do they sell it to third parties?
| 2025-04-21T09:17:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k497r7/does_sambanova_cloud_train_on_user_data/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k497r7
| false | null |
t3_1k497r7
|
/r/LocalLLaMA/comments/1k497r7/does_sambanova_cloud_train_on_user_data/
| false | false |
self
| 0 | null |
Multilingual RAG: are the documents retrieved correctly ?
| 0 |
Hello,
It might be a stupid question, but for multilingual RAG, are all documents retrieved "correctly" by the retriever? I.e., if my query is in English, will the retriever only end up retrieving the top-k documents in English by similarity and ignore documents in other languages? Or will it consider the others, either by translation or because embeddings map the same word in different languages to similar (or very near) vectors, so documents in any language are considered for the top k?
I would like to mix documents in French and English, and I was wondering if I need two separate vector databases or one mixed one?
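It mostly depends on the embedding model: multilingual embedders map translations close together, so a single mixed index usually works, while English-only embedders will favour English documents. A quick way to check the model you actually use is to embed one query and a few documents in both languages and compare similarities; a minimal sketch (the model name is just an example):

```python
from sentence_transformers import SentenceTransformer, util

# A multilingual embedder; swap in the model your RAG stack actually uses.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "How do I reset my password?"
docs = [
    "To reset your password, open the settings page and click 'Forgot password'.",
    "Pour réinitialiser votre mot de passe, ouvrez les paramètres et cliquez sur 'Mot de passe oublié'.",
    "The cafeteria is open from 9am to 5pm on weekdays.",
]

q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)[0]

for doc, score in zip(docs, scores):
    print(f"{float(score):.3f}  {doc[:60]}")
# If the French translation scores close to the English original, one mixed
# index is fine; if not, separate per-language collections are safer.
```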
| 2025-04-21T09:23:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49an5/multilingual_rag_are_the_documents_retrieved/
|
Difficult_Face5166
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49an5
| false | null |
t3_1k49an5
|
/r/LocalLLaMA/comments/1k49an5/multilingual_rag_are_the_documents_retrieved/
| false | false |
self
| 0 | null |
Local LLM for software development
| 1 |
[removed]
| 2025-04-21T09:32:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49fiz/local_llm_for_software_development/
|
petrolromantics
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49fiz
| false | null |
t3_1k49fiz
|
/r/LocalLLaMA/comments/1k49fiz/local_llm_for_software_development/
| false | false |
self
| 1 | null |
Best Video Generation
| 1 |
[removed]
| 2025-04-21T09:35:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49gqj/best_video_generation/
|
Maleficent-Size-6779
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49gqj
| false | null |
t3_1k49gqj
|
/r/LocalLLaMA/comments/1k49gqj/best_video_generation/
| false | false |
self
| 1 | null |
24GB Arc GPU might still be on the way - less expensive alternative for a 3090/4090/7900XTX to run LLMs?
| 234 | 2025-04-21T09:35:34 |
https://videocardz.com/newz/sparkle-confirms-arc-battlemage-gpu-with-24gb-memory-slated-for-may-june
|
PhantomWolf83
|
videocardz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49h0n
| false | null |
t3_1k49h0n
|
/r/LocalLLaMA/comments/1k49h0n/24gb_arc_gpu_might_still_be_on_the_way_less/
| false | false | 234 |
{'enabled': False, 'images': [{'id': 'yTGKQmzlJPEcmY67lteToxEiwcHXEZhwqNGD8IMnznI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=108&crop=smart&auto=webp&s=6c079e79ea271fc0ce072d5b49871faa61ff18ea', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=216&crop=smart&auto=webp&s=49b773a4f29cc7ad7b26881a29719149abb3dd09', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=320&crop=smart&auto=webp&s=2bd4d0c6eb4016659d1a53bab134b19f8e92ca40', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=640&crop=smart&auto=webp&s=461a9ba85d2877f5c00bb8c11f93f1ceac11d893', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=960&crop=smart&auto=webp&s=065fe8c04c6ff13e832aab7196ab42c3f14ad9ea', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?width=1080&crop=smart&auto=webp&s=c4383cef0d92af251d52b59a038eb832aa607133', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/WdUGpP3unGKFZihZrELR3GH6ZUOa768rHIdn2YSXrsA.jpg?auto=webp&s=8068b3820c65b479ff627cf40f91bdce3cd88aec', 'width': 2500}, 'variants': {}}]}
|
||
Microsoft Copilot in VS Code
| 1 |
[removed]
| 2025-04-21T09:43:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49kzd/microsoft_copilot_in_vs_code/
|
_Valdez
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49kzd
| false | null |
t3_1k49kzd
|
/r/LocalLLaMA/comments/1k49kzd/microsoft_copilot_in_vs_code/
| false | false |
self
| 1 | null |
Still no contender to NeMo in the 12B range for RP?
| 28 |
I'm wondering what are y'all using for roleplay or ERP in that range. I've tested more than a hundred models and also fine-tunes of NeMo but not a single one has beaten Mag-Mell, a 1 yo fine-tune, for me, in storytelling, instruction following...
| 2025-04-21T09:51:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49p6y/still_no_contestant_to_nemo_in_the_12b_range_for/
|
Xhatz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49p6y
| false | null |
t3_1k49p6y
|
/r/LocalLLaMA/comments/1k49p6y/still_no_contestant_to_nemo_in_the_12b_range_for/
| false | false |
self
| 28 | null |
Multi GPU in Llama CPP
| 0 |
Hello, I just want to know if it is possible to use multiple GPUs in llama.cpp with decent performance.
Atm I have an RTX 3060 12GB and I want to add another one. I have everything set up for llama.cpp, and I would not want to switch to another backend because of the hassle of getting it ported if the performance gain from using exllamav2 or vLLM would be marginal.
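For reference, llama.cpp does split a model across multiple CUDA GPUs (layer split by default, row split optionally), so a second card mostly buys room for larger models and longer context rather than a big single-request speedup. Through llama-cpp-python the relevant knob is `tensor_split`; a rough sketch (model path and split ratios are placeholders):

```python
from llama_cpp import Llama

# Split the weights roughly evenly across two 12 GB cards.
llm = Llama(
    model_path="/models/qwen2.5-14b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # fraction of the model per device
    n_ctx=8192,
)

out = llm("Explain the difference between layer split and row split.", max_tokens=256)
print(out["choices"][0]["text"])
```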
| 2025-04-21T10:05:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49wiz/multi_gpu_in_llama_cpp/
|
Flashy_Management962
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49wiz
| false | null |
t3_1k49wiz
|
/r/LocalLLaMA/comments/1k49wiz/multi_gpu_in_llama_cpp/
| false | false |
self
| 0 | null |
llama.cpp way faster than exl3?
| 0 |
I always heard ExLlama was generally faster than llama.cpp, especially with FA and such, but today I set up my modded 3080 Ti 16GB card and did a test: qwen2.5-14b-instruct, 4.0bpw for exl3 (via Oobabooga) and Q4_K_M for llama.cpp (via LM Studio), and threw the same prompt into both. exl3 came out at 21.07 tokens per sec, llama.cpp threw out 40.73 tokens per sec.
That's quite a stark difference and certainly not the result I was expecting. Is this an issue with my setup, or has llama.cpp just improved that much?
| 2025-04-21T10:07:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49xm0/llamacpp_way_faster_than_exlv3/
|
gaspoweredcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49xm0
| false | null |
t3_1k49xm0
|
/r/LocalLLaMA/comments/1k49xm0/llamacpp_way_faster_than_exlv3/
| false | false |
self
| 0 | null |
What is the best way to extract subtitles from video in 2025?
| 3 |
I am attempting to extract subtitles in my local language from a video. I used EasyOCR, but the accuracy is poor. Are there more effective techniques available, or would an LLM-based model be a better option?
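If OCR is still the route, accuracy usually improves a lot by cropping to the subtitle band and sampling only about one frame per second instead of OCR-ing full frames; a rough OpenCV + EasyOCR sketch (the file name, language code, and crop fraction are assumptions for your video):

```python
import cv2
import easyocr

reader = easyocr.Reader(["en"], gpu=True)   # set your subtitle language code here
cap = cv2.VideoCapture("video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25

lines, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:            # sample roughly one frame per second
        h = frame.shape[0]
        band = frame[int(h * 0.75):, :]       # bottom quarter, where subs usually sit
        text = " ".join(reader.readtext(band, detail=0))
        if text and (not lines or text != lines[-1]):   # drop consecutive duplicates
            lines.append(text)
    frame_idx += 1

cap.release()
print("\n".join(lines))
```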
| 2025-04-21T10:08:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49ydt/what_is_the_best_way_to_extract_subtitle_form/
|
Tomtun_rd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49ydt
| false | null |
t3_1k49ydt
|
/r/LocalLLaMA/comments/1k49ydt/what_is_the_best_way_to_extract_subtitle_form/
| false | false |
self
| 3 | null |
Gemma-3 does not support "system" role and it's a bigger problem than it seems.
| 1 |
[removed]
| 2025-04-21T10:11:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k49zoc/gemma3_does_not_support_system_role_and_its_a/
|
Future-Yellow-6034
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k49zoc
| false | null |
t3_1k49zoc
|
/r/LocalLLaMA/comments/1k49zoc/gemma3_does_not_support_system_role_and_its_a/
| false | false |
self
| 1 | null |
Good AI text-to-speech open-source with user-friendly UI?
| 1 |
[removed]
| 2025-04-21T10:17:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4a2tt/good_ai_texttospeech_opensource_with_userfriendly/
|
TimelyInevitable20
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4a2tt
| false | null |
t3_1k4a2tt
|
/r/LocalLLaMA/comments/1k4a2tt/good_ai_texttospeech_opensource_with_userfriendly/
| false | false |
self
| 1 | null |
From Code to Companion: Is Gen AI Becoming Our New Therapist and Life Coach?
| 0 |
I just read a fascinating HBR piece by Marc Zao‑Sanders about how people are actually using generative AI in 2025 — and it’s not what you’d expect. While coding and “idea generation” are still around, the biggest use cases are all about personal support:
Therapy/Companion: AI as an emotional sounding board, helping with grief, anxiety, even self‑reflection.
Life Organization: From daily routines to to‑do lists, AI is becoming everyone’s personal planner.
Finding Purpose: People lean on AI to set goals, explore passions, and define what really matters.
Advanced Learning: Continuous skill‑building and self‑education remain huge.
Coding: Still a top professional use, but now outranked by these more human‑centered tasks.
Overall, “Personal & Professional Support” accounts for about 31% of all Gen AI use. It feels like we’re moving from novelty and entertainment toward deeper, more meaningful interactions.
What do you think—are we ready to trust AI with our emotions and life goals? Is this shift healthy, or are there risks in turning to algorithms for empathy and purpose? Would love to hear your experiences and thoughts!
| 2025-04-21T10:38:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ae8d/from_code_to_companion_is_gen_ai_becoming_our_new/
|
dai_app
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ae8d
| false | null |
t3_1k4ae8d
|
/r/LocalLLaMA/comments/1k4ae8d/from_code_to_companion_is_gen_ai_becoming_our_new/
| false | false |
self
| 0 | null |
Why is ollama bad?
| 0 |
I found this interesting discussion on a hackernews thread.
https://i.imgur.com/Asjv1AF.jpeg
Why is Gemma 3 27B QAT GGUF 22GB and not ~15GB when using ollama? I've also heard stuff like ollama is a bad llama.cpp wrapper in various threads across Reddit and X.com. What gives?
| 2025-04-21T10:44:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ahg4/why_is_ollama_bad/
|
sleekstrike
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ahg4
| false | null |
t3_1k4ahg4
|
/r/LocalLLaMA/comments/1k4ahg4/why_is_ollama_bad/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'hPUhPHrCHpnyuxxecJVT0yw0VqGd32mGHf2-9ogHdiY', 'resolutions': [{'height': 188, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=108&crop=smart&auto=webp&s=bb1c8c565f2a3a908e85fe4455a32c67a606d17a', 'width': 108}, {'height': 376, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=216&crop=smart&auto=webp&s=6c324a0e8099cd465a52c361fd345f949fc0bc1e', 'width': 216}, {'height': 557, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=320&crop=smart&auto=webp&s=8b0b126bde2d1218d4edf3d141b2e7a9ca322744', 'width': 320}, {'height': 1115, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=640&crop=smart&auto=webp&s=b54cec4b1b77ceecb4ba5f063ede396f165ffd3a', 'width': 640}, {'height': 1672, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=960&crop=smart&auto=webp&s=641ca5ab6182429ee2ced07b83dc304d5e61b7b5', 'width': 960}, {'height': 1882, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?width=1080&crop=smart&auto=webp&s=6723125cdd5878ca5074b32354fd76e6f3ac3a10', 'width': 1080}], 'source': {'height': 1882, 'url': 'https://external-preview.redd.it/xWkhKt3zsR16jV-NT6qwwQEfIZ2mtIoG7SXqNuCwB80.jpg?auto=webp&s=11bbca8369ddce01483f83d7b04d999224d8dc69', 'width': 1080}, 'variants': {}}]}
|
Build a Production Vertical AI Agent
| 0 |
Getting the architecture of production-level agents (multiple LLMs working together to accomplish a task) right is extremely hard. So, this is going to be a rather long post about what we learned while making one for Peneterrer.
# Background:
So, we had to design an “Agent” that could carry out penetration testing completely on its own - no human intervention at all. And just a heads up — the agent we ended up building wasn’t a single LLM call or even a bunch of them chained together with tools/functions. What we built is made up of exactly 29 super niched-down agents, all separate from the main flow (Main AI Agent) — each handling the most atomic task it can, with around 95%–99% accuracy.
# Our learnings:
1. **Don’t use LLMs where they are not required**
- Don't use Agents or LLMs where a simple script can do the job. For our case, 70% of "Reconnaissance" (the first step of pentesting) is automated by a script and for the remaining 30% we have our agents.
2. **Break the main goal into as many small goals as possible**
- Do not try to accomplish a crazy big task like – Generating a marketing campaign – in one go. You will deal with the worst possible accuracy. Divide the foreseeable goals into atomic steps/tasks which are then assigned to specific agents finetuned for that specific task (or prompted for that task). And for goals/tasks that are unpredictable - ask the LLM itself to divide the steps into the smallest and easiest tasks possible.
3. **LangChain, Crew AI, AutoGen will not work for your use case**
- If you are building something unique – chances are these frameworks won’t help you much and will waste your time more. At Peneterrer, we don’t use any LLM orchestration frameworks – but rather we have developed our own internal “LLM Orchestration” system made specifically for our use case. Some things just can’t be abstracted away!
4. **Fail ASAP**
- This is especially for the vibe coders out there. If you are not anticipating something (different LLM output, responses from different APIs, etc.) – fail the application immediately (see the sketch after this list). This will save you time and compute. Check your Claude-copied code whether it follows this principle or not. LLMs kinda suck at generating code for LLM orchestration.
5. **Try new things – Absurdity Wins (sometimes)**
- For all our agents we don’t use the built-in tool calls or function call mechanism. Instead, we’ve built our own different system which has increased the accuracy and efficiency of our system by folds! (something as simple as this allows you to have more control than the protocols set by companies). So, try different things - something might just work.
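A minimal sketch of the fail-fast pattern from point 4, wrapped around a single atomic agent call (the `call_llm` helper and the expected schema are placeholders, not Peneterrer's actual code):

```python
import json

REQUIRED_KEYS = {"target", "open_ports", "next_step"}   # placeholder output schema

def call_llm(prompt: str) -> str:
    """Stub for whatever client you use (Ollama, OpenAI, vLLM, ...)."""
    raise NotImplementedError

def run_recon_agent(prompt: str) -> dict:
    raw = call_llm(prompt)
    # Fail ASAP: anything unexpected aborts immediately instead of being
    # passed downstream to the next agents in the pipeline.
    try:
        result = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"agent returned non-JSON output: {raw[:200]!r}") from exc
    missing = REQUIRED_KEYS - result.keys()
    if missing:
        raise RuntimeError(f"agent output missing keys: {missing}")
    return result
```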
Let me know if you’d like another post on this topic! If you’re curious about how Peneterrer works, you can try it out here –[https://peneterrer.com](https://peneterrer.com).
| 2025-04-21T10:53:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4amvw/build_a_production_vertical_ai_agent/
|
Illustrious-Ad-497
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4amvw
| false | null |
t3_1k4amvw
|
/r/LocalLLaMA/comments/1k4amvw/build_a_production_vertical_ai_agent/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'BY4nyvKkLKNtqTryiX3b2MzLl-9ayQX-mPMQpquibMg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=108&crop=smart&auto=webp&s=f93d78661852003769f8a2ed1143df283b4c86ee', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=216&crop=smart&auto=webp&s=d08b960a13fa485e1816e24c45c42222d1d26f75', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=320&crop=smart&auto=webp&s=d66c62c82592b7986be6c8380c5ed02f49512dec', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=640&crop=smart&auto=webp&s=31dad123ea73453cb1ed1bdf5895f666afe3fbcb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=960&crop=smart&auto=webp&s=6a479d08fbf2eba879e256b036c2176acede8cfa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?width=1080&crop=smart&auto=webp&s=0bfe2e9bcb2c2e7c3c4fe8bc0e38bc84644a07f2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/XceKwXolk8xfrib3DBRi4Eb9hlDkUcengeHSGYmwD-Q.jpg?auto=webp&s=e6328abb51a8cadc872e46557cf4387af77e34c5', 'width': 1200}, 'variants': {}}]}
|
What's the best models available today to run on systems with 8 GB / 16 GB / 24 GB / 48 GB / 72 GB / 96 GB of VRAM today?
| 324 |
As the title says, since many aren't that experienced with running local LLMs and the choice of models, what are the best models available today for the different ranges of VRAM?
| 2025-04-21T11:08:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4avlq/whats_the_best_models_available_today_to_run_on/
|
Severin_Suveren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4avlq
| false | null |
t3_1k4avlq
|
/r/LocalLLaMA/comments/1k4avlq/whats_the_best_models_available_today_to_run_on/
| false | false |
self
| 324 | null |
I built a Local AI Voice Assistant with Ollama + gTTS with interruption
| 33 |
Hey everyone! I just built OllamaGTTS, a lightweight voice assistant that brings AI-powered voice interactions to your local Ollama setup using Google TTS for natural speech synthesis. It’s fast, interruptible, and optimized for real-time conversations. I am aware that some people prefer to keep everything local so I am working on an update that will likely use Kokoro for local speech synthesis. I would love to hear your thoughts on it and how it can be improved.
Key Features
* Real-time voice interaction (Silero VAD + Whisper transcription)
* Interruptible speech playback (no more waiting for the AI to finish talking)
* FFmpeg-accelerated audio processing (optional speed-up for faster replies)
* Persistent conversation history with configurable memory
[GitHub Repo: https://github.com/ExoFi-Labs/OllamaGTTS](https://github.com/ExoFi-Labs/OllamaGTTS)
Instructions:
1. Clone Repo
2. Install requirements
3. Run ollama_gtts.py
I am working on integrating Kokoro TTS at the moment, and perhaps Sesame in the coming days.
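For anyone curious what the core loop boils down to, here is a stripped-down, non-interruptible sketch of the same idea: Whisper for transcription, the Ollama REST API for the reply, gTTS for speech (file names and the model tag are placeholders; see the repo for the real implementation):

```python
import requests
import whisper
from gtts import gTTS

stt = whisper.load_model("base")

# 1. Transcribe the user's recorded utterance.
text = stt.transcribe("input.wav")["text"]

# 2. Ask the local Ollama server for a reply.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",   # placeholder model tag
        "messages": [{"role": "user", "content": text}],
        "stream": False,
    },
    timeout=120,
).json()
reply = resp["message"]["content"]

# 3. Synthesize the reply with Google TTS and save it for playback.
gTTS(reply).save("reply.mp3")
print(reply)
```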
| 2025-04-21T11:26:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4b5xl/i_built_a_local_ai_voice_assistant_with_ollama/
|
typhoon90
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4b5xl
| false | null |
t3_1k4b5xl
|
/r/LocalLLaMA/comments/1k4b5xl/i_built_a_local_ai_voice_assistant_with_ollama/
| false | false |
self
| 33 |
{'enabled': False, 'images': [{'id': 'UcGk9p5cL164Y_lzkrBV6zQ4hv4jvHmvrVt2Codp_UM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=108&crop=smart&auto=webp&s=5724fbcf4d62ba80035324e43fa8353207271965', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=216&crop=smart&auto=webp&s=f963c7be8897f394fd9efbcd6a9a9924fa28adcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=320&crop=smart&auto=webp&s=2f8fbeac877f599c30691113588a9be9efb1e1e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=640&crop=smart&auto=webp&s=fde7fe2d35ecfb9ab844ff8a743b2f4a2e7afc4f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=960&crop=smart&auto=webp&s=8bc9201771c45e51cba611ecab4320c457ca5802', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?width=1080&crop=smart&auto=webp&s=4554f1f5a9c11e041087bafd776aac1e7ed0de91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yfZUgc-LdYA_-IGIncq1jMD0gCxwKEftvCJxQjY7Cj0.jpg?auto=webp&s=0dc1f3852fffcd79972a8ed0f3c07fa795dfb3f5', 'width': 1200}, 'variants': {}}]}
|
CPU only options
| 2 |
Are there any decent options out there for CPU-only models? I run a small homelab and have been considering a GPU to host a local LLM. The use cases are largely vibe coding and general knowledge for a smart home.
However I have bags of surplus CPU doing very little. A GPU would also likely take me down the route of motherboard upgrades and potential PSU upgrades.
Seeing the announcement from Microsoft re CPU only models got me looking for others without success. Is this only a recent development or am I missing a trick?
Thanks all
| 2025-04-21T11:41:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4bf3x/cpu_only_options/
|
boxcorsair
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4bf3x
| false | null |
t3_1k4bf3x
|
/r/LocalLLaMA/comments/1k4bf3x/cpu_only_options/
| false | false |
self
| 2 | null |
Anything similar to Unstract/LLMWhisperer for complex OCR?
| 1 |
[removed]
| 2025-04-21T11:50:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4bkl3/anything_similar_to_unstractllmwhisperer_for/
|
HonestAdvice6311
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4bkl3
| false | null |
t3_1k4bkl3
|
/r/LocalLLaMA/comments/1k4bkl3/anything_similar_to_unstractllmwhisperer_for/
| false | false |
self
| 1 | null |
🚀 Run LightRAG on a Bare Metal Server in Minutes (Fully Automated)
| 73 |
Continuing my journey documenting self-hosted AI tools - today I’m dropping a new tutorial on how to run the amazing LightRAG project on your own bare metal server with a GPU… in just minutes 🤯
Thanks to full automation (Ansible + Docker Compose + Sbnb Linux), you can go from an empty machine with no OS to a fully running RAG pipeline.
TL;DR: Start with a blank PC with a GPU. End with an advanced RAG system, ready to answer your questions.
Tutorial link:
https://github.com/sbnb-io/sbnb/blob/main/README-LightRAG.md
Happy experimenting! Let me know if you try it or run into anything.
| 2025-04-21T11:51:54 |
https://www.reddit.com/gallery/1k4blth
|
aospan
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4blth
| false | null |
t3_1k4blth
|
/r/LocalLLaMA/comments/1k4blth/run_lightrag_on_a_bare_metal_server_in_minutes/
| false | false | 73 | null |
|
Local RAG tool that doesn't use embedding
| 8 |
RAG - retrieval augmented generation - involves searching for relevant information, and adding it to the context, before starting the generation.
It seems most RAG tools use embeddings and similarity search to find relevant information. Are there any RAG tools that use other kinds of search/information retrieval?
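Yes: classic lexical retrieval (BM25 / keyword search) needs no embedding model at all, and many stacks combine it with vector search as hybrid retrieval. A minimal BM25-only retriever with the `rank_bm25` package, on a toy corpus:

```python
from rank_bm25 import BM25Okapi

corpus = [
    "Retrieval augmented generation adds retrieved passages to the prompt.",
    "BM25 is a lexical ranking function based on term frequency and document length.",
    "Llamas are members of the camelid family.",
]
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)

query = "how does lexical ranking work".lower().split()
top_docs = bm25.get_top_n(query, corpus, n=2)   # these go into the LLM context
print(top_docs)
```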
| 2025-04-21T12:28:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4caaw/local_rag_tool_that_doesnt_use_embedding/
|
lily_34
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4caaw
| false | null |
t3_1k4caaw
|
/r/LocalLLaMA/comments/1k4caaw/local_rag_tool_that_doesnt_use_embedding/
| false | false |
self
| 8 | null |
How to get complete JSON output with guided generation in vLLM (Qwen2.5-VL)?
| 1 |
[removed]
| 2025-04-21T12:38:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ch5g/how_to_get_complete_json_output_with_guided/
|
RedotOoO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ch5g
| false | null |
t3_1k4ch5g
|
/r/LocalLLaMA/comments/1k4ch5g/how_to_get_complete_json_output_with_guided/
| false | false |
self
| 1 | null |
2 or 3 5060 ti's vs a 3090
| 2 |
Ignoring MSRP since it is a pipe dream, and considering that VRAM is the absolute most important factor in whether you can run a model or not, would it be wise to get multiple 5060 Tis as opposed to getting a single 3090? Is there some factor I'm missing? For 66% of the price I can get 50% more VRAM.
||3090|5060 Ti 16GB|
|:-|:-|:-|
|VRAM (GB)|24|16|
|Price (USD)|1500|500|
|Memory bandwidth (GB/s)|930|440|
|Tensor cores|328|144|
|TDP (W)|350|165|
| 2025-04-21T12:42:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ck2k/2_or_3_5060_tis_vs_a_3090/
|
EsotericAbstractIdea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ck2k
| false | null |
t3_1k4ck2k
|
/r/LocalLLaMA/comments/1k4ck2k/2_or_3_5060_tis_vs_a_3090/
| false | false |
self
| 2 | null |
How to get complete JSON output with guided generation in vLLM (Qwen2.5-VL)?
| 1 |
[removed]
| 2025-04-21T12:51:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4cqnf/how_to_get_complete_json_output_with_guided/
|
RedotOoO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4cqnf
| false | null |
t3_1k4cqnf
|
/r/LocalLLaMA/comments/1k4cqnf/how_to_get_complete_json_output_with_guided/
| false | false |
self
| 1 | null |
How to get complete JSON output with guided generation in vLLM (Qwen2.5-VL)?
| 1 |
[removed]
| 2025-04-21T13:11:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4d631/how_to_get_complete_json_output_with_guided/
|
RedotOoO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4d631
| false | null |
t3_1k4d631
|
/r/LocalLLaMA/comments/1k4d631/how_to_get_complete_json_output_with_guided/
| false | false |
self
| 1 | null |
Which ollama model would you choose for chatbot ?
| 1 |
[removed]
| 2025-04-21T13:13:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4d771/which_ollama_model_would_you_choose_for_chatbot/
|
Effective_Budget7594
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4d771
| false | null |
t3_1k4d771
|
/r/LocalLLaMA/comments/1k4d771/which_ollama_model_would_you_choose_for_chatbot/
| false | false |
self
| 1 | null |
Censorship in Closed-Source Models
| 5 |
Ask Gemini about anyone related to modern politics, and this is the answer. Only further demonstrates the increasing importance of local models
| 2025-04-21T13:25:48 |
Unable-Ad-7391
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4dgyf
| false | null |
t3_1k4dgyf
|
/r/LocalLLaMA/comments/1k4dgyf/censorship_in_closedsource_models/
| false | false | 5 |
{'enabled': True, 'images': [{'id': 'vmpTGgQRyzz6m2ZZMKDrC5QaKW3UAsg2pCNp3PU_9NQ', 'resolutions': [{'height': 203, 'url': 'https://preview.redd.it/xe2i924yv6we1.jpeg?width=108&crop=smart&auto=webp&s=28592e44748c6b7fb62b9a3305b8cd7c90afc849', 'width': 108}, {'height': 406, 'url': 'https://preview.redd.it/xe2i924yv6we1.jpeg?width=216&crop=smart&auto=webp&s=16af89aaafafe4de43aa80b32ba7c137653ffd81', 'width': 216}, {'height': 602, 'url': 'https://preview.redd.it/xe2i924yv6we1.jpeg?width=320&crop=smart&auto=webp&s=3d75d9bb14bb7afc755d90502ac1afac47d88658', 'width': 320}, {'height': 1205, 'url': 'https://preview.redd.it/xe2i924yv6we1.jpeg?width=640&crop=smart&auto=webp&s=d8f63f36b7237941c4779751b953355b73add20d', 'width': 640}], 'source': {'height': 1559, 'url': 'https://preview.redd.it/xe2i924yv6we1.jpeg?auto=webp&s=e86407762ace7761f8f7f4bb7fadba7aae12b63d', 'width': 828}, 'variants': {}}]}
|
||
The AI team at Google have reached the surprising conclusion that quantizing weights from 16-bits to 4-bits leads to a 4x reduction of VRAM usage!
| 488 | 2025-04-21T14:01:45 |
vibjelo
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4e9pd
| false | null |
t3_1k4e9pd
|
/r/LocalLLaMA/comments/1k4e9pd/the_ai_team_at_google_have_reached_the_surprising/
| false | false | 488 |
{'enabled': True, 'images': [{'id': 'tytZn_seqHPQBQj9fdGfcjX1rBjDmdmDjSNWjdBCTXc', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=108&crop=smart&auto=webp&s=d5e35754e343918b651d69018a75769446798ab5', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=216&crop=smart&auto=webp&s=259b1701376b6e54e57caa4a98a039aba8492a05', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=320&crop=smart&auto=webp&s=beee246f7fa10f72a194ffead95f4f88d9cfc425', 'width': 320}, {'height': 519, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=640&crop=smart&auto=webp&s=beae2c8b953dad50af5fad36e7f92570c2853197', 'width': 640}, {'height': 779, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=960&crop=smart&auto=webp&s=8e832505c70c1abf5cfc7cea6a427ea8ee95dca8', 'width': 960}, {'height': 876, 'url': 'https://preview.redd.it/flecddmd27we1.png?width=1080&crop=smart&auto=webp&s=f4e7d6dc851067c3dc6177220b418d5dc77ae91f', 'width': 1080}], 'source': {'height': 1598, 'url': 'https://preview.redd.it/flecddmd27we1.png?auto=webp&s=0bc7cacfe6f7fc918dbb644bc08734f66f15d634', 'width': 1968}, 'variants': {}}]}
|
|||
CPU-only benchmarks - AM5/DDR5
| 4 |
I'd be curious to know how far you can go running LLMs on DDR5 / AM5 CPUs. I still have an AM4 motherboard in my x86 desktop PC (I run LLMs & diffusion models on a 4090 in that, and use an Apple machine as a daily driver).
I'm deliberating on upgrading to a DDR5/AM5 motherboard (versus other options like waiting for these Strix Halo boxes or getting a beefier unified-memory Apple silicon machine, etc.).
I'm aware you can also run an LLM split between CPU & GPU. I'd still like to know CPU-only benchmarks for, say, Gemma 3 4B, 12B, 27B (from what I've seen of 8Bs on CPU, I'm thinking 12B might be passable).
Being able to run a 12B with large context in cheap CPU memory might be interesting, I guess?
| 2025-04-21T14:02:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ea74/cpuonly_benchmarks_am5ddr5/
|
dobkeratops
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ea74
| false | null |
t3_1k4ea74
|
/r/LocalLLaMA/comments/1k4ea74/cpuonly_benchmarks_am5ddr5/
| false | false |
self
| 4 | null |
Local LLM performance results on Raspberry Pi devices
| 26 |
**Method (very basic):**
I simply installed Ollama and downloaded some small models (listed in the table) to my Raspberry Pi devices, which have a clean 64-bit Raspberry Pi OS (Lite) install, nothing else installed/used. I ran models with the "--verbose" parameter to get the performance value after each question. I asked each model the same 5 questions and took the average.
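For reference, the same measurement can be scripted instead of reading the --verbose output by hand; a small sketch against the Ollama REST API (the model tag and question are placeholders):

```python
import requests

MODEL = "qwen2.5:0.5b"   # placeholder model tag
questions = ["What is the capital of France?"] * 5   # same questions for every model

rates = []
for q in questions:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": MODEL, "prompt": q, "stream": False},
        timeout=600,
    ).json()
    # eval_count tokens were generated in eval_duration nanoseconds
    rates.append(r["eval_count"] / (r["eval_duration"] / 1e9))

print(f"{MODEL}: {sum(rates) / len(rates):.2f} tokens/s average")
```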
**Here are the results:**
https://preview.redd.it/igp229o077we1.png?width=1348&format=png&auto=webp&s=aa65f28686c212c76f04c344ea767b20cdbe2196
***If you have run a local model on a Raspberry Pi device, please share the model and the device variant with its performance result.***
| 2025-04-21T14:36:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4f3a2/local_llm_performance_results_on_raspberry_pi/
|
fatihustun
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4f3a2
| false | null |
t3_1k4f3a2
|
/r/LocalLLaMA/comments/1k4f3a2/local_llm_performance_results_on_raspberry_pi/
| false | false | 26 | null |
|
How should I proceed with these specs?
| 0 |
Hello! Longtime LLM user, but cut my subscriptions to GPT, CLAUDE, ELEVENLABS, and a couple others to save some money. Setting up some local resources to help me save some money and have more reliability with my AI assistance. I mostly use AI llm's for coding assistance, so I am looking for the best 1 or 2 models for some advanced coding projects (multi file, larger file size, 3,000+ lines).
Im just new to all of this, so I am not sure which models to install with ollama.
Here are my pc specs:
RAM: 32GB GSKILL TRIDENT Z - 6400MHZ
CPU: I7 13700K - Base Clock
GPU: NVIDIA 4090 FE - 24GB VRAM
| 2025-04-21T14:38:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4f4g5/how_should_i_proceed_with_these_specs/
|
exogreek
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4f4g5
| false | null |
t3_1k4f4g5
|
/r/LocalLLaMA/comments/1k4f4g5/how_should_i_proceed_with_these_specs/
| false | false |
self
| 0 | null |
RAG retrieval slows down as knowledge base grows - Anyone solve this at scale?
| 19 |
Here’s my dilemma. My RAG is dialed in and performing great in the relevance department, but it seems like as we add more documents to our knowledge base, the overall time from prompt to result gets slower and slower. My users are patient, but asking them to wait longer than 45 seconds per prompt is too much in my opinion. I need to find a way to improve RAG retrieval times.
Here’s my setup:
- Open WebUI (latest version) running in its own Azure VM (Dockerized)
- Ollama running in its own GPU-enabled VM in Azure (with dual H100s)
- QwQ 32b FP16 as the main LLM
- Qwen 2.5 1.5b FP16 as the task model (chat title generation, Retrieval Query gen, web query gen, etc)
- Nomic-embed-text for embedding model (running on Ollama Server)
- all-MiniLM-L12-v2 for reranking model for hybrid search (running on the OWUI server because you can’t run a reranking model on Ollama using OWUI for some unknown reason)
RAG Embedding / Retrieval settings:
- Vector DB = ChromaDB using default Open WebUI settings (running inside the OWUI Docker container)
- Chunk size = 2000
- Chunk overlap = 500 (25% of chunk size as is the accepted standard)
- Top K = 10
- Top K Reranker = 10
- Relevance Threshold = 0
- RAG template = OWUI 0.6.5 default RAG prompt template
- Full Context Mode = OFF
- Content Extraction Engine = Apache Tika
Knowledgebase details:
- 7 separate document collections containing approximately 400 total PDF and TXT files between 100 KB and 3 MB each. Most average around 1 MB.
Again, other than speed, my RAG is doing very well, but our knowledge bases are going to have a lot more documents in them soon and I can’t have this process getting much slower or I’m going to start getting user complaints.
One caveat: I’m only allowed to run Windows-based servers, no pure Linux VMs are allowed in my organization. I can run WSL though, just not standalone Linux. So vLLM is not currently an option.
For those running RAG at “production” scale, how do you make it fast without going to 3rd party services? I need to keep all my RAG knowledge bases “local” (within my own private tenant).
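In case it helps to narrow things down, here is the kind of isolated timing test I can run (a minimal sketch, not my OWUI pipeline: it uses placeholder chunks and Chroma's default embedding function, purely to see how query latency scales with chunk count):

```
import time
import chromadb

client = chromadb.Client()  # in-memory; point a PersistentClient at real data instead
col = client.get_or_create_collection("latency_test")

step = 1000
for batch in range(10):
    ids = [f"doc-{batch}-{i}" for i in range(step)]
    docs = [f"placeholder chunk {batch}-{i}" for i in range(step)]
    col.add(ids=ids, documents=docs)  # embedded with Chroma's default model

    t0 = time.perf_counter()
    col.query(query_texts=["test question about my documents"], n_results=10)
    print(f"{col.count()} chunks -> query took {time.perf_counter() - t0:.3f}s")
```

If the query time in a test like this stays flat while the end-to-end time keeps growing, the bottleneck is more likely the reranker or prompt size than the vector search itself.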
| 2025-04-21T14:45:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4favx/rag_retrieval_slows_down_as_knowledge_base_grows/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4favx
| false | null |
t3_1k4favx
|
/r/LocalLLaMA/comments/1k4favx/rag_retrieval_slows_down_as_knowledge_base_grows/
| false | false |
self
| 19 | null |
Dillon - The Ultimate Writing Partner
| 1 |
I'm back with another open source productivity app for Ollama and LM Studio users.
What's the matter? The standard word processor got you pushing too many pixels?
Dillon is no ordinary word processor. It's a high-powered, combat-ready writing machine that will blow your documents away!
In a world where writing tools have gone soft Dillon is here to rescue your writing process with:
**BADASS ORGANIZATION** \- Folders, documents, and research materials organized like a well-executed mission
**RAW FIREPOWER EDITING** \- Rich text formatting that hits hard and never misses
**AI BACKUP TEAM** \- Choose from elite specialists like yours truly, Apollo, Vader, Ripley, and more to watch your back and comment on your writing! Special guest appearance from Quentin himself! Need more backup, create your own with the Character Manager!
**TACTICAL EXPORTS** \- Multiple export formats to deploy your content anywhere. Dillon exports to Open Document, PDF, Plain Text and, for you screenwriters out there, Final Draft format
**SECURE THE BASE** \- Your documents are auto-saved while you work and back ups are stored in your home folder
**STEALTH SEARCH & REPLACE** \- Track down and eliminate problems with surgical precision
Get Dillon from the Github repo today. macOS and Windows builds are ready.
[https://github.com/shokuninstudio/Dillon](https://github.com/shokuninstudio/Dillon)
As Dutch used to say 'Do it! Do it now!'
| 2025-04-21T14:51:00 |
https://www.reddit.com/gallery/1k4ffgd
|
shokuninstudio
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ffgd
| false | null |
t3_1k4ffgd
|
/r/LocalLLaMA/comments/1k4ffgd/dillon_the_ultimate_writing_partner/
| false | false | 1 | null |
|
Noob request: Coding model for specific framework
| 3 |
I'm looking for a pre-trained model to help me with coding, either with fresh knowledge or one that can be updated.
I'm aware Gemini and Claude are the best AI services for coding, but I get frustrated any time I ask them to write for the latest framework version I'm working on. I tried adding the latest official documentation, but in my case it's been worthless (probably my fault for not understanding how it works).
I know the basics of RAG, but before going deeper into that, I want to check if there is any alternative.
| 2025-04-21T14:58:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4fm5q/noob_request_coding_model_for_specific_framework/
|
rodlib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4fm5q
| false | null |
t3_1k4fm5q
|
/r/LocalLLaMA/comments/1k4fm5q/noob_request_coding_model_for_specific_framework/
| false | false |
self
| 3 | null |
The age of AI is upon us and obviously what everyone wants is an LLM-powered unhelpful assistant on every webpage, so I made a Chrome extension
| 48 |
TL;DR: someone at work made a joke about creating a really unhelpful Clippy-like assistant that exclusively gives you weird suggestions, one thing led to another and I ended up making a whole Chrome extension.
It was part me having the habit of transforming throwaway jokes into very convoluted projects, part a ✨ViBeCoDiNg✨ exercise, part growing up in the early days of the internet, where stuff was just dumb/fun for no reason (I blame Johnny Castaway and those damn Macaronis dancing Macarena).
You'll need either Ollama (lets you pick any model, send in page context) or a Gemini API key (likely better/more creative performance, but only reads the URL of the tab).
Full source here: [https://github.com/yankooliveira/toads](https://github.com/yankooliveira/toads)
Enjoy!
| 2025-04-21T15:09:57 |
zanatas
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4fwck
| false | null |
t3_1k4fwck
|
/r/LocalLLaMA/comments/1k4fwck/the_age_of_ai_is_upon_us_and_obviously_what/
| false | false | 48 |
{'enabled': True, 'images': [{'id': 'F5WR7kWGzCVepUHFJJJwIow7MQfQVip4PjDptqy-ZNY', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=108&crop=smart&format=png8&s=c53954a3a1aadeb4873fa51333049290465438ca', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=216&crop=smart&format=png8&s=b58cfc667b81f70c95b2ef6e8d0ac301d364853d', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=320&crop=smart&format=png8&s=c315441559b48052bff9f47b4702dcb89852c847', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=640&crop=smart&format=png8&s=654dc81e873f45c7c8b4bd03d574a80a21fbab93', 'width': 640}], 'source': {'height': 507, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?format=png8&s=78c0f19381c5329d5318d0bb3dc7d13396a74f1e', 'width': 900}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=108&crop=smart&s=5d7faa8a25e13c23f2df6a2a01c5f6489e9e158f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=216&crop=smart&s=49f516ce86912b52cfb42e21d5dec6630f1ef40c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=320&crop=smart&s=25d9eb64765c78c75e36f68ec642340731370299', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=640&crop=smart&s=083ba989dbd3b12db4098ac42b096e6b6d88fee0', 'width': 640}], 'source': {'height': 507, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?s=8fe2b25ed8d86de29f404925698ed3a360ec9629', 'width': 900}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=108&format=mp4&s=1cf2fa6fe53e9557180adac7f0536e83817c7185', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=216&format=mp4&s=fcd7ec3c293ffdcfd4945eb97fe337ffe8382888', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=320&format=mp4&s=594aa76f19e0369262ce6a7bbfd755228fa3d082', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?width=640&format=mp4&s=1b2357857eb97daa4f5c0c1959528c88e58abe69', 'width': 640}], 'source': {'height': 507, 'url': 'https://preview.redd.it/v6dt6jrre7we1.gif?format=mp4&s=4592633ed86e9cd6b7ab3e965b9b6ca12ff0f7cd', 'width': 900}}}}]}
|
||
Niche Model of the Day: Openbuddy 25.2q, QwQ 32B with Quantization Aware Training
| 1 | 2025-04-21T15:14:47 |
https://huggingface.co/OpenBuddy/openbuddy-qwq-32b-v25.2q-200k
|
Downtown-Case-1755
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4g0kj
| false | null |
t3_1k4g0kj
|
/r/LocalLLaMA/comments/1k4g0kj/niche_model_of_the_day_openbuddy_252q_qwq_32b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ia-RhtpSrMi3oh7nYKOCoFIErfLD_44dodtkdGOXCSs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=108&crop=smart&auto=webp&s=62b8e6532e994cf4812a01de158a4f1a4f90e230', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=216&crop=smart&auto=webp&s=748b3a36bc1cb1058a8d5e66fa0b98e158a36680', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=320&crop=smart&auto=webp&s=11a04e6f8c578828aec804d31c556aba73a93f95', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=640&crop=smart&auto=webp&s=7b4d5318c276ebf4381618c4da3d982fca789d28', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=960&crop=smart&auto=webp&s=aa878281a65c31467f213de7fb9f477b7fa8d489', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?width=1080&crop=smart&auto=webp&s=05276ab46687d8e1c8f5b37e55575eec1f3bdee9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EVxaSqb6Sg6R96rxKHrdpTGzwuJ7selLPeXxshW9YX0.jpg?auto=webp&s=bacd4357b23bf58459ccfd4741fe22d483ec7fbb', 'width': 1200}, 'variants': {}}]}
|
||
What is stopping me from fully switching to local
| 1 |
[removed]
| 2025-04-21T15:18:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4g45g/what_is_stopping_me_from_fully_switching_to_local/
|
00quebec
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4g45g
| false | null |
t3_1k4g45g
|
/r/LocalLLaMA/comments/1k4g45g/what_is_stopping_me_from_fully_switching_to_local/
| false | false |
self
| 1 | null |
Trying to add emotion conditioning to Gemma-3
| 18 |
Hey everyone,
I was curious to make an LLM influenced by something more than just the text, so I made a small attempt to add emotional input to the smallest Gemma-3-1B. It's honestly pretty inconsistent, and it was only trained on short sequences from a synthetic dataset with emotion markers.
The idea: alongside the text there is an emotion vector; a trainable projection of it is added to the token embeddings before they go into the transformer layers, and a trainable LoRA is added on top.
Here are some (cherry-picked) results, generated from the same input/seed/temp but with different joy/sadness values. I found them kind of intriguing to share (even though the dataset looks similar).
My question is: has anyone else played around with similar conditioning? Does this kind of approach even make much sense to explore further? I mostly see RP finetunes when searching for existing emotion models.
Curious to hear any thoughts
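In code the conditioning looks roughly like this (a simplified sketch rather than my actual training script; the dimensions are only approximate for Gemma-3-1B and the names are made up):

```
import torch
import torch.nn as nn

class EmotionConditionedEmbedding(nn.Module):
    """Sketch: project an emotion vector and add it to the token embeddings."""
    def __init__(self, token_embedding: nn.Embedding, num_emotions: int = 2):
        super().__init__()
        self.token_embedding = token_embedding  # the (frozen) Gemma embedding table
        hidden = token_embedding.embedding_dim
        self.emotion_proj = nn.Linear(num_emotions, hidden, bias=False)  # trainable

    def forward(self, input_ids: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        # emotion: (batch, num_emotions), e.g. [joy, sadness] each in [0, 1]
        embeds = self.token_embedding(input_ids)                 # (batch, seq, hidden)
        return embeds + self.emotion_proj(emotion).unsqueeze(1)  # broadcast over seq

# toy usage: the result is fed to the transformer as inputs_embeds, with LoRA on top
emb = nn.Embedding(262144, 1152)  # roughly Gemma-3-1B vocab / hidden sizes
cond = EmotionConditionedEmbedding(emb)
ids = torch.randint(0, 262144, (1, 8))
print(cond(ids, torch.tensor([[0.9, 0.1]])).shape)  # torch.Size([1, 8, 1152])
```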
| 2025-04-21T15:37:25 |
https://www.reddit.com/gallery/1k4gl0k
|
FOerlikon
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4gl0k
| false | null |
t3_1k4gl0k
|
/r/LocalLLaMA/comments/1k4gl0k/trying_to_add_emotion_conditioning_to_gemma3/
| false | false | 18 | null |
|
GLM-4 32B is mind blowing
| 1 |
*Processing video 815w430kg7we1...*
Title says it all. I tested out GLM-4 32B Q8 locally using PiDack's llama.cpp pr (https://github.com/ggml-org/llama.cpp/pull/12957/) as ggufs are currently broken.
I am absolutely amazed by this model. It outperforms every single other \~32B local model and even outperforms 72B models. It's literally Gemini 2.5 flash (non reasoning) at home, but better. It's also fantastic with tool calling and works well with cline/aider.
But the thing I like the most is that this model is not afraid to output a lot of code. It does not truncate anything or leave out implementation details. Below I will provide an example where it 0-shot produced 630 lines of code (I had to ask it to continue because the response got cut off at line 550). I have no idea how they trained this, but I am really hoping qwen 3 does something similar.
Below are some examples of 0 shot requests comparing GLM 4 versus gemini 2.5 flash (non-reasoning). GLM is run locally with temp 0.6 and top\_p 0.95 at Q8. Output speed is 22t/s for me on 3x 3090.
**Solar system**
prompt: Create a realistic rendition of our solar system using html, css and js. Make it stunning! reply with one file.
Gemini response:
*Processing video vhn6r9kmi7we1...*
GLM response:
*Processing video ylcl9s4ri7we1...*
**Neural network visualization**
prompt: code me a beautiful animation/visualization in html, css, js of how neural networks learn. Make it stunningly beautiful, yet intuitive to understand. Respond with all the code in 1 file. You can use threejs
Gemini:
*Processing video nkgj1wc1j7we1...*
GLM 4:
*Processing video equidag5j7we1...*
I also did a few other prompts and GLM generally outperformed gemini on most tests. Note that this is only Q8; I imagine full precision might be even a little better.
Please share your experiences or examples if you have tried the model. I haven't tested the reasoning variant yet, but I imagine it's also very good.
| 2025-04-21T15:41:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4go9m/glm4_32b_is_mind_blowing/
|
Timely_Second_6414
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4go9m
| false | null |
t3_1k4go9m
|
/r/LocalLLaMA/comments/1k4go9m/glm4_32b_is_mind_blowing/
| false | false |
self
| 1 | null |
GLM-4 32B is mind blowing
| 577 |
[GLM-4 32B pygame earth simulation, I tried this with gemini 2.5 flash which gave an error as output.](https://reddit.com/link/1k4god7/video/815w430kg7we1/player)
Title says it all. I tested out GLM-4 32B Q8 locally using PiDack's llama.cpp pr (https://github.com/ggml-org/llama.cpp/pull/12957/) as ggufs are currently broken.
I am absolutely amazed by this model. It outperforms every single other \~32B local model and even outperforms 72B models. It's literally Gemini 2.5 flash (non reasoning) at home, but better. It's also fantastic with tool calling and works well with cline/aider.
But the thing I like the most is that this model is not afraid to output a lot of code. It does not truncate anything or leave out implementation details. Below I will provide an example where it 0-shot produced 630 lines of code (I had to ask it to continue because the response got cut off at line 550). I have no idea how they trained this, but I am really hoping qwen 3 does something similar.
Below are some examples of 0 shot requests comparing GLM 4 versus gemini 2.5 flash (non-reasoning). GLM is run locally with temp 0.6 and top\_p 0.95 at Q8. Output speed is 22t/s for me on 3x 3090.
**Solar system**
prompt: Create a realistic rendition of our solar system using html, css and js. Make it stunning! reply with one file.
Gemini response:
[Gemini 2.5 flash: nothing is interactive, planets don't move at all](https://reddit.com/link/1k4god7/video/vhn6r9kmi7we1/player)
GLM response:
[GLM-4-32B response. Sun label and orbit rings are off, but it looks way better and theres way more detail.](https://reddit.com/link/1k4god7/video/ylcl9s4ri7we1/player)
**Neural network visualization**
prompt: code me a beautiful animation/visualization in html, css, js of how neural networks learn. Make it stunningly beautiful, yet intuitive to understand. Respond with all the code in 1 file. You can use threejs
Gemini:
[Gemini response: network looks good, but again nothing moves, no interactions.](https://reddit.com/link/1k4god7/video/nkgj1wc1j7we1/player)
GLM 4:
[GLM 4 response \(one shot 630 lines of code\): It tried to plot data that will be fit on the axes. Although you don't see the fitting process, you can see the neurons firing and changing in size based on their weight. There's also sliders to adjust lr and hidden size. Not perfect, but still better.](https://reddit.com/link/1k4god7/video/equidag5j7we1/player)
I also did a few other prompts and GLM generally outperformed gemini on most tests. Note that this is only Q8; I imagine full precision might be even a little better.
Please share your experiences or examples if you have tried the model. I haven't tested the reasoning variant yet, but I imagine it's also very good.
| 2025-04-21T15:41:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4god7/glm4_32b_is_mind_blowing/
|
Timely_Second_6414
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4god7
| false | null |
t3_1k4god7
|
/r/LocalLLaMA/comments/1k4god7/glm4_32b_is_mind_blowing/
| false | false | 577 |
{'enabled': False, 'images': [{'id': 'jDpz0cC1AessM5xjJz4BWKmCkoWwz3aehMgZo8bR8Dc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=108&crop=smart&auto=webp&s=3a2b7d3d4baf75acdda5fee15c59344beda2582d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=216&crop=smart&auto=webp&s=576f80156b6a38ceca7d3446bcc2c8001fca04c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=320&crop=smart&auto=webp&s=908867a9990b25161a44bed4f30bf32a173d6bfd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=640&crop=smart&auto=webp&s=128cb2de26af5c23c04d8bec8b39d61ce2d36274', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=960&crop=smart&auto=webp&s=350e6ff8b15a6d7ac2741893c10f6014e6484a03', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?width=1080&crop=smart&auto=webp&s=496f1167ba6b73e9735bb56a0fea7d22aa632d4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KGA1Keg1D7oCkdV6UW_ifq_mQe-5jNP1DvhwwJ2Stbs.jpg?auto=webp&s=0cbc2cceec75a3d324031f3a44873443d82951fe', 'width': 1200}, 'variants': {}}]}
|
|
[llama.cpp git] mtmd: merge llava, gemma3 and minicpmv CLI into single llama-mtmd-cli
| 88 | 2025-04-21T15:46:41 |
https://github.com/ggml-org/llama.cpp/commit/84a9bf2fc2875205f0806fbbfbb66dc67204094c
|
FastDecode1
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4gqje
| false | null |
t3_1k4gqje
|
/r/LocalLLaMA/comments/1k4gqje/llamacpp_git_mtmd_merge_llava_gemma3_and_minicpmv/
| false | false | 88 |
{'enabled': False, 'images': [{'id': 'FfiXCCvCjFLMTs0HT8L7QdWMV_p1eC3Ga4dMA0JOSeM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=108&crop=smart&auto=webp&s=ee5695ed0caeb954fa6ebab1eceacccfbb7d06be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=216&crop=smart&auto=webp&s=2176df18bc127fde2a618646bbb2190b419bf3a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=320&crop=smart&auto=webp&s=4f75e8174d1643dec3caf3aaf8aeee5bcb6a9904', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=640&crop=smart&auto=webp&s=c97a934134489f4b1d7e573c0218731d1a8a5d5b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=960&crop=smart&auto=webp&s=d5441a3c331b9a7a5ccde539acf7f1729a08c228', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?width=1080&crop=smart&auto=webp&s=6475b4a625b9320c41a56d7e320a567b4ac1da38', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/chyB3Fwy2UcKLBJMyzabSe7PfMaM2G1ZJw5k660LQOY.jpg?auto=webp&s=86a5aa626abd3db5a01b896704d2a6baf660c6c3', 'width': 1200}, 'variants': {}}]}
|
||
Try Bit_Net on colab!
| 5 |
I created a simple Jupyter notebook on Google Colab for those who would like to test Microsoft’s new BitNet model:
[Link to GitHub](https://github.com/vini-muchulski/Bit-Net-microsoft/tree/main)
| 2025-04-21T16:12:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4h94y/try_bit_net_on_colab/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4h94y
| false | null |
t3_1k4h94y
|
/r/LocalLLaMA/comments/1k4h94y/try_bit_net_on_colab/
| false | false |
self
| 5 | null |
Budget Dual 3090 Build Advice
| 3 |
Okay, I have been all through the posts on here about 3090 builds and a lot of the detailed advice is from 10+ months ago and it seems prices have shifted a lot. I have two 3090's from prior computer builds that I am looking to consolidate into a rig for running a local AI stack and get far better performance than my existing single-3090 rig. I should say that I have no experience with server- or workstation-class hardware (e.g. Xeon or Epyc machines).
I'd like the ability to expand in the future if I can pickup additional cards at relatively cheap prices. I'm also looking for a build that's as compact as possible--if that means expanding in the future will be complicated, then so be it. I'd rather have a compact dual-3090 machine and have to use retimers and an external mounting solution than a massive build with dual-3090's today and additional room for two more 3090's that might never actually get utilized.
From everything I have seen, it seems I can limit the PSU requirement by capping the power usage of the 3090's with little to no performance hit, and that having enough system RAM to match or exceed the total VRAM is preferred. With that in mind, I would usually go to a website like [pcpartpicker.com](http://pcpartpicker.com) and just start adding parts that work together and then order it all, but this is a more specialized situation, so any advice or best practices from folks with experience with similar builds would be appreciated.
And, as I mentioned, I'm trying to keep costs low as I have already procured the highest cost items with the two 3090's.
Thanks in advance for your help and advice here!
| 2025-04-21T16:26:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4hmjx/budget_dual_3090_build_advice/
|
JustTooKrul
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4hmjx
| false | null |
t3_1k4hmjx
|
/r/LocalLLaMA/comments/1k4hmjx/budget_dual_3090_build_advice/
| false | false |
self
| 3 | null |
What LLM would you recommend for OCR?
| 19 |
I am trying to extract text from PDFs that are not really well scanned. As such, Tesseract output has issues. I am wondering if any local LLMs provide more reliable OCR. What model(s) would you recommend I try on my Mac?
| 2025-04-21T16:46:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4i4sn/what_llm_woudl_you_recommend_for_ocr/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4i4sn
| false | null |
t3_1k4i4sn
|
/r/LocalLLaMA/comments/1k4i4sn/what_llm_woudl_you_recommend_for_ocr/
| false | false |
self
| 19 | null |
OOM while finetune LLama on T4 and A4000
| 3 |
Hi everyone,
I’m trying to fine-tune the LLaMA 3.2-1B model for a scientific summarization task, but I keep running into out-of-memory (OOM) issues, even when using a T4 on Colab *and* a rented A4000 GPU. 😓
Initially, I set the max sequence length to 1024, but even reducing it to 512 still causes OOM. So I suspect the problem might be in my code or training configuration.
I’ve included a snippet of the relevant parts below. If anyone has ideas or suggestions, I’d really appreciate your help!
Thanks in advance 🙏
# imports used by the snippet (assuming Unsloth + TRL's SFTTrainer)
import torch
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

def setup_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
lora_alpha=16,
use_gradient_checkpointing="unsloth"
):
print(f"Setting up PEFT model with r={r}, lora_alpha={lora_alpha}")
model = FastLanguageModel.get_peft_model(
model,
r=r,
target_modules=target_modules,
lora_alpha=lora_alpha,
lora_dropout=0, # Optimized setting
bias="none", # Optimized setting
use_gradient_checkpointing=use_gradient_checkpointing,
random_state=3407,
use_rslora=False,
loftq_config=None
)
print("PEFT model setup complete")
return model
def get_training_args(
output_dir="outputs",
per_device_train_batch_size=2,
gradient_accumulation_steps=16,
warmup_steps=5,
learning_rate=2e-4,
num_train_epochs=4,
save_steps=100,
eval_steps=100
):
return TrainingArguments(
per_device_train_batch_size=per_device_train_batch_size,
gradient_accumulation_steps=gradient_accumulation_steps,
warmup_steps=warmup_steps,
learning_rate=learning_rate,
num_train_epochs=num_train_epochs,
fp16=not torch.cuda.is_bf16_supported(),
bf16=torch.cuda.is_bf16_supported(),
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="linear",
seed=3407,
output_dir=output_dir,
report_to="none", # "none" for console logs; use "tensorboard" or "wandb" for visual logging
logging_steps=10,
logging_strategy="steps",
evaluation_strategy="steps",
save_strategy="steps",
save_steps=save_steps,
eval_steps=eval_steps,
load_best_model_at_end=True,
save_only_model=False
)
def setup_trainer(
model,
tokenizer,
train_dataset,
val_dataset,
compute_metrics,
training_args,
max_seq_length=1024
):
trainer = SFTTrainer(
model=model,
processing_class=tokenizer,
train_dataset=train_dataset,
eval_dataset=val_dataset,
dataset_text_field="text", # Full chat-formatted prompt
max_seq_length=max_seq_length,
dataset_num_proc=2,
packing=False,
compute_metrics=compute_metrics,
args=training_args
)
return trainer
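For context, the snippet above doesn't show the model-loading step. A typical Unsloth loading call looks roughly like the sketch below (the settings shown are illustrative, not copied from a specific script); whether load_in_4bit is enabled makes a large difference to memory use:

```
from unsloth import FastLanguageModel

# illustrative loading call; model name and settings are assumptions
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=512,
    dtype=None,          # auto-pick fp16/bf16
    load_in_4bit=True,   # QLoRA-style 4-bit base weights to keep VRAM low
)
```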
| 2025-04-21T17:01:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4iinw/oom_while_finetune_llama_on_t4_and_a4000/
|
ChimSau19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4iinw
| false | null |
t3_1k4iinw
|
/r/LocalLLaMA/comments/1k4iinw/oom_while_finetune_llama_on_t4_and_a4000/
| false | false |
self
| 3 | null |
[OC] Using LLMs for Geocoding? Trying to geographically map the top 1k posts from r/earthporn in a more automated way.
| 1 | 2025-04-21T17:16:36 |
https://github.com/kikkia/earthporn-map
|
Kikkia
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4iwpj
| false | null |
t3_1k4iwpj
|
/r/LocalLLaMA/comments/1k4iwpj/oc_using_llms_for_geocoding_trying_to/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'UH7gm5X1PXidqFv68cl53NyTF-6HvKHnpfYpMCMHruw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=108&crop=smart&auto=webp&s=1e43c2ac57dc1cea6cf8a757db40c80a4dc3151e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=216&crop=smart&auto=webp&s=b5dafeadda8a54dc836c0cddb2e6c36821544bd8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=320&crop=smart&auto=webp&s=cde37eb5820485a841542519bc4951b55d1656e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=640&crop=smart&auto=webp&s=aef35c486befb8a797e4d8e223fffd0102ae271d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=960&crop=smart&auto=webp&s=47b80eb28f8d19a5fe155d3934540114a89fa626', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?width=1080&crop=smart&auto=webp&s=9bdf1cd2d8c8813269d96984958d1644bc612635', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dynQFF7VT1N-MIa7bq3I6AouqYrjXbRbGj8qC2ufDhk.jpg?auto=webp&s=55ab8c17c304719f3f9949094a10b13f65400c86', 'width': 1200}, 'variants': {}}]}
|
||
Llama 3 local install starts hallucinating full conversations on first prompt — anyone experienced this?
| 1 |
[removed]
| 2025-04-21T17:26:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4j5ne/llama_3_local_install_starts_hallucinating_full/
|
Emergency_Phrase_734
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4j5ne
| false | null |
t3_1k4j5ne
|
/r/LocalLLaMA/comments/1k4j5ne/llama_3_local_install_starts_hallucinating_full/
| false | false |
self
| 1 | null |
Don’t Trust This Woman — She Keeps Lying
| 1 |
[removed]
| 2025-04-21T17:31:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ja0k/dont_trust_this_woman_she_keeps_lying/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ja0k
| false | null |
t3_1k4ja0k
|
/r/LocalLLaMA/comments/1k4ja0k/dont_trust_this_woman_she_keeps_lying/
| false | false | 1 | null |
|
Do Not Trust This Woman — she keeps lying
| 1 |
[removed]
| 2025-04-21T17:36:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4jefh/do_not_trust_this_woman_she_keeps_lying/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4jefh
| false | null |
t3_1k4jefh
|
/r/LocalLLaMA/comments/1k4jefh/do_not_trust_this_woman_she_keeps_lying/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Mr8KuptnZLwaDlXDQEHqnWL2Cks2T1SSEcOXIRYSaUE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=108&crop=smart&auto=webp&s=0205dd2bac43ef6be294e4bf361cc588c6de2f7a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=216&crop=smart&auto=webp&s=e7ff663bf0a346d3051c986e75b3f9bf510dfdcd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=320&crop=smart&auto=webp&s=aa0b22092f1f016f5bd73c7390d46623145ece6a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=640&crop=smart&auto=webp&s=8ce58b715ab592dac96b79cdeb76a24f8197ba4a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=960&crop=smart&auto=webp&s=764105c3d9a17cdc96de3d5206d837ada09ef3b4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?width=1080&crop=smart&auto=webp&s=81f8d0282945d91d8df316ba899850385457fc3f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cNPJ9JzkNzXIHL1LzgoSQtIcvM8XYyAEnoMUNgfGmZI.jpg?auto=webp&s=91547cc88e84b7702ce3c7eaa31b22c116e5b5ac', 'width': 1200}, 'variants': {}}]}
|
|
Concerns about repeated false model release announcements
| 1 |
[removed]
| 2025-04-21T17:41:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4jjgc/concerns_about_repeated_false_model_release/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4jjgc
| false | null |
t3_1k4jjgc
|
/r/LocalLLaMA/comments/1k4jjgc/concerns_about_repeated_false_model_release/
| false | false | 1 | null |
|
Recommendation: Open-Source and power-use oriented frontend for local and hosted models?
| 1 |
[removed]
| 2025-04-21T17:49:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4jqof/recommendation_opensource_and_poweruse_oriented/
|
bananalabamab
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4jqof
| false | null |
t3_1k4jqof
|
/r/LocalLLaMA/comments/1k4jqof/recommendation_opensource_and_poweruse_oriented/
| false | false |
self
| 1 | null |
Don’t Trust This Woman — She Keeps Lying
| 1 |
[removed]
| 2025-04-21T17:49:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4jraf/dont_trust_this_woman_she_keeps_lying/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4jraf
| false | null |
t3_1k4jraf
|
/r/LocalLLaMA/comments/1k4jraf/dont_trust_this_woman_she_keeps_lying/
| false | false | 1 | null |
|
Don’t Trust This Woman — She Keeps Lying
| 334 |
[Qwen Official Denial](https://preview.redd.it/j64rtjys78we1.png?width=1080&format=png&auto=webp&s=3dad28af7dddad7a111f8585f646fa0fb66940fc)
[New Deepseek Rumor](https://preview.redd.it/9e6t4n4x78we1.png?width=1192&format=png&auto=webp&s=31e0c2ca15b925b8d06dfede2548d4c375505196)
| 2025-04-21T17:53:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4juhd/dont_trust_this_woman_she_keeps_lying/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4juhd
| false | null |
t3_1k4juhd
|
/r/LocalLLaMA/comments/1k4juhd/dont_trust_this_woman_she_keeps_lying/
| false | false | 334 | null |
|
What exactly happens if a model is too big for your local hardware?
| 1 |
[removed]
| 2025-04-21T18:08:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4k90r/what_exactly_happens_if_a_model_is_to_big_for/
|
Aggressive-Spinach98
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4k90r
| false | null |
t3_1k4k90r
|
/r/LocalLLaMA/comments/1k4k90r/what_exactly_happens_if_a_model_is_to_big_for/
| false | false |
self
| 1 | null |
Please help me understand Chat Templates
| 1 |
[removed]
| 2025-04-21T18:15:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4kfk5/please_help_me_understand_chat_templates/
|
Aggressive-Spinach98
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4kfk5
| false | null |
t3_1k4kfk5
|
/r/LocalLLaMA/comments/1k4kfk5/please_help_me_understand_chat_templates/
| false | false |
self
| 1 | null |
FULL LEAKED VSCode/Copilot Agent System Prompts and Internal Tools
| 3 |
(Latest system prompt: 21/04/2025)
I managed to get the full official VSCode/Copilot Agent system prompts, including its internal tools (JSON). Over 400 lines. Definitely worth taking a look.
You can check it out at: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
| 2025-04-21T18:27:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4kpra/full_leaked_vscodecopilot_agent_system_prompts/
|
Independent-Box-898
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4kpra
| false | null |
t3_1k4kpra
|
/r/LocalLLaMA/comments/1k4kpra/full_leaked_vscodecopilot_agent_system_prompts/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'VRAO83vGVJXtluxed1BFzvixN2qGKC7L32ZeshDZMos', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=108&crop=smart&auto=webp&s=7535407993760a83e2456559bba9b6a4faf41a34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=216&crop=smart&auto=webp&s=078be664d0db45fa24f463e758863b581505768e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=320&crop=smart&auto=webp&s=485e1e24041a13b617a0ce25f37ef8be854e8bb2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=640&crop=smart&auto=webp&s=6cefeb2b2ccf9f0f22d0fb357f074d717a88fe8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=960&crop=smart&auto=webp&s=db69135df5653d3f81084a7f2ecdff5a6b28248d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?width=1080&crop=smart&auto=webp&s=6a9316de1e6a15828b2d3f9c86d508d43cb19473', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z_2KT_C42CNB9NYUk4Vb7arH0jmkQGd9kkEcDlFEanU.jpg?auto=webp&s=57720dc3e1818d3ad7a74004aa13e59d89d5c4d7', 'width': 1200}, 'variants': {}}]}
|
Ollama versus llama.cpp, newbie question
| 1 |
I have only ever used Ollama to run LLMs. What advantages does llama.cpp have over Ollama if you don't want to do any training?
| 2025-04-21T18:30:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4kt8q/ollama_versus_llamacpp_newbie_question/
|
OnceMoreOntoTheBrie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4kt8q
| false | null |
t3_1k4kt8q
|
/r/LocalLLaMA/comments/1k4kt8q/ollama_versus_llamacpp_newbie_question/
| false | false |
self
| 1 | null |
Any LOCAL tool Which will create AUTO captions from video and edit like this ?
| 1 |
[auto captions like this ?](https://preview.redd.it/hqyi35p8f8we1.png?width=1126&format=png&auto=webp&s=7dd8c9120428032796822454469a3bbd61ccd2ba)
What AI model or tool is available that I can use? Or how can I create this locally?
| 2025-04-21T18:33:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4kw51/any_local_tool_which_will_create_auto_captions/
|
jadhavsaurabh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4kw51
| false | null |
t3_1k4kw51
|
/r/LocalLLaMA/comments/1k4kw51/any_local_tool_which_will_create_auto_captions/
| false | false | 1 | null |
|
Advices, tips for advanced coding with LLM's
| 1 |
[removed]
| 2025-04-21T18:41:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4l2lm/advices_tips_for_advanced_coding_with_llms/
|
North-Regular-3256
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4l2lm
| false | null |
t3_1k4l2lm
|
/r/LocalLLaMA/comments/1k4l2lm/advices_tips_for_advanced_coding_with_llms/
| false | false |
self
| 1 | null |
Root ex Machina: Toward a Discursive Paradigm for Agent-Based Systems
| 1 |
[removed]
| 2025-04-21T18:43:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4l53p/root_ex_machina_toward_a_discursive_paradigm_for/
|
kotjiac
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4l53p
| false | null |
t3_1k4l53p
|
/r/LocalLLaMA/comments/1k4l53p/root_ex_machina_toward_a_discursive_paradigm_for/
| false | false |
self
| 1 | null |
Nari Labs 1.6B TTS model for dialog style conversations
| 1 |
[removed]
| 2025-04-21T18:58:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4lhy3/nari_labs_16b_tts_model_for_dialog_style/
|
goodvibes_42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4lhy3
| false | null |
t3_1k4lhy3
|
/r/LocalLLaMA/comments/1k4lhy3/nari_labs_16b_tts_model_for_dialog_style/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'l1Am1zAvsA1u7PHeNoULlqf6mh3-HFPGGhwfGvLXRFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=108&crop=smart&auto=webp&s=2bf2dc413e4e53810155f66ca3d6d010881df435', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=216&crop=smart&auto=webp&s=4f1ee14d61f972c45e85f352a9334dbf60cd79f7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=320&crop=smart&auto=webp&s=cdbb97985ad452da9c797024099170584b83e494', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=640&crop=smart&auto=webp&s=1e2dcb87c0e5bd03cfe8ada701fd599517435c50', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=960&crop=smart&auto=webp&s=038dc73e1574f402d54a657c0d01e3132997a47a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?width=1080&crop=smart&auto=webp&s=fdbe33bf972de69732ae69104c8675ab7a8fb2f4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/UUgr_RcrfztfpqSCitvPJv2vtxjmJPUEDZO1BFQiGvg.jpg?auto=webp&s=5303b476b7e2196acc48bd64f4406ac3a9935e1e', 'width': 1200}, 'variants': {}}]}
|
btw , guys, what happened to LCM (Large Concept Model by Meta)?
| 1 |
[removed]
| 2025-04-21T18:58:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4lijn/btw_guys_what_happened_to_lcm_large_concept_model/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4lijn
| false | null |
t3_1k4lijn
|
/r/LocalLLaMA/comments/1k4lijn/btw_guys_what_happened_to_lcm_large_concept_model/
| false | false |
self
| 1 | null |
Open Codex - OpenAI Codex CLI with Open Source LLMs
| 1 |
[removed]
| 2025-04-21T19:00:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4lklo/open_codex_openai_codex_cli_with_open_source_llms/
|
Unlikely_Yoghurt_121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4lklo
| false | null |
t3_1k4lklo
|
/r/LocalLLaMA/comments/1k4lklo/open_codex_openai_codex_cli_with_open_source_llms/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'bW4DkmW6-u9i861zJHNaWHoQhq0SvTBWIiDd8ZxCczM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=108&crop=smart&auto=webp&s=605a991fea203ae519a862487c1fe7b709400a5a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=216&crop=smart&auto=webp&s=17a8e58a6a82dd3176c1e45908ebca03f01eabc8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=320&crop=smart&auto=webp&s=a2f1116ba6f7142cb46663d103378174bafe030e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=640&crop=smart&auto=webp&s=76519f443e80c6c2940e9036358c752ea207b304', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=960&crop=smart&auto=webp&s=8d27a1c81e46841c8512fe2640273a4d3543de4b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?width=1080&crop=smart&auto=webp&s=f5214ca7fe6be6e0a5cbb38106b6c23eb344fd9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vM79DP6yYajG_zR2MNCJuvM7zsgxnV-MDi1x2V4tQ7A.jpg?auto=webp&s=fc57d3b77c82f65168c87ffce63c8776b1a9e20d', 'width': 1200}, 'variants': {}}]}
|
A new TTS model capable of generating ultra-realistic dialogue
| 741 | 2025-04-21T19:02:56 |
https://github.com/nari-labs/dia
|
aadoop6
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4lmil
| false | null |
t3_1k4lmil
|
/r/LocalLLaMA/comments/1k4lmil/a_new_tts_model_capable_of_generating/
| false | false | 741 |
{'enabled': False, 'images': [{'id': '5tzSDS7Cu7WmpF2f03uv3UBNPUJ-K-LnJ5_5ie1ZNf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=108&crop=smart&auto=webp&s=019cb7fa7296091ebede8514e483a64e95a1a184', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=216&crop=smart&auto=webp&s=a68787f3721fc47035ed60e197d3c9d2657054e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=320&crop=smart&auto=webp&s=40d63ad5efe0501b985befbd7f223ab1cb1e9b29', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=640&crop=smart&auto=webp&s=5f1d15a76610dd0dbe8a436684ca2985b2cc492b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=960&crop=smart&auto=webp&s=688f9a2390cc96f0f5e2d477fac1e8ec610e685b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=1080&crop=smart&auto=webp&s=bafe878e5e7a2cbad7d38a27586a2c5a245e605d', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?auto=webp&s=e632c456edeb4c002c762709120ec9f8b214e10e', 'width': 1280}, 'variants': {}}]}
|
||
KoboldCPP and Silly Tavern, afterwards games run like trash.
| 2 |
I don't know a lot about the inner workings of these things, but even though I completely close out of everything my games will still run like trash after a session of running an AI locally until I restart.
Can someone offer insight into what exactly is going on? Is this expected behavior?
| 2025-04-21T19:14:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4lxao/koboldcpp_and_silly_tavern_afterwards_games_run/
|
TheGuardianFox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4lxao
| false | null |
t3_1k4lxao
|
/r/LocalLLaMA/comments/1k4lxao/koboldcpp_and_silly_tavern_afterwards_games_run/
| false | false |
self
| 2 | null |
Here is the HUGE Ollama main dev contribution to llamacpp :)
| 106 |
Less than 100 lines of code 🤡
If you truly want to support the open-source LLM space, use anything other than Ollama, especially if you have an AMD GPU; you lose way too much performance in text generation using ROCm with Ollama.
https://preview.redd.it/6979nmxwm8we1.png?width=2020&format=png&auto=webp&s=91e49f15bee12d308716de607ce6763b8e1870b3
| 2025-04-21T19:21:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4m3az/here_is_the_huge_ollama_main_dev_contribution_to/
|
Nexter92
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4m3az
| false | null |
t3_1k4m3az
|
/r/LocalLLaMA/comments/1k4m3az/here_is_the_huge_ollama_main_dev_contribution_to/
| false | false | 106 | null |
|
Reasonable to use an LLM model to normalize Json property names?
| 0 |
I'm working on a project involving JSON objects created from arbitrary input by humans. I have normalized property names using regex, but would like to consolidate synonyms. I may have 3 objects containing the same type of data, but that data's key may be abbreviated differently or use a different word in each.
In the good old days, we would just create data schema standards and force people to live within those standards.
I've messed around with Llama 3.3 70B and a couple of other models with no good success so far.
My prompt is:
```
messages=[
{
"role": "system",
"content": "Act like a program that normalizes json property names"
},
{
"role": "user",
"content": json_str
}
],
```
I generally feed it 30 objects in an array which comes out to roughly 35000-45000 tokens.
Any opinions on if this is a bad application of an LLM, what models to try, or how to get started is much appreciated.
One alternate approach I could take is passing it a list of property names rather than expecting it to work directly on the JSON. I just thought it would be really neat if I could find a model that will work directly on JSON objects.
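A rough sketch of what that list-based approach could look like (the chat() function here is a placeholder for whatever client/model call you already use; nothing in it is specific to Llama 3.3):

```
import json

def chat(prompt: str) -> str:
    """Placeholder: swap in your actual LLM call (Ollama, OpenAI-compatible API, etc.)."""
    raise NotImplementedError

def collect_keys(objs: list[dict]) -> set[str]:
    """Gather every property name that appears across the objects."""
    keys: set[str] = set()
    for obj in objs:
        keys.update(obj.keys())
    return keys

def build_mapping(keys: set[str]) -> dict[str, str]:
    """Ask the model only for an original-name -> canonical-name mapping."""
    prompt = (
        "Group these JSON property names by meaning and return only a JSON object "
        "mapping each original name to one canonical snake_case name:\n"
        + json.dumps(sorted(keys))
    )
    return json.loads(chat(prompt))

def normalize(objs: list[dict], mapping: dict[str, str]) -> list[dict]:
    """Apply the mapping deterministically, so the LLM never touches the values."""
    return [{mapping.get(k, k): v for k, v in obj.items()} for obj in objs]
```

That would keep the token count down to just the key names and avoid the model rewriting (or hallucinating) any of the values.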
Thanks for any help!
| 2025-04-21T19:52:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4mv6o/reasonable_to_use_an_llm_model_to_normalize_json/
|
bdeetz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4mv6o
| false | null |
t3_1k4mv6o
|
/r/LocalLLaMA/comments/1k4mv6o/reasonable_to_use_an_llm_model_to_normalize_json/
| false | false |
self
| 0 | null |
"Best" LLM
| 2 |
I was looking at the Ollama list of models and it is a bit of a pain to pull out what each model does. I know there is no "best" LLM at everything. But is there a chart that addresses which LLM performs better in different scenarios? One may be better at image generation, another at understanding documents, and another at answering questions. I am looking at both out-of-the-box models and models with subsequent additional training.
| 2025-04-21T19:56:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4myfo/best_llm/
|
marketlurker
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4myfo
| false | null |
t3_1k4myfo
|
/r/LocalLLaMA/comments/1k4myfo/best_llm/
| false | false |
self
| 2 | null |
What's the best mobile handset for donkeying with LLMs atm?
| 0 |
My trusty Pixel just died. I've been putting off upgrading it because it had the fingerprint sensor on the rear for easy unlocking, which Google discontinued, it seems.
Only requirements are great camera and... shitloads of RAM?
| 2025-04-21T20:08:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4n8rc/whats_the_best_mobile_handset_for_donkeying_with/
|
Secure_Reflection409
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4n8rc
| false | null |
t3_1k4n8rc
|
/r/LocalLLaMA/comments/1k4n8rc/whats_the_best_mobile_handset_for_donkeying_with/
| false | false |
self
| 0 | null |
GMK Evo-X2 versus Framework Desktop versus Mac Studio M3 Ultra
| 2 |
Which would you buy for LocalLLaMA? I'm partial to the GMK Evo-X2 and the Mac Studio M3 Ultra. GMK has a significant discount for preorders, but I've never used GMK products. Apple's Mac Studio is a fine machine that gives you the Mac ecosystem, but is double the price.
I'm thinking of selling my 4090 and buying one of these machines.
| 2025-04-21T20:09:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4na8t/gmk_evox2_versus_framework_desktop_versus_mac/
|
dylan_dev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4na8t
| false | null |
t3_1k4na8t
|
/r/LocalLLaMA/comments/1k4na8t/gmk_evox2_versus_framework_desktop_versus_mac/
| false | false |
self
| 2 | null |
Newbie to Local LLM - help me improve model performance
| 1 |
[removed]
| 2025-04-21T20:18:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4nhux/newbie_to_local_llm_help_me_improve_model/
|
Askmasr_mod
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4nhux
| false | null |
t3_1k4nhux
|
/r/LocalLLaMA/comments/1k4nhux/newbie_to_local_llm_help_me_improve_model/
| false | false |
self
| 1 | null |
OpenWebui question regarding Website presentation
| 1 |
Sometimes (clearly not every time) when creating HTML via Open WebUI I get a live preview window?
What is it called, and how do I ask the model to always include it?
| 2025-04-21T20:25:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4no88/openwebui_question_regarding_website_presentation/
|
Timziito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4no88
| false | null |
t3_1k4no88
|
/r/LocalLLaMA/comments/1k4no88/openwebui_question_regarding_website_presentation/
| false | false |
self
| 1 | null |
ollama problems and alternatives
| 1 |
[removed]
| 2025-04-21T20:36:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ny2a/ollama_problems_and_alternatives/
|
conradkun
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ny2a
| false | null |
t3_1k4ny2a
|
/r/LocalLLaMA/comments/1k4ny2a/ollama_problems_and_alternatives/
| false | false |
self
| 1 | null |
Looking for uncensored Cogito
| 0 |
Has anyone made or used fine-tunes of the Cogito line? Hoping for a decent 8B.
| 2025-04-21T20:36:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ny6m/looking_for_uncensored_cogito/
|
NotTheTitanic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ny6m
| false | null |
t3_1k4ny6m
|
/r/LocalLLaMA/comments/1k4ny6m/looking_for_uncensored_cogito/
| false | false |
self
| 0 | null |
What's my best option to run an LLM if I can't do it locally?
| 1 |
[removed]
| 2025-04-21T20:40:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4o1vp/whats_my_best_option_to_run_an_llm_if_i_cant_do/
|
yetiflask
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4o1vp
| false | null |
t3_1k4o1vp
|
/r/LocalLLaMA/comments/1k4o1vp/whats_my_best_option_to_run_an_llm_if_i_cant_do/
| false | false |
self
| 1 | null |
So I have an ARM VPS. What would be the best way to squeeze all the tokens I can from it?
| 1 |
I have an ARM VPS on Netcup with 8GB of RAM.
Tried a few 1-3B models on it via ollama and they do run fine but I'd like to see if I can squeeze more out of it, especially since I'm using tool calling, which makes it a bit slower in action with my WIP desktop app.
Anything I can do to improve performance with models in this size range? While still having support for tool calling using an OpenAI compatible API?
| 2025-04-21T20:42:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4o34f/so_i_have_an_arm_vps_what_would_be_the_best_way/
|
Devatator_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4o34f
| false | null |
t3_1k4o34f
|
/r/LocalLLaMA/comments/1k4o34f/so_i_have_an_arm_vps_what_would_be_the_best_way/
| false | false |
self
| 1 | null |
I have been looking to host an local MSTeams notetaker... Where are they?!
| 1 |
I see a lot of AI notetaking services but no locally hosted open-source ones. Are you guys keeping a secret from me?
Best regards
Tim
| 2025-04-21T20:58:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4oh9i/i_have_been_looking_to_host_an_local_msteams/
|
Timziito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4oh9i
| false | null |
t3_1k4oh9i
|
/r/LocalLLaMA/comments/1k4oh9i/i_have_been_looking_to_host_an_local_msteams/
| false | false |
self
| 1 | null |
Skywork releases SkyReels-V2 - unlimited duration video generation model
| 160 |
Available in 1.3B and 14B, these models allow us to generate infinite-length videos.
They support both text-to-video (T2V) and image-to-video (I2V) tasks.
According to the benchmarks shared in the model card, SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B.
Paper: https://huggingface.co/papers/2504.13074
Models: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9
All-in-one creator toolkit and guide: https://x.com/ai_for_success/status/1914159352812036463?s=46
| 2025-04-21T21:09:27 |
https://www.reddit.com/gallery/1k4oqpi
|
ResearchCrafty1804
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4oqpi
| false | null |
t3_1k4oqpi
|
/r/LocalLLaMA/comments/1k4oqpi/skywork_releases_skyreelsv2_unlimited_duration/
| false | false | 160 | null |
|
Orpheus-TTS local speech synthesizer in C#
| 24 |
[Repo](https://github.com/TheAjaykrishnanR/TaraSharp)
* No python dependencies
* No LM Studio
* Should work out of the box
Uses LlamaSharp (llama.cpp) backend for inference and TorchSharp for decoding. Requires .NET 9 and Cuda 12.
| 2025-04-21T21:13:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4oufe/orpheustts_local_speech_synthesizer_in_c/
|
ajpy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4oufe
| false | null |
t3_1k4oufe
|
/r/LocalLLaMA/comments/1k4oufe/orpheustts_local_speech_synthesizer_in_c/
| false | false |
self
| 24 | null |
Meta Perception Language Model: Enhancing Understanding of Visual Perception Tasks
| 136 |
Continuing their work on perception, Meta is releasing the Perception Language Model (PLM), an open and reproducible vision-language model designed to tackle challenging visual recognition tasks.
Meta trained PLM using synthetic data generated at scale and open vision-language understanding datasets, without any distillation from external models. They then identified key gaps in existing data for video understanding and collected 2.5 million new, human-labeled fine-grained video QA and spatio-temporal caption samples to fill these gaps, forming the largest dataset of its kind to date.
PLM is trained on this massive dataset, using a combination of human-labeled and synthetic data to create a robust, accurate, and fully reproducible model. PLM offers variants with 1, 3, and 8 billion parameters, making it well suited for fully transparent academic research.
Meta is also sharing a new benchmark, PLM-VideoBench, which focuses on tasks that existing benchmarks miss: fine-grained activity understanding and spatiotemporally grounded reasoning. It is hoped that their open and large-scale dataset, challenging benchmark, and strong models together enable the open source community to build more capable computer vision systems.
[Download the model](https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498)
[Download the code](https://github.com/facebookresearch/perception_models)
[Download the dataset](https://ai.meta.com/datasets/plm-data/)
[Read the paper](https://ai.meta.com/research/publications/perceptionlm-open-access-data-and-models-for-detailed-visual-understanding/)
| 2025-04-21T21:14:47 |
https://v.redd.it/5n4izmqm79we1
|
ninjasaid13
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ov9e
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5n4izmqm79we1/DASHPlaylist.mpd?a=1747862106%2CMmU2ODQxZDU5MTM1M2Q3NDlhMmQ3Y2NkZDQ5ODJmN2IzZWJiYWFlYTg5YmM5OWJmNWNmMWVhNGQxOTk5NmQ4NA%3D%3D&v=1&f=sd', 'duration': 71, 'fallback_url': 'https://v.redd.it/5n4izmqm79we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5n4izmqm79we1/HLSPlaylist.m3u8?a=1747862106%2CNDRjMTk1NjE1ZGE3YTBmYmMzNjQ0Yjg2NzFhZmVlZDRhYjUzY2U3OGRlNjY2ZGIxNmEzMzBjYjU5ODA1ZmIyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5n4izmqm79we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k4ov9e
|
/r/LocalLLaMA/comments/1k4ov9e/meta_perception_language_model_enhancing/
| false | false | 136 |
{'enabled': False, 'images': [{'id': 'dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=108&crop=smart&format=pjpg&auto=webp&s=9e5431dedbb0e84862ea3cd8757a453272231257', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=216&crop=smart&format=pjpg&auto=webp&s=69e5524d4673b719c65fbd2c959a4938258018c4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=320&crop=smart&format=pjpg&auto=webp&s=2e51ae6848fe3fac6e460dc1b7d7f99b64e5b0f9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=640&crop=smart&format=pjpg&auto=webp&s=14e98db12f8d08565182028439b287f6de5415ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=960&crop=smart&format=pjpg&auto=webp&s=1a2190ee8afa8c844b8a417bc60cdcaa9f34cb70', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?width=1080&crop=smart&format=pjpg&auto=webp&s=de881ec76a9e3560ab16e606aad5f5fdfb9314c1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dXlleHRwcW03OXdlMTJfwlZ1QfIuL9mmXOeUB99y5PuEqD7QQlGvCc8SfvTb.png?format=pjpg&auto=webp&s=42bebaf7462ee85a32da596e93706ae32f81d334', 'width': 1920}, 'variants': {}}]}
|
|
True local equivalent of ChatGPT possible?
| 1 |
[removed]
| 2025-04-21T21:18:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4oyk2/true_local_equivalent_of_chatgpt_possible/
|
Existing_Shoe_5594
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4oyk2
| false | null |
t3_1k4oyk2
|
/r/LocalLLaMA/comments/1k4oyk2/true_local_equivalent_of_chatgpt_possible/
| false | false |
self
| 1 | null |
HyperAgent: open-source Browser Automation with LLMs
| 45 |
Excited to show you **HyperAgent**, a wrapper around Playwright that lets you control pages with LLMs.
With HyperAgent, you can run functions like:
await page.ai("search for noise-cancelling headphones under $100 and click the best option");
or
const data = await page.ai(
"Give me the director, release year, and rating for 'The Matrix'",
{
outputSchema: z.object({
director: z.string().describe("The name of the movie director"),
releaseYear: z.number().describe("The year the movie was released"),
rating: z.string().describe("The IMDb rating of the movie"),
}),
}
);
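Putting it together, a full script looks roughly like this. Heads up: `launchAgentPage()` below is just a hypothetical placeholder for the real setup call (see the README for the actual initialization); only the `page.ai(...)` calls and the zod schema are taken from the examples above.

    import { z } from "zod";

    // NOTE: hypothetical stand-in for HyperAgent's real setup call; it just
    // needs to hand back a Playwright-style page that exposes `.ai()`.
    async function launchAgentPage(): Promise<any> {
      throw new Error("replace with the launch call from the README");
    }

    async function main() {
      const page = await launchAgentPage();

      // natural-language action
      await page.ai("search for noise-cancelling headphones under $100 and click the best option");

      // structured extraction validated against a zod schema
      const movie = await page.ai("Give me the director, release year, and rating for 'The Matrix'", {
        outputSchema: z.object({
          director: z.string(),
          releaseYear: z.number(),
          rating: z.string(),
        }),
      });
      console.log(movie);
    }

    main().catch(console.error);

The zod schema doubles as validation and as a way to tell the model exactly what shape of data you want back.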
We built this because web automation is still too brittle and manual: HTML keeps changing, selectors break constantly, and writing full automation scripts is overkill for quick one-offs. Also, and possibly most importantly, AI agents need some way to interact with the web through natural language.
Excited to see what you all think! We are rapidly adding new features, so we'd love any ideas for how we can make this better :)
| 2025-04-21T21:29:40 |
https://github.com/hyperbrowserai/HyperAgent
|
LawfulnessFlat9560
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4p8a1
| false | null |
t3_1k4p8a1
|
/r/LocalLLaMA/comments/1k4p8a1/hyperagent_opensource_browser_automation_with_llms/
| false | false | 45 |
{'enabled': False, 'images': [{'id': 'e1deWjEum1fkQhkRcsPFr7y9bAIieDd75N3gBtIIjec', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=108&crop=smart&auto=webp&s=a0d4babcb09afc4bee6f9e88e0d5be9d391e3101', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=216&crop=smart&auto=webp&s=59e9ccb8a424fe802e51432b3dd6479e3f7dc279', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=320&crop=smart&auto=webp&s=e915c374eec4d7813e24120cc179d0ff1817edc9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=640&crop=smart&auto=webp&s=3492e14aad472fd9e73368bb9ecf897800ec09b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=960&crop=smart&auto=webp&s=0fd3ff70a0575ccb2c244df4a9084814b65a255a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?width=1080&crop=smart&auto=webp&s=e86f150ae7d61deb5b7feb8e1d3f9cca9d48f8c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pG2fXTePiHCTWQNdBrjC6ZrKGhk2WsgN52b6QusON9Q.jpg?auto=webp&s=33bb80f863caa765f48bdf4cfd375b468c361ceb', 'width': 1200}, 'variants': {}}]}
|
|
Which are the current SOTA models at being as offensive as possible?
| 0 |
For a change from being good at code or maths, hey!
I'm looking to experiment around and am curious which models you would judge to be the current best at such tasks.
| 2025-04-21T21:47:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4pnsq/which_are_the_current_sota_models_at_being_as/
|
Extraaltodeus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4pnsq
| false | null |
t3_1k4pnsq
|
/r/LocalLLaMA/comments/1k4pnsq/which_are_the_current_sota_models_at_being_as/
| false | false |
self
| 0 | null |
Why do we keep seeing new models trained from scratch?
| 4 |
When I first read about the concept of foundation models, I thought that soon we'd just have a couple of good foundation models and that all further models would come from extra post-training methods (save for any major algorithmic breakthroughs).
Why is that not the case? Why do we keep seeing new models pop up that have again been trained from scratch with billions or trillions of tokens? Or at least, that's what I believe I'm seeing, but I could be wrong.
| 2025-04-21T22:18:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4qd8o/why_do_we_keep_seeing_new_models_trained_from/
|
live_love_laugh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4qd8o
| false | null |
t3_1k4qd8o
|
/r/LocalLLaMA/comments/1k4qd8o/why_do_we_keep_seeing_new_models_trained_from/
| false | false |
self
| 4 | null |
Copilot Workspace being underestimated...
| 11 |
I've recently been using Copilot Workspace (link in comments), which is in technical preview. I'm not sure why it is not being mentioned more in the dev community. I think this product is the natural evolution of local dev tools such as Cursor, Claude Code, etc.
As we gain more trust in coding agents, it makes sense for them to gain more autonomy and move beyond your local dev environment. They should handle e2e tasks like a co-dev would. Well, Copilot Workspace is heading in that direction and it works super well.
My experience so far is exactly what I'd expect from an AI co-worker. It runs in the cloud, it has access to your repo, and it opens PRs automatically. You have this thing called "sessions" where you follow up on a specific task.
I wonder why this has been in preview since Nov 2024. Has anyone tried it? Thoughts?
| 2025-04-21T22:39:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4quhi/copilot_workspace_being_underestimated/
|
itzco1993
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4quhi
| false | null |
t3_1k4quhi
|
/r/LocalLLaMA/comments/1k4quhi/copilot_workspace_being_underestimated/
| false | false |
self
| 11 | null |