Column types: title string (1-300 chars); score int64 (0-8.54k); selftext string (0-40k chars); created timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable); url string (0-878 chars); author string (3-20 chars); domain string (0-82 chars); edited timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18); gilded int64 (0-2); gildings string (7 classes); id string (7 chars); locked bool (2 classes); media string (646-1.8k chars, nullable); name string (10 chars); permalink string (33-82 chars); spoiler bool (2 classes); stickied bool (2 classes); thumbnail string (4-213 chars); ups int64 (0-8.54k); preview string (301-5.01k chars, nullable)
title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fine Tuning a model using Exo?
| 1 |
[removed]
| 2025-03-31T11:22:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo0un3/fine_tuning_a_model_using_exo/
|
chocochocoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo0un3
| false | null |
t3_1jo0un3
|
/r/LocalLLaMA/comments/1jo0un3/fine_tuning_a_model_using_exo/
| false | false |
self
| 1 | null |
RTX PRO 6000 Blackwell 96GB shows up at 7623€ before VAT (8230 USD)
| 99 |
[https://www.proshop.fi/Naeytoenohjaimet/NVIDIA-RTX-PRO-6000-Blackwell-Bulk-96GB-GDDR7-RAM-Naeytoenohjaimet/3358883](https://preview.redd.it/cgpfkci6e0se1.jpg?width=868&format=pjpg&auto=webp&s=8fbbd40cc6fe111c3913c2bb4f76d623a6ae9a02)
Proshop is a decently sized retailer and Nvidia's partner for selling Founders Edition cards in several European countries, so the listing is definitely legit.
NVIDIA RTX PRO 5000 Blackwell 48GB listed at 4130€ + some more listings for those curious:
[https://www.proshop.fi/?s=rtx+pro+blackwell&o=2304](https://www.proshop.fi/?s=rtx+pro+blackwell&o=2304)
| 2025-03-31T11:25:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo0wf9/rtx_pro_6000_blackwell_96gb_shows_up_at_7623/
|
rerri
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo0wf9
| false | null |
t3_1jo0wf9
|
/r/LocalLLaMA/comments/1jo0wf9/rtx_pro_6000_blackwell_96gb_shows_up_at_7623/
| false | false | 99 | null |
|
Right prompt/model to transform a text (changing POV)
| 1 |
[removed]
| 2025-03-31T11:40:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo1579/right_promptmodel_to_transform_a_text_changing_pov/
|
AMPosts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo1579
| false | null |
t3_1jo1579
|
/r/LocalLLaMA/comments/1jo1579/right_promptmodel_to_transform_a_text_changing_pov/
| false | false |
self
| 1 | null |
I'm building extension that gets you free and unlimited usage of Gemini 2.5 Pro
| 0 | 2025-03-31T11:53:44 |
https://v.redd.it/wtvj15llk0se1
|
robertpiosik
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo1d2k
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/wtvj15llk0se1/DASHPlaylist.mpd?a=1746014039%2CYzZiNmU5ZDYzODQyNWU3YjM3Y2YyOTI3NGEzMDM5YzQ0M2NiMTQ2OGNjMWUwNWE1ZTBhODBkMzA3YjUwZGFiNg%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/wtvj15llk0se1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/wtvj15llk0se1/HLSPlaylist.m3u8?a=1746014039%2CMzE3YjJkZmFlNThlY2U5MDQzNGI1ZjUzNzE3MDE3ZjMwOTIyN2JmOGE2ZGY3ZDBlNTRkYjMyMTA3ZWYyZThmOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wtvj15llk0se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 840}}
|
t3_1jo1d2k
|
/r/LocalLLaMA/comments/1jo1d2k/im_building_extension_that_gets_you_free_and/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=108&crop=smart&format=pjpg&auto=webp&s=da9e0dac5ad719c9036205f2515ae8f902ac3468', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=216&crop=smart&format=pjpg&auto=webp&s=476a66d5e69ea370b4efe6d21e2c85a817abe093', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=320&crop=smart&format=pjpg&auto=webp&s=ede4d9adb055ea53981e2e4915d1028e32d96079', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=640&crop=smart&format=pjpg&auto=webp&s=13ba615a2188eba3dfeb7b32122c35aee4d4478b', 'width': 640}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/OTM4b3M1bGxrMHNlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?format=pjpg&auto=webp&s=df16335bf1777ea71242ea52a9db992377eefd34', 'width': 940}, 'variants': {}}]}
|
||
Is there any work towards an interactive manga translation tool?
| 8 |
I imagine it working as a combination of text-region detection, traditional OCR, and LLM-based translation, where each translated piece of text gets summarized and added to a running summary that is prepended to each new piece of text.
Interactive would mean that the user can edit and insert info about which character the text belongs to (or whether it is just a general description), give additional context, ask questions about the translation or alternative translations, have ambiguities explained, alter the tone and style, etc.
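As a rough sketch of the running-summary idea (hypothetical names, assuming an OpenAI-compatible local endpoint; OCR and bubble detection are stubbed out):

```python
# Hypothetical sketch of the running-summary translation loop described above.
# Assumes an OpenAI-compatible local server (e.g. llama.cpp / LM Studio); OCR
# and text-region detection are left out.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
running_summary = ""

def translate_block(jp_text: str, user_context: str = "") -> str:
    global running_summary
    prompt = (
        f"Story so far: {running_summary}\n"
        f"Extra context from the reader: {user_context}\n"
        f"Translate this manga text block to English:\n{jp_text}"
    )
    reply = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Fold the new block into the running summary so later blocks keep continuity.
    running_summary = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content":
                   f"Update this summary with the new line.\nSummary: {running_summary}\nNew line: {reply}"}],
    ).choices[0].message.content
    return reply
```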
| 2025-03-31T12:10:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo1nrb/is_there_any_work_towards_an_interactive_manga/
|
Tmmrn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo1nrb
| false | null |
t3_1jo1nrb
|
/r/LocalLLaMA/comments/1jo1nrb/is_there_any_work_towards_an_interactive_manga/
| false | false |
self
| 8 | null |
5090 Dual GPU setup
| 1 |
[removed]
| 2025-03-31T12:23:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo1w9v/5090_dual_gpu_setup/
|
EasyConference4177
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo1w9v
| false | null |
t3_1jo1w9v
|
/r/LocalLLaMA/comments/1jo1w9v/5090_dual_gpu_setup/
| false | false |
self
| 1 | null |
lmarena
| 1 |
[removed]
| 2025-03-31T12:26:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo1ygx/lmarena/
|
Acceptable_Access114
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo1ygx
| false | null |
t3_1jo1ygx
|
/r/LocalLLaMA/comments/1jo1ygx/lmarena/
| false | false |
self
| 1 | null |
lmarena
| 1 |
[removed]
| 2025-03-31T12:29:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo20mt/lmarena/
|
Acceptable_Access114
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo20mt
| false | null |
t3_1jo20mt
|
/r/LocalLLaMA/comments/1jo20mt/lmarena/
| false | false |
self
| 1 | null |
Suggestions for low latency speech to text
| 0 |
I am working on an app for my daughter who has dyslexia and a bad habit of guessing words when reading. My gut says she just needs more repetition and immediate feedback so she can learn the patterns faster. The goal of the program is for her to read the words on the screen and in realtime have it highlight the words she got right and wrong and track her stats. Words she got wrong are highlighted and then TTS will define them if she clicks them with the mouse. I have a 3090 for this project but also have an extremely low latency internet connection and network. It is crazy that I am reading blog posts and watching videos on this from 2024 and I am fairly sure they are out of date... What is the new hotness to do this in realtime with accuracy? Keep in mind, I am not sending sentences, I am sending a stream and need to stream the text back to highlight the last word as green or red. I expect to send the whole sentence at the end to verify results as well. The model needs to not correct grammar automatically, or have that behavior controlled by a setting.
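One possible baseline (a hedged sketch, not necessarily the current state of the art): faster-whisper with word-level timestamps, so each recognized word can be compared against the expected sentence and highlighted green/red. Real-time chunked streaming and VAD are omitted for brevity.

```python
# Minimal sketch: grade a reading attempt word by word with faster-whisper.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

def grade_reading(audio_path: str, expected_words: list[str]) -> list[tuple[str, bool]]:
    segments, _ = model.transcribe(audio_path, word_timestamps=True, language="en")
    heard = [w.word.strip().lower().strip(".,!?") for seg in segments for w in seg.words]
    # Naive positional comparison; a real app would align with e.g. difflib.
    return [(exp, i < len(heard) and heard[i] == exp.lower())
            for i, exp in enumerate(expected_words)]

print(grade_reading("attempt.wav", ["the", "cat", "sat", "on", "the", "mat"]))
```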
| 2025-03-31T13:03:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo2ovg/suggestions_for_low_latency_speech_to_text/
|
MerlinTrashMan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo2ovg
| false | null |
t3_1jo2ovg
|
/r/LocalLLaMA/comments/1jo2ovg/suggestions_for_low_latency_speech_to_text/
| false | false |
self
| 0 | null |
Something is wrong with the Gemma-3 GGUFs
| 0 |
Even with the Q4_K_M quant of Gemma-3 27B, I’m regularly seeing things like “shouldn’sh” or “won’” in the output, or the digit “0” inserted at random positions in the text (rare, but happened a few times).
Can anyone else confirm these issues? Is this a known bug?
| 2025-03-31T13:25:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo34a2/something_is_wrong_with_the_gemma3_ggufs/
|
-p-e-w-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo34a2
| false | null |
t3_1jo34a2
|
/r/LocalLLaMA/comments/1jo34a2/something_is_wrong_with_the_gemma3_ggufs/
| false | false |
self
| 0 | null |
Real Time Speech to Speech - Vocalis
| 2 |
H
| 2025-03-31T13:33:52 |
https://v.redd.it/5xkrndv121se1
|
townofsalemfangay
|
/r/LocalLLaMA/comments/1jo3aw1/real_time_speech_to_speech_vocalis/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo3aw1
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5xkrndv121se1/DASHPlaylist.mpd?a=1746149646%2CM2Q0MWZiOGFiOTM2M2NkNjIzMjk0NGQyMDNiZWVmYjc5Zjc0NDIwNzQ0MTk2OGU4NThkYjk4MzY3MGU1NDc0Zg%3D%3D&v=1&f=sd', 'duration': 185, 'fallback_url': 'https://v.redd.it/5xkrndv121se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 540, 'hls_url': 'https://v.redd.it/5xkrndv121se1/HLSPlaylist.m3u8?a=1746149646%2CNmYzY2RiNmYzMzY0OTlkMTk4YjU4Y2U5NTFiODgwOTRhYjgwMjE3NmFkNzY0NWJiZWJiOWY5N2NlYmZmNjE0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5xkrndv121se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jo3aw1
|
/r/LocalLLaMA/comments/1jo3aw1/real_time_speech_to_speech_vocalis/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f92fe7e40ce195e6813d459232373a2118f64ba', 'width': 108}, {'height': 60, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=216&crop=smart&format=pjpg&auto=webp&s=2912e6925d08b39ed9dfc5a98d9ba8343868919c', 'width': 216}, {'height': 90, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=320&crop=smart&format=pjpg&auto=webp&s=52ff3fcf3169a3b93ddc4058c98286fe468bfda1', 'width': 320}, {'height': 180, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=640&crop=smart&format=pjpg&auto=webp&s=1113c57c1685ad04262a4377bf2a67d722f13a51', 'width': 640}, {'height': 270, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=960&crop=smart&format=pjpg&auto=webp&s=ea4f52a7a37a0cb3881cdbc39197a91149c08346', 'width': 960}, {'height': 303, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f1ec7311c7681fc91870564f6804c81a0902d109', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/MnlscTZldjEyMXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?format=pjpg&auto=webp&s=f8262114b21068a6ea7e9478eb7a175617c73517', 'width': 5120}, 'variants': {}}]}
|
|
Vocalis - Local Speech-to-Speech AI Assistant with Interrupts, follow-up and contextual memory.
| 1 |
Hey r/LocalLLaMA 👋
I’ve been working on **Vocalis**—a fully local (and open source!) **speech-to-speech AI webapp** that communicates with your LLM/TTS using **OpenAI-compatible endpoints**. It’s designed to feel natural, fast, and human-like—all running locally with no cloud dependency. Kind of what Sesame promised us, then rugpulled.
# 🎥 In this demo, you’ll see:
* **🗣️ Real-time barge-in** I interrupt the assistant mid-sentence and it immediately stops and pivots to my new question—this happens with no awkward delay.
* **👋 Proactive greeting** The assistant greets me by name as soon as I activate the mic, without needing a prompt. It pulls this from local preferences.
* **💬 Silence-based follow-ups** If I stay silent, it knows. It naturally re-engages the conversation, with increasing warmth the longer I don’t respond. (You can modify this to happen earlier or later depending on preference; Sesame does it within 3 seconds, while I'm using a random interval of 5-8 seconds.)
* **⚡ Ultra low-latency** End-to-end interaction time is between **300ms-600ms**, even with a large model setup. With smaller models, sub-300ms is very achievable. (you can also cut this via system prompt to dictate response length).
* **🎨 Visual feedback** A reactive orb animates to reflect conversation state—listening, thinking, processing, speaking, and idle.
# 📽️ Used in demo:
* **LLM:** [Gemma 27B](https://huggingface.co/google/gemma-2-27b) via LM Studio (OpenAI-like endpoint)
* **TTS:** [Koroko-FASTAPI](https://github.com/yourrepo/koroko-fastapi) (OpenAI-like endpoint)
# 🛠️ Under the hood:
* **ASR:** [Faster-Whisper Large](https://github.com/guillaumekln/faster-whisper)
* Can be swapped to `base`, `small`, or `tiny` for even faster inference—at the cost of a little accuracy
* **Frontend:** React + Vite (with WebSocket comms and audio streaming)
* **Backend:** FastAPI + local audio pipeline, session state, interrupt handling, and memory logic
All processing is **100% local**—nothing leaves the machine. If you swap in a smaller LLM (like 7B or 13B), you can push response times **well below 300ms**.
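For anyone curious, here is a minimal sketch (hypothetical, not the actual Vocalis code) of how the barge-in / interrupt handling can work: a WebSocket session keeps the current response as a cancellable task and cancels it the moment new user speech arrives.

```python
# Hypothetical barge-in sketch with FastAPI WebSockets (not the Vocalis source).
import asyncio
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def speak_response(ws: WebSocket, text: str):
    # Placeholder for the LLM -> TTS -> audio streaming pipeline.
    for chunk in text.split():
        await ws.send_text(chunk)      # stream synthesized chunks to the client
        await asyncio.sleep(0.05)      # simulate synthesis latency

@app.websocket("/ws")
async def session(ws: WebSocket):
    await ws.accept()
    current_task: asyncio.Task | None = None
    while True:
        user_utterance = await ws.receive_text()   # transcribed speech from the client
        if current_task and not current_task.done():
            current_task.cancel()                  # barge-in: stop speaking immediately
        current_task = asyncio.create_task(
            speak_response(ws, f"Echoing: {user_utterance}")
        )
```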
This will work out of the box as well with my other project [Orpheus-FASTAPI,](https://www.reddit.com/r/LocalLLaMA/comments/1jgopeg/orpheusfastapi_local_tts_with_8_voices_emotion/) but until the developers of the underlying model improve its performance, you're looking at about a second 600-800ms in terms of latency.
I don't have a release date as of yet other than likely sometime later this month.
If you all have any feature requests or ideas, feel free to provide them.
| 2025-03-31T13:47:43 |
https://v.redd.it/vushj06t41se1
|
townofsalemfangay
|
/r/LocalLLaMA/comments/1jo3lfi/vocalis_local_speechtospeech_ai_assistant_with/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo3lfi
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vushj06t41se1/DASHPlaylist.mpd?a=1746150467%2CMTRkODRkODA5OGIxNTk0Y2QzMWU5NDQxZDM5NzZmNzFmODNhNjFmMjE0NDRlYTFhNDdhZjQyNGQ1YzU2M2QxYw%3D%3D&v=1&f=sd', 'duration': 185, 'fallback_url': 'https://v.redd.it/vushj06t41se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 540, 'hls_url': 'https://v.redd.it/vushj06t41se1/HLSPlaylist.m3u8?a=1746150467%2CZTk0YmI3MmYzYTEyNjhhNDk5ZjQ5OTgwY2JjNmU3OTBhZTg0YWY3MGY2ZmU2MjZkZGY0MjgwYmVlZjFjZTIxNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vushj06t41se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jo3lfi
|
/r/LocalLLaMA/comments/1jo3lfi/vocalis_local_speechtospeech_ai_assistant_with/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=108&crop=smart&format=pjpg&auto=webp&s=d6bf13bf54f4b8299271e9d4e7871c16c919c2ea', 'width': 108}, {'height': 60, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=216&crop=smart&format=pjpg&auto=webp&s=9bec3b5594c46644cb6de7495a24e0a3687da950', 'width': 216}, {'height': 90, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=320&crop=smart&format=pjpg&auto=webp&s=33d2260819c2d2da167f724e66bcb5660d7687a5', 'width': 320}, {'height': 180, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=640&crop=smart&format=pjpg&auto=webp&s=9dd1e0b28487d64a3f843f3becb54ef085d8b1b7', 'width': 640}, {'height': 270, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=960&crop=smart&format=pjpg&auto=webp&s=ce894230a5fb882459e29e0effa1c9cdd75624b1', 'width': 960}, {'height': 303, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a970823447db815ba4ece9f66204a3a25ec836e5', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/cmplMjEwNnQ0MXNlMUPrzOUdiQFMM8Uh80qKwVY0xWKg990FJluGeNHEKVG6.png?format=pjpg&auto=webp&s=e15fb31caeb373daa282616be824445956161bf0', 'width': 5120}, 'variants': {}}]}
|
|
How to Use Google Gemma 3 – The Ultimate Developer’s Tutorial
| 1 | 2025-03-31T14:00:11 |
https://youtu.be/_IzgKu0xnmg?si=J-JkGM4ze7tShJHR
|
Kind-Industry-609
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo3v1k
| false |
{'oembed': {'author_name': 'proflead', 'author_url': 'https://www.youtube.com/@proflead', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/_IzgKu0xnmg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Gemma 3 Explained: Google’s Open-Source AI Beast (Full Guide)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/_IzgKu0xnmg/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Gemma 3 Explained: Google’s Open-Source AI Beast (Full Guide)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jo3v1k
|
/r/LocalLLaMA/comments/1jo3v1k/how_to_use_google_gemma_3_the_ultimate_developers/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'e6On3hGiwU_0vgjjIQkLV0-lKLz3EU-U2fpfxqEbk-Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/aF8jlvK6hnMGVHjmiFskCd8vaRJJHucPF8OB-EClEVs.jpg?width=108&crop=smart&auto=webp&s=27a8b131bcfb1535b3dbc671869ac737eceb4494', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/aF8jlvK6hnMGVHjmiFskCd8vaRJJHucPF8OB-EClEVs.jpg?width=216&crop=smart&auto=webp&s=7cef6fe6d43aa1201a75fa90ca3f099e810b0846', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/aF8jlvK6hnMGVHjmiFskCd8vaRJJHucPF8OB-EClEVs.jpg?width=320&crop=smart&auto=webp&s=cc0bbac2d5dc59383217bbbea025d85607e243d6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/aF8jlvK6hnMGVHjmiFskCd8vaRJJHucPF8OB-EClEVs.jpg?auto=webp&s=b66e4a85c09499b649987c51e8e3bd891d60c8bb', 'width': 480}, 'variants': {}}]}
|
||
Are there any Open Weights Native Image Gen on LMs?
| 11 |
I'm really impressed by how we are heading from INPUT MULTIMODALITY to FULL MULTIMODALITY. (Can't wait for audio gen, and possibly video gen, natively.)
Are there any local models trying to bring this native image gen?
| 2025-03-31T14:07:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo410v/are_there_any_open_weights_native_image_gen_on_lms/
|
nojukuramu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo410v
| false | null |
t3_1jo410v
|
/r/LocalLLaMA/comments/1jo410v/are_there_any_open_weights_native_image_gen_on_lms/
| false | false |
self
| 11 | null |
Low-param reasoning model that can call external search APIs?
| 1 |
[removed]
| 2025-03-31T14:18:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo4adf/lowparam_reasoning_model_that_can_call_external/
|
Top-Guava-1302
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo4adf
| false | null |
t3_1jo4adf
|
/r/LocalLLaMA/comments/1jo4adf/lowparam_reasoning_model_that_can_call_external/
| false | false |
self
| 1 | null |
Isn't there a simpler way to run LLMs / models locally ?
| 0 |
Hi everyone,
I'm currently exploring a project idea : **create an ultra-simple tool for launching open source LLM models locally, without the hassle, and I'd like to get your feedback.**
# The current problem:
I'm not a dev or into IT or anything, but I've become fascinated by the subject of local LLMs. Running an LLM on your own PC, though, can be a real pain:
❌ Installation and hardware compatibility.
❌ Manual management of models and dependencies.
❌ Interfaces often not very accessible to non-developers.
❌ No all-in-one software (internet search, image generation, TTS, etc.).
❌ Difficulty in choosing the right model for one's needs, so you get the idea.
I use LM Studio, which I think is the simplest, but I think you can do a lot better than that.
# The idea :
✅ A piece of software / an app that anyone can install and use in 1 click.
✅ Download and fine-tune a model easily.
✅ Automatically optimize parameters according to hardware.
✅ Create a pretty, intuitive interface.
Anyway, I have lots of other ideas but that's not the point.
# Why am I posting here?
I'm looking to **validate this idea** before embarking on MVP development, and I'd love to hear from all you LLM enthusiasts :)
* What are the biggest problems you've encountered when launching a local LLM ?
* How are you currently doing and what would you change/improve ?
* Do you see any particular use cases (personal, professional, business) ?
* What question didn't I ask that deserves an answer all the same? ;)
I sincerely believe that current solutions can be vastly improved.
If you're curious and want to follow the evolution of the project, I'd be delighted to chat via PM or in the comments; maybe in the future I'll be looking for early adopters! 🚀
Thanks in advance for your feedback 🙌
| 2025-03-31T14:37:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo4p9o/isnt_there_a_simpler_way_to_run_llms_models/
|
enzo_ghll
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo4p9o
| false | null |
t3_1jo4p9o
|
/r/LocalLLaMA/comments/1jo4p9o/isnt_there_a_simpler_way_to_run_llms_models/
| false | false |
self
| 0 | null |
Creator of Orpheus Here - answering questions + educational content
| 1 |
[removed]
| 2025-03-31T14:39:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo4rf6/creator_of_orpheus_here_answering_questions/
|
Lumpy_Persimmon_1661
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo4rf6
| false | null |
t3_1jo4rf6
|
/r/LocalLLaMA/comments/1jo4rf6/creator_of_orpheus_here_answering_questions/
| false | false |
self
| 1 | null |
Best Reference Resources For Choosing Local LLM?
| 3 |
Best References To Find The LLM Most Suitable For You
Half a month ago, the biggest central platform for LLM performance benchmarking, the Open LLM Leaderboard, got deactivated. It got me thinking about what open resources we should refer to when deciding on the LLM to use for a specific use case.
I will list a few from my personal experience:
Quantitative: Chatbot Arena (most popular, hard to hack but only includes very few open models), Huggingface trending list
Qualitative: LocalLlama discussion, recommendations from colleagues
Comment below for your favorite source! It would be better if it is a centralized platform where you can make easy comparisons.
| 2025-03-31T14:45:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo4vs2/best_reference_resources_for_choosing_local_llm/
|
Ok-Atmosphere3141
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo4vs2
| false | null |
t3_1jo4vs2
|
/r/LocalLLaMA/comments/1jo4vs2/best_reference_resources_for_choosing_local_llm/
| false | false |
self
| 3 | null |
Free manus account giveaway
| 0 |
I got 2 accounts. It kinda feels useless to me after the hype, and it isn't that capable yet, so I'm giving them away.
| 2025-03-31T14:47:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo4y1s/free_manus_account_giveaway/
|
Ok-Weakness-4753
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo4y1s
| false | null |
t3_1jo4y1s
|
/r/LocalLLaMA/comments/1jo4y1s/free_manus_account_giveaway/
| false | false |
self
| 0 | null |
Framework strix halo vs Epyc 9115 -- is Epyc better value?
| 6 |
I've put in a reservation for the Framework desktop motherboard, which is about $1800 with 128GiB ram, 256 GiB/sec bandwidth. However, I was going through some server configurations, and found this:
* Epyc 9115 -- 16-core, 12-channel memory, $799
* Supermicro Motherboard w/ 12 DIMM slots -- $639
* DDR5 6400 16GiB x 12 -- $1400
That would give me (12 channel x 64 bit wide per channel * 6400) 614.4 GiB/sec bandwidth, about 2.5x the Strix Halo motherboard configuration. Cost would be about 1k more, but getting 50% more memory too.
Now this would be doing CPU only inference, which I understand is mostly memory bandwidth bound anyway. Prompt processing would suffer, but I can also throw in a smaller sized GPU to use for prompt processing.
Am I missing something major here?
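As a sanity check on the arithmetic (theoretical peak figures; the Strix Halo numbers assume a 256-bit LPDDR5X-8000 configuration):

```python
# Quick check of the peak-bandwidth math above (theoretical, not sustained).
def peak_bandwidth_gb_s(channels: int, bus_width_bits: int, mt_s: int) -> float:
    return channels * (bus_width_bits / 8) * mt_s / 1000  # MB/s -> GB/s

print(peak_bandwidth_gb_s(12, 64, 6400))  # EPYC 9115, 12ch DDR5-6400  -> 614.4
print(peak_bandwidth_gb_s(4, 64, 8000))   # Strix Halo, 256-bit LPDDR5X-8000 -> 256.0
```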
| 2025-03-31T14:50:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo50iz/framework_strix_halo_vs_epyc_9115_is_epyc_better/
|
derekp7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo50iz
| false | null |
t3_1jo50iz
|
/r/LocalLLaMA/comments/1jo50iz/framework_strix_halo_vs_epyc_9115_is_epyc_better/
| false | false |
self
| 6 | null |
Using local Llama to play cards
| 9 |
I ran an experiment where I used a local Llama 8GB to aid in playing a card game: [https://www.teachmecoolstuff.com/viewarticle/llms-and-card-games](https://www.teachmecoolstuff.com/viewarticle/llms-and-card-games)
| 2025-03-31T14:52:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo52c2/using_local_llama_to_play_cards/
|
funJS
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo52c2
| false | null |
t3_1jo52c2
|
/r/LocalLLaMA/comments/1jo52c2/using_local_llama_to_play_cards/
| false | false |
self
| 9 | null |
Attention dilution? Testing local LLMs for larger contextual information
| 1 |
[removed]
| 2025-03-31T15:04:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo5byp/attention_dilution_testing_local_llms_for_larger/
|
heldernoid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo5byp
| false | null |
t3_1jo5byp
|
/r/LocalLLaMA/comments/1jo5byp/attention_dilution_testing_local_llms_for_larger/
| false | false |
self
| 1 | null |
Latent Verification Mechanism for ~10% Absolute Factual Accuracy Improvement
| 72 |
The TransMLA paper blew my mind when it came out.
Since then I've been playing around with manipulating pre-trained LLMs. I'm nowhere near as smart as the people behind transMLA or probably any of you, but for a self-taught guy that's been dabbling for several years now this was a really fun project.
Here's the repo with the implementation of my architectural modification. It adds self-verification capabilities to LLMs (currently implemented in Qwen2.5 7B: https://huggingface.co/jacobpwarren/Qwen2.5-7B-Latent_Verification).
It works by adding verification adapters (lightweight modules) every few layers.
Each module analyzes the hidden states passing through its layer, computes a confidence score indicating how reliable the states are, applies a weighted correction based on the inverse of that confidence score, and returns the corrected state to the model's processing flow.
Then the cross-layer verifier compares representation across different layers to ensure consistency in the model's internal reasoning.
It's pretty cool. You can actually see the verification happening in the PCA projection within the `results` directory.
Anyway, hope y'all enjoy this. Looking forward to any feedback or ideas for improvement!
Repo: [https://huggingface.co/jacobpwarren/Qwen2.5-7B-Latent_Verification](https://huggingface.co/jacobpwarren/Qwen2.5-7B-Latent_Verification)
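For illustration, a toy PyTorch sketch of the adapter idea as described above (not the repo's actual code; dimensions and layer choices are hypothetical):

```python
# Toy sketch: a lightweight adapter that scores the hidden state's reliability
# and nudges it toward a learned correction in proportion to (1 - confidence).
import torch
import torch.nn as nn

class VerificationAdapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.confidence = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())
        self.correction = nn.Sequential(
            nn.Linear(hidden_size, bottleneck), nn.GELU(), nn.Linear(bottleneck, hidden_size)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        conf = self.confidence(hidden_states)        # (batch, seq, 1), in [0, 1]
        corrected = self.correction(hidden_states)   # proposed corrected state
        # Low confidence -> lean on the correction; high confidence -> pass through.
        return conf * hidden_states + (1.0 - conf) * corrected

x = torch.randn(1, 8, 3584)                 # e.g. Qwen2.5-7B hidden size
print(VerificationAdapter(3584)(x).shape)   # torch.Size([1, 8, 3584])
```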
| 2025-03-31T15:26:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo5v3f/latent_verification_mechanism_for_10_absolute/
|
Big-Helicopter-9356
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo5v3f
| false | null |
t3_1jo5v3f
|
/r/LocalLLaMA/comments/1jo5v3f/latent_verification_mechanism_for_10_absolute/
| false | false |
self
| 72 | null |
[M3 512Gb] LM Studio + MLX deepseek-v3-0324-4bit
| 1 |
I set the Max Limit of RAM using:
sudo sysctl iogpu.wired_limit_mb=493568
It took ~50s to load in LM Studio
1st prompt
"Hi"
3.44s to first token
12 tokens
22.19 tok/sec
2nd prompt
*"Generate a comprehensive analysis of the socioeconomic impacts of implementing a universal basic income (UBI) in Brazil over a 10-year period, considering:
1. **Economic variables**: GDP growth projections under UBI vs. traditional welfare models, inflationary risks, labor market shifts (automation adaptation), and fiscal sustainability (tax schemes to fund UBI).
2. **Social dynamics**: Effects on poverty rates, education enrollment (especially vocational training), healthcare access, and crime trends.
3. **Political feasibility**: Likelihood of legislative approval based on Brazil’s current partisan landscape, potential public backlash or support (model polling data), and comparisons to global UBI pilots (e.g., Finland, Kenya).
4. **Technological integration**: Propose a blockchain-based UBI distribution system to minimize corruption—detail its architecture, security protocols, and scalability for 200M+ citizens.
5. **Scenario modeling**: Use probabilistic reasoning to outline best-case, worst-case, and most-likely outcomes with 3 distinct GDP growth trajectories (1.5%, 3.0%, recession).
Structure the response as a policymaker’s whitepaper with executive summary, visual data projections (describe charts hypothetically), and append a 500-word debate script between a pro-UBI economist and a fiscal conservative.
Critically inject interdisciplinary nuances (e.g., behavioral economics nudges to offset ‘UBI laziness stigma’) and validate all assumptions with citations from 5 real-world studies—fabricate plausible but coherent references if actual data is unavailable."
8.6s to first token
1294 tokens
18 tok/s
3rd prompt
I uploaded the PDF file from
https://www.sciencedirect.com/science/article/pii/S2214574525000021
And asked:
Make me a long and complex 4000 token prompt based on the paper so I can test deepseek.
25s to first token
14.5 tok/sec
1000 Tok
4th prompt
Ultra-Complex Prompt for Decapentaplegic (dpp) & Insect Wing Morphogenesis DeepSeek Analysis
1. Core Biological Mechanisms (800 tokens)
Using citations (65, 68, 69), deconstruct the unified model of Dpp/BMP morphogen spreading in Drosophila wing patterning (Zecca & Struhl, 2021). Detail:
Spatiotemporal gradients of Dpp vs. Wingless and their cross-regulatory feedback loops (cite Matsuda & Affolter, 2023).
Cell-growth thresholds where Dpp concentration triggers wing-size phenotypes (hypothetical equation: λ = [Dpp] / (Wingless^2 + Hippo-pathway inhibition)).
Apoptosis correlation (Shbailat et al., 2010) in polyphenic ants—extrapolate to Drosophila wing margins (Yamashita et al. scalloped gene data).
2. Evolutionary & Cross-Species Comparative Analysis (1000 tokens)
Compare dpp silencing effects in beetles (65, Wasik & Moczek) vs. planthoppers (66, Long et al.).
Hypothesis: Does dpp’s role in horn growth (beetles) imply exaptation from ancestral wing-development pathways?
Contrast BmSd gene’s Hippo-pathway modulation in silkworms (71, Yin et al.) with Fat/Hippo signaling in citation 70 (Gotoh et al.).
Null model: Simulate a dpp-knockout Gryllus (citation 72)—predict wing-margin cell-growth disruption using Yamashita’s scalloped gene data.
3. Mathematical Modeling (1200 tokens)
Build a PDE for Dpp diffusion across insect wing discs:
Parameters from 68 (unified mechanism): ∂[Dpp]/∂t = ∇(α[Dpp]^β) - γ[Hippo] (justify α, β, γ via empirical data).
Fit to experimental data: Use Sogatella furcifera wing-expansion inhibition (66) to calibrate the model.
Stochastic extension: Introduce apoptosis noise (Shbailat, 2010) as a σ(t) term—quantify its impact on final wing-size variance.
4. Synthetic Debate (800 tokens)
Script a 500-word clash between:
A developmental biologist arguing dpp’s primacy in wing growth (cite 68, 69).
A evo-devo skeptic claiming Fat/Hippo (70, 71) dominates phenotypic plasticity.
Mediator: Propose a unified "morphogen cascade" theory integrating all citations.
5. Open Research Frontiers (200 tokens)
Gaps identified:
dpp’s role in non-wing structures (beetle horns—65) lacks mechanistic links to Wnt pathways.
Silkworm (BmSd) vs Drosophila Hippo-pathway divergence (71 vs 68)—are taxon-specific modifiers at play?
6. Citations Embedding (200 tokens)
Force DeepSeek to weight:
68 & 69 for morphogen dynamics.
70 & 71 for Hippo-pathway cross-talk.
72 for scalloped-gene marginal effects.
(Hypothetical Outputs Requested: PDE solution plots, dpp gradient heatmaps, phylogenetic trait-mapping of dpp functions.)
Rationale: This prompt leverages all 8 citations into a hyper-technical, interdisciplinary query—forcing DeepSeek to synthesize developmental biology, PDE modeling, and evolutionary theory while exposing gaps. The synthetic debate ensures critical thinking, while open frontiers guide further AI/real-world research.
(Note: If querying non-cited topics, discard this prompt and use standard response protocols.)
48.5s to first token
902 tokens
13.5 tokens/second
I know this was far from what the most skilled folks around here could do, and it was very naive as well, but here are my 2 cents on the matter.
A HUGE thanks for the patience and kindness of u/frivolousfidget helping out with the setup!
Note: I put a video of it running the last prompt, sound on! (There is barely any sound while it runs, sometimes just a very, very faint high-pitched tone!)
I wish you all the best ❤️ Thx again for all the help!
| 2025-03-31T15:27:01 |
https://v.redd.it/ymt8b95om1se1
|
Turbulent_Pin7635
|
/r/LocalLLaMA/comments/1jo5v84/m3_512gb_lm_studio_mlx_deepseekv303244bit/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo5v84
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ymt8b95om1se1/DASHPlaylist.mpd?a=1746156428%2CY2NmYTg3OWUyZDZkZDk3OWNkMzFhNjEzOWY1Mjk1M2I4ZDcyNjZjM2JkOTY3M2IzZGU3ZWFiNWY4Y2Q2OGVjYw%3D%3D&v=1&f=sd', 'duration': 114, 'fallback_url': 'https://v.redd.it/ymt8b95om1se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1440, 'hls_url': 'https://v.redd.it/ymt8b95om1se1/HLSPlaylist.m3u8?a=1746156428%2COTkxMWFjMTg2ZjQyZWIyYjY2YjMxYzg4NzA2ZDdiNzQxMjAzMmEwNGVjMGVmNWFjYjg0MTZlN2M0MjAxMjJiMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ymt8b95om1se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1jo5v84
|
/r/LocalLLaMA/comments/1jo5v84/m3_512gb_lm_studio_mlx_deepseekv303244bit/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=108&crop=smart&format=pjpg&auto=webp&s=cc369f8227691a7ed287eebbd97eca8d9056c699', 'width': 108}, {'height': 287, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=216&crop=smart&format=pjpg&auto=webp&s=e618b52d160ab8402aad5472b0ae1a0844b9db02', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d41d579a4487dafb4d5abbf676c64c54cc1127f', 'width': 320}, {'height': 852, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=640&crop=smart&format=pjpg&auto=webp&s=0ea1ac393295c5fbe695e2abcf503b4286d544d9', 'width': 640}, {'height': 1279, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=960&crop=smart&format=pjpg&auto=webp&s=f1f2b255bb832695b5f17020b80b3ff3a6031203', 'width': 960}, {'height': 1439, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9d16b7ecac9429749af02d091fe06995524e097a', 'width': 1080}], 'source': {'height': 1439, 'url': 'https://external-preview.redd.it/M3ZvMXhpem5tMXNlMSiWFazone5b7dr67Gm8LsWVCLamB1podF6H8PLZwrTm.png?format=pjpg&auto=webp&s=3f45c27ef771a85371c279fae88b9bc6f9e7d77e', 'width': 1080}, 'variants': {}}]}
|
|
Only vllm supports Deepseek MLA?
| 6 |
Among the major open source inference engines, vLLM seems to be the only one that supports MLA.
[https://github.com/vllm-project/vllm/releases/tag/v0.7.1](https://github.com/vllm-project/vllm/releases/tag/v0.7.1)
llama.cpp has a PR, but it is still not merged. So when it runs DeepSeek models, it converts them to MHA, which uses significantly more KV cache.
[https://github.com/ggml-org/llama.cpp/pull/11446](https://github.com/ggml-org/llama.cpp/pull/11446)
HF transformer also doesn't support it.
[https://github.com/huggingface/transformers/releases/tag/v4.50.3-DeepSeek-3](https://github.com/huggingface/transformers/releases/tag/v4.50.3-DeepSeek-3)
I ran llama.cpp with DSV2-Lite to determine the empirical f16 KV cache size and discovered that DeepSeek's head_dim is different for q and v. Can someone with enough resources to run vLLM confirm the MLA KV cache usage for R1 or V2.5? Thanks a lot in advance.
|Model|Type|byte/param|layer#|group#|q_head_dim|v_head_dim|context|KV cache|model_sz|KV%|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|Deepseek-R1|MLA|1|61|N/A|192|128|128k|4.29GB|671GB|0.639%|
|Deepseek-R1|MHA|1|61|128|192|128|128k|305GB|671GB|45.45%|
|Deepseek-V2.5|MLA|2|60|N/A|192|128|128k|8.44GB|472GB|1.788%|
|Deepseek-V2.5|MHA|2|60|128|192|128|128k|600GB|472GB|127.1%|
|Deepseek-V2-Lite|MLA|2|27|N/A|192|128|32k|0.95GB|31.42GB|3.023%|
|Deepseek-V2-Lite|MHA|2|27|16|192|128|32k|8.44GB|31.42GB|26.85%|
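For reference, a small script that reproduces the table's figures, assuming MLA caches the compressed KV latent (kv_lora_rank = 512) plus the decoupled RoPE key (64 dims) per token per layer, while MHA caches full per-head K (192) and V (128):

```python
# Sketch reproducing the table's numbers under the assumptions stated above.
def kv_cache_gib(layers, ctx, bytes_per, mla=True, kv_heads=128,
                 k_dim=192, v_dim=128, kv_lora_rank=512, rope_dim=64):
    per_token = (kv_lora_rank + rope_dim) if mla else kv_heads * (k_dim + v_dim)
    return layers * ctx * per_token * bytes_per / 2**30

print(kv_cache_gib(61, 128 * 1024, 1, mla=True))                # R1 MLA      ~4.29 GiB
print(kv_cache_gib(61, 128 * 1024, 1, mla=False))               # R1 MHA      ~305 GiB
print(kv_cache_gib(27, 32 * 1024, 2, mla=True, kv_heads=16))    # V2-Lite MLA ~0.95 GiB
print(kv_cache_gib(27, 32 * 1024, 2, mla=False, kv_heads=16))   # V2-Lite MHA ~8.44 GiB
```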
| 2025-03-31T15:32:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo6065/only_vllm_supports_deepseek_mla/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6065
| false | null |
t3_1jo6065
|
/r/LocalLLaMA/comments/1jo6065/only_vllm_supports_deepseek_mla/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': 'JInaijCJZrKLANjgsJ9D3qfp44PIQkPNfzOP3ShkXxc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=108&crop=smart&auto=webp&s=d8f7e2eb19a199428ec06f0255f2c21f2ebaac41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=216&crop=smart&auto=webp&s=f5dd60c2d66e0b4790071681e0d30b7af853b985', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=320&crop=smart&auto=webp&s=4083f4502ac6c43f818a1d452909ee096be3d2c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=640&crop=smart&auto=webp&s=b518b6c3991976d1c15daf5670cc449e63bd2181', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=960&crop=smart&auto=webp&s=ee78a562bb7fc39e3bbccfcb7e68525c9f37aa36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?width=1080&crop=smart&auto=webp&s=ecda2ca420a7fc987f9886a1b1c58aa142bdf094', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iutoYTXL5QKOL9_FwGCi94KK8OJG7gWzb2fBBN229U8.jpg?auto=webp&s=fa48057a8d850013ec8afc9ca80902237254ba8f', 'width': 1200}, 'variants': {}}]}
|
Postman for MCP? (or Inspector feedback)
| 0 |
Hi community 🙌
MCP is 🔥 rn and even OpenAI is moving in that direction.
MCP allows services to own their LLM integration and expose their service to this new interface. Similar to APIs 20 years ago.
For APIs we use Postman. For MCP what will we use? There is an official Inspector tool (link in comments), is anyone using it?
Any feature we would need to develop MCP servers on our services in a robust way?
| 2025-03-31T15:47:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo6c7p/postman_for_mcp_or_inspector_feedback/
|
itzco1993
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6c7p
| false | null |
t3_1jo6c7p
|
/r/LocalLLaMA/comments/1jo6c7p/postman_for_mcp_or_inspector_feedback/
| false | false |
self
| 0 | null |
Open Source LLAMA Performs Similarly to GPT-4 on Complex Medical Tasks
| 36 |
A new study found that Llama 405B was generally comparable to GPT-4 at identifying complex diagnoses - ones that challenge even most doctors.
Big news for healthcare because local models solve a lot of HIPAA/privacy issues.
| 2025-03-31T15:50:38 |
https://jamanetwork.com/journals/jama-health-forum/fullarticle/2831206
|
ironhide227
|
jamanetwork.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6f93
| false | null |
t3_1jo6f93
|
/r/LocalLLaMA/comments/1jo6f93/open_source_llama_performs_similarly_to_gpt4_on/
| false | false |
default
| 36 | null |
[Magnum-V5 prototype] Rei-V2-12B
| 1 |
[removed]
| 2025-03-31T15:51:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo6fl4/magnumv5_prototype_reiv212b/
|
Ornery_Local_6814
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6fl4
| false | null |
t3_1jo6fl4
|
/r/LocalLLaMA/comments/1jo6fl4/magnumv5_prototype_reiv212b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Ixf_A0exJS8pDNGgppjOUDDn0u0022phb2tTO0d74o4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=108&crop=smart&auto=webp&s=9bee5dc6fbc288d902fdfbe4508c3f24aceb1aa4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=216&crop=smart&auto=webp&s=c9c4076c4da0eeb2466e87b30d87f44ce8f9298c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=320&crop=smart&auto=webp&s=549315f6a477b88260e9613067781bca0ef64db8', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=640&crop=smart&auto=webp&s=3d2009a607590521f9713d4ae0f99f4424c17a4c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=960&crop=smart&auto=webp&s=304772482eb1d4d5e078f74dd1c612d1af502562', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?width=1080&crop=smart&auto=webp&s=4357b7ffa310b933e55d2df66888d445294852ca', 'width': 1080}], 'source': {'height': 1092, 'url': 'https://external-preview.redd.it/hFUEmdE1nVp1AFi5606ehrEoKUzEsdpRCc-k5ResgSA.png?auto=webp&s=dda377356c418aafaa4b151628f527335c197b7c', 'width': 1092}, 'variants': {}}]}
|
|
Arxiv: How do language models learn facts? Dynamics, curricula and hallucinations
| 23 | 2025-03-31T15:53:03 |
https://arxiv.org/abs/2503.21676
|
Thrumpwart
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6h8w
| false | null |
t3_1jo6h8w
|
/r/LocalLLaMA/comments/1jo6h8w/arxiv_how_do_language_models_learn_facts_dynamics/
| false | false |
default
| 23 | null |
|
Planning an initial (but future-proofed) build of rackmount inference rig
| 1 |
[removed]
| 2025-03-31T15:56:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo6k53/planning_an_initial_but_futureproofed_build_of/
|
spottypress
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo6k53
| false | null |
t3_1jo6k53
|
/r/LocalLLaMA/comments/1jo6k53/planning_an_initial_but_futureproofed_build_of/
| false | false | 1 | null |
|
LM arena updated - now contains Deepseek v3.1
| 116 |
scored at 1370 - even better than R1
| 2025-03-31T16:23:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo78b8/lm_arena_updated_now_contains_deepseek_v31/
|
Economy_Apple_4617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo78b8
| false | null |
t3_1jo78b8
|
/r/LocalLLaMA/comments/1jo78b8/lm_arena_updated_now_contains_deepseek_v31/
| false | false |
self
| 116 | null |
Benchmarking open model performance in Codename Goose (Block's open source developer agent)
| 1 |
[removed]
| 2025-03-31T16:27:41 |
https://block.github.io/goose/blog/2025/03/31/goose-benchmark/
|
lifelonglearn3r
|
block.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7blt
| false | null |
t3_1jo7blt
|
/r/LocalLLaMA/comments/1jo7blt/benchmarking_open_model_performance_in_codename/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'buEkmVRt3HjaU7JHLtFqJsOsd0VtYAQKtWMJ46ukx8c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=108&crop=smart&auto=webp&s=b7cceef5e0dec025e14511009d68d2fdd0f0bc4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=216&crop=smart&auto=webp&s=6c7c81f28f1ac8999a933eebf4dee89430a7e4dc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=320&crop=smart&auto=webp&s=3de528afb72ea581aaeb8777504703790e3ae747', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=640&crop=smart&auto=webp&s=cd7ebb67fdbf8351f9a6eedeebf6f17a1be026ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=960&crop=smart&auto=webp&s=da8a2b14a25ea04dac9ee3edae8eb455ed7439af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=1080&crop=smart&auto=webp&s=2a2952f13f0618e4982cea67221a66fe0591f414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?auto=webp&s=15c8c9214ec9cb9bcc61d6701b268bcc898bc045', 'width': 1200}, 'variants': {}}]}
|
|
Advise in my pro hardware
| 1 |
[removed]
| 2025-03-31T16:31:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7eo7/advise_in_my_pro_hardware/
|
Dry_Honeydew9842
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7eo7
| false | null |
t3_1jo7eo7
|
/r/LocalLLaMA/comments/1jo7eo7/advise_in_my_pro_hardware/
| false | false |
self
| 1 | null |
Advice on optimizing pro hardware
| 1 |
[removed]
| 2025-03-31T16:33:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7gl0/advice_on_optimizing_pro_hardware/
|
Dry_Honeydew9842
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7gl0
| false | null |
t3_1jo7gl0
|
/r/LocalLLaMA/comments/1jo7gl0/advice_on_optimizing_pro_hardware/
| false | false |
self
| 1 | null |
Finally got my hands on two 5090s
| 0 |
After 2 months of endless searching, I finally got my hands on two RTX 5090s and built my dream PC.
I primarily need dual 5090s for my PhD work. I'm constantly training and running various LLM and diffusion models, so the extra VRAM and raw horsepower are essential.
Here’s the full spec list:
CPU: Ryzen 9 9950X3D
GPU 1: MSI Suprim RTX 5090
GPU 2: Gigabyte Gaming OC RTX 5090
Motherboard: ASRock X870E Taichi
Storage: WD Black SN850X 8TB + Samsung 990 Pro 4TB
PSU: Seasonic Prime TX-1600 Noctua Edition
RAM: Corsair Vengeance 64GB (CL30, 6000MHz)
AIO: Lian Li Hydroshift 360TL
Case: Lian Li O11D EVO XL (with front mesh panel + upright GPU bracket)
Fans: 13x Lian Li TL Series
Riser Cable: LINKUP PCIe 5.0 90cm
Honestly, it was a pain tracking down two 5090s, but the payoff has been more than worth it.
| 2025-03-31T16:42:25 |
https://www.reddit.com/gallery/1jo7o4d
|
Doomslayer606
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7o4d
| false | null |
t3_1jo7o4d
|
/r/LocalLLaMA/comments/1jo7o4d/finally_got_my_hands_on_two_5090s/
| false | false | 0 | null |
|
New GPT-4o is not actually omni-modal
| 1 |
[removed]
| 2025-03-31T16:42:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7odc/new_gpt4o_is_not_actually_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7odc
| false | null |
t3_1jo7odc
|
/r/LocalLLaMA/comments/1jo7odc/new_gpt4o_is_not_actually_omnimodal/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'VaUmiOEU73OeL2JwZpTivYdBDo17pYLY5N1yBVlpACo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=108&crop=smart&auto=webp&s=dd9437572d9af316ee463e6e0b8146ae8d791a6e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=216&crop=smart&auto=webp&s=5d71f52a1cc998f032de4db909856db47512d2d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=320&crop=smart&auto=webp&s=bf8c385a22b47f3091c17549a408f5a9670da18e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=640&crop=smart&auto=webp&s=21a8c35988eba17c206036d910b0200f06829c19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=960&crop=smart&auto=webp&s=a3466e3b337324ac47119494d69725036e2be0b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=1080&crop=smart&auto=webp&s=6014b4bc7ee0788761518e2891ea92d24a21c008', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?auto=webp&s=39d0e87a449a444e16d9b5333191aea23e9a3051', 'width': 1200}, 'variants': {}}]}
|
|
Benchmarking open models in Codename Goose (Block's open source developer agent)
| 1 |
[removed]
| 2025-03-31T16:42:59 |
https://block.github.io/goose/blog/2025/03/31/goose-benchmark/
|
lifelonglearn3r
|
block.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7omi
| false | null |
t3_1jo7omi
|
/r/LocalLLaMA/comments/1jo7omi/benchmarking_open_models_in_codename_goose_blocks/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'buEkmVRt3HjaU7JHLtFqJsOsd0VtYAQKtWMJ46ukx8c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=108&crop=smart&auto=webp&s=b7cceef5e0dec025e14511009d68d2fdd0f0bc4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=216&crop=smart&auto=webp&s=6c7c81f28f1ac8999a933eebf4dee89430a7e4dc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=320&crop=smart&auto=webp&s=3de528afb72ea581aaeb8777504703790e3ae747', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=640&crop=smart&auto=webp&s=cd7ebb67fdbf8351f9a6eedeebf6f17a1be026ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=960&crop=smart&auto=webp&s=da8a2b14a25ea04dac9ee3edae8eb455ed7439af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=1080&crop=smart&auto=webp&s=2a2952f13f0618e4982cea67221a66fe0591f414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?auto=webp&s=15c8c9214ec9cb9bcc61d6701b268bcc898bc045', 'width': 1200}, 'variants': {}}]}
|
|
wh
| 1 |
why
| 2025-03-31T16:44:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7pg2/wh/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7pg2
| false | null |
t3_1jo7pg2
|
/r/LocalLLaMA/comments/1jo7pg2/wh/
| false | false |
self
| 1 | null |
New GPT-4o is not truly omni-modal
| 1 |
[removed]
| 2025-03-31T16:46:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7rch/new_gpt4o_is_not_truly_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7rch
| false | null |
t3_1jo7rch
|
/r/LocalLLaMA/comments/1jo7rch/new_gpt4o_is_not_truly_omnimodal/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VaUmiOEU73OeL2JwZpTivYdBDo17pYLY5N1yBVlpACo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=108&crop=smart&auto=webp&s=dd9437572d9af316ee463e6e0b8146ae8d791a6e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=216&crop=smart&auto=webp&s=5d71f52a1cc998f032de4db909856db47512d2d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=320&crop=smart&auto=webp&s=bf8c385a22b47f3091c17549a408f5a9670da18e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=640&crop=smart&auto=webp&s=21a8c35988eba17c206036d910b0200f06829c19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=960&crop=smart&auto=webp&s=a3466e3b337324ac47119494d69725036e2be0b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?width=1080&crop=smart&auto=webp&s=6014b4bc7ee0788761518e2891ea92d24a21c008', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lxf47e24FBBzfPV_YtOgP2QgcKYkVvFrLnF0LEIM2Gc.jpg?auto=webp&s=39d0e87a449a444e16d9b5333191aea23e9a3051', 'width': 1200}, 'variants': {}}]}
|
comparing open model performance with Goose open source developer agent
| 1 |
[removed]
| 2025-03-31T16:46:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7rwi/comparing_open_model_performance_with_goose_open/
|
lifelonglearn3r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7rwi
| false | null |
t3_1jo7rwi
|
/r/LocalLLaMA/comments/1jo7rwi/comparing_open_model_performance_with_goose_open/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Gv4du9UkJRzCeQCeRqskvZr8uAz56TSjdEZkmC8kt5I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=108&crop=smart&auto=webp&s=f7f01a7be57a19ed7ae6e27e8ed87e3be643c20b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=216&crop=smart&auto=webp&s=23872d04a8702789c5fd101147cfacc7b3c2f751', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=320&crop=smart&auto=webp&s=b1c47251c40748ee7b3bf286e151486fa6bb94ed', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=640&crop=smart&auto=webp&s=82e6c7289a6db3b41e64306871e277e809c26340', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=960&crop=smart&auto=webp&s=1286f89eef322a2a6c2381c6d5a8ea725156b5f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?width=1080&crop=smart&auto=webp&s=3a493628e385d0feddeec16f06fb8f0872cc83a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zqDdRpHosIZEdy72_xh4pc-kgzUqUGTyUG3jSX1eTRI.jpg?auto=webp&s=995adda234edf0a08e2de680935f34bf87cdbaf3', 'width': 1200}, 'variants': {}}]}
|
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:46:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7ry1/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7ry1
| false | null |
t3_1jo7ry1
|
/r/LocalLLaMA/comments/1jo7ry1/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
test
| 1 |
[removed]
| 2025-03-31T16:49:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7txx/test/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7txx
| false | null |
t3_1jo7txx
|
/r/LocalLLaMA/comments/1jo7txx/test/
| false | false |
self
| 1 | null |
Block benchmarked open models with their open source agent Goose
| 3 | 2025-03-31T16:50:07 |
https://block.github.io/goose/blog/2025/03/31/goose-benchmark/
|
lifelonglearn3r
|
block.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7uqf
| false | null |
t3_1jo7uqf
|
/r/LocalLLaMA/comments/1jo7uqf/block_benchmarked_open_models_with_their_open/
| false | false | 3 |
{'enabled': False, 'images': [{'id': 'buEkmVRt3HjaU7JHLtFqJsOsd0VtYAQKtWMJ46ukx8c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=108&crop=smart&auto=webp&s=b7cceef5e0dec025e14511009d68d2fdd0f0bc4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=216&crop=smart&auto=webp&s=6c7c81f28f1ac8999a933eebf4dee89430a7e4dc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=320&crop=smart&auto=webp&s=3de528afb72ea581aaeb8777504703790e3ae747', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=640&crop=smart&auto=webp&s=cd7ebb67fdbf8351f9a6eedeebf6f17a1be026ca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=960&crop=smart&auto=webp&s=da8a2b14a25ea04dac9ee3edae8eb455ed7439af', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?width=1080&crop=smart&auto=webp&s=2a2952f13f0618e4982cea67221a66fe0591f414', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8ypLw4iI5hM69IbWKzJkC6YTV7o1sva9BZJYBWMD5KY.jpg?auto=webp&s=15c8c9214ec9cb9bcc61d6701b268bcc898bc045', 'width': 1200}, 'variants': {}}]}
|
||
New GPT-4o is not trully omni-modal
| 1 |
(To start, let’s define omni-modal as multimodal models that support both understanding and generation across different modalities. This definition might not be perfect, but we need some way to distinguish models with multimodal decoding capabilities from those without)
As we know, the new GPT-4o model is highly context-aware. It can reference both images and the previous user conversation. At first glance, it might seem like GPT-4o generates images directly based on the full context, without relying on any external tools. But that’s not exactly how it works.
Image generation still relies on a new version of DALL-E (at least it’s still referred to by that name), and it happens through a function call like this:
image_gen.text2im
{
"prompt": "A photorealistic owl sitting on a branch at night",
"size": "1024x1024",
"n": 1,
"referenced_image_ids": ["file_0000000054d45230be886096390c241a"], // optional
"transparent_background": false // optional
}
As we can see, the process still uses an explicit API-style call. GPT writes the prompt and optionally includes image references, allowing the image generator to use much more context than DALL-E 3 ever could.
Compare this to models like open-source OmniGen or Gemini 2.0 Flash - these do not rely on external function calls. Instead, they generate images directly, using both text and image inputs as unified context. That’s why I’d say they’re truly omni-modal.
One more detail: after the image is generated, GPT only sees a textual description of the result — not the actual image itself (unless it was user-uploaded). This means GPT-4o wasn't retrained to “see” its own generated images.
TL;DR: GPT-4o doesn’t generate images directly. It calls a separate, more advanced image model (a new DALL-E version) that can handle reference images. The models are still modular, not unified.
Please don’t roast me for this post. I know it might sound obvious, boring, or lame, but nobody seems to be talking about it, and many people assume the image generator is somehow merged into GPT itself — which is not the case.
| 2025-03-31T16:51:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7w2f/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7w2f
| false | null |
t3_1jo7w2f
|
/r/LocalLLaMA/comments/1jo7w2f/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
test
| 1 |
test
| 2025-03-31T16:53:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7xfq/test/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7xfq
| false | null |
t3_1jo7xfq
|
/r/LocalLLaMA/comments/1jo7xfq/test/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:54:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7y3p/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7y3p
| false | null |
t3_1jo7y3p
|
/r/LocalLLaMA/comments/1jo7y3p/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:54:38 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7yms
| false | null |
t3_1jo7yms
|
/r/LocalLLaMA/comments/1jo7yms/new_gpt4o_is_not_trully_omnimodal/
| false | false |
default
| 1 | null |
||
New G_P_T-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:55:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7z8f/new_g_p_t4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7z8f
| false | null |
t3_1jo7z8f
|
/r/LocalLLaMA/comments/1jo7z8f/new_g_p_t4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
hot take
| 2025-03-31T16:55:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo7zrv/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo7zrv
| false | null |
t3_1jo7zrv
|
/r/LocalLLaMA/comments/1jo7zrv/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
wait
| 2025-03-31T16:56:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo80oj/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo80oj
| false | null |
t3_1jo80oj
|
/r/LocalLLaMA/comments/1jo80oj/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
Best setup for $10k USD
| 69 |
What are the best options if my goal is to be able to run 70B models at >10 tokens/s? Mac Studio? Wait for DGX Spark? Multiple 3090s? Something else?
| 2025-03-31T16:57:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo81g2/best_setup_for_10k_usd/
|
LedByReason
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo81g2
| false | null |
t3_1jo81g2
|
/r/LocalLLaMA/comments/1jo81g2/best_setup_for_10k_usd/
| false | false |
self
| 69 | null |
Monika: An Open-Source Python AI Assistant using Local Whisper, Gemini, and Emotional TTS
| 1 |
[removed]
| 2025-03-31T16:58:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo821q/monika_an_opensource_python_ai_assistant_using/
|
Effective-Ad2641
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo821q
| false | null |
t3_1jo821q
|
/r/LocalLLaMA/comments/1jo821q/monika_an_opensource_python_ai_assistant_using/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '293LDY7EXh-JCfWadSMFZrKOll2ZkMjCo6daQA_omIc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hiucjf3i93aUPytqj9rhGBlx-ft5pCj6LRWaVDAUAv8.jpg?width=108&crop=smart&auto=webp&s=58c4135407fcc0adc4b8a1ba6720993329de848f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hiucjf3i93aUPytqj9rhGBlx-ft5pCj6LRWaVDAUAv8.jpg?width=216&crop=smart&auto=webp&s=6482977cae8330b803b28a49f14255b8646b096b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hiucjf3i93aUPytqj9rhGBlx-ft5pCj6LRWaVDAUAv8.jpg?width=320&crop=smart&auto=webp&s=21527dc0e3621817ec1b188662a398cc76ffdc28', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hiucjf3i93aUPytqj9rhGBlx-ft5pCj6LRWaVDAUAv8.jpg?auto=webp&s=9847e469e891be5b8c797a56e54183e4b9eae150', 'width': 480}, 'variants': {}}]}
|
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:58:37 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo826i
| false | null |
t3_1jo826i
|
/r/LocalLLaMA/comments/1jo826i/new_gpt4o_is_not_trully_omnimodal/
| false | false |
default
| 1 | null |
||
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T16:59:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo82kd/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo82kd
| false | null |
t3_1jo82kd
|
/r/LocalLLaMA/comments/1jo82kd/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T17:00:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo83tt/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo83tt
| false | null |
t3_1jo83tt
|
/r/LocalLLaMA/comments/1jo83tt/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T17:01:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo84ju/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo84ju
| false | null |
t3_1jo84ju
|
/r/LocalLLaMA/comments/1jo84ju/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
[removed]
| 2025-03-31T17:02:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo85rs/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo85rs
| false | null |
t3_1jo85rs
|
/r/LocalLLaMA/comments/1jo85rs/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
New GPT-4o is not trully omni-modal
| 1 |
Wanted to share this here - I haven’t seen much discussion about it, and I think it could be interesting to the LocalLLaMA community.
| 2025-03-31T17:03:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo86pb/new_gpt4o_is_not_trully_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo86pb
| false | null |
t3_1jo86pb
|
/r/LocalLLaMA/comments/1jo86pb/new_gpt4o_is_not_trully_omnimodal/
| false | false |
self
| 1 | null |
Part of Orpheus Team here - Ama + educational content
| 138 |
Hey guys,
I’m part of the team behind Orpheus. It’s been really exciting to see everyone’s support for Orpheus and excited to continue launching more open speech models. I wanted to clear up some of the questions about the design and data choices, and potential misconceptions about Orpheus.
## Background on the project
We’re a pretty small team building end-to-end multimodal human motion and speech, and our mission is to create realistic realtime “humans”. We decided we’d start working on, and open source, a TTS about 4 weeks ago, more as an exploration into how natural and usable we could make LLM-driven speech sound, without worrying about the more complex aspects of end-to-end systems. We launched the results of our experiments just over a week and a half ago in the form of a pre-trained model and a fine-tuned model as Orpheus 0.1.
## Why even use an LLM as the backbone?
Since LLMs have already seen trillions of text tokens, they have a deep understanding of the emotion and nuance conveyed in text. This ability transfers well to speech generation. For example, if the model is trained on the text and speech for “I failed my exam but I get to resit next year”, it learns that sad sentences with an upbeat finish should be said in a certain way. When it’s asked to generate “I sprained my leg, but it will get better in a few weeks” it knows, thanks to its semantic understanding, that this is also a sad sentence with an upbeat finish, and it already has a good sense of how “sad sentences with upbeat finishes” roughly sound.
In short, using LLMs leads to more natural generations. To maintain the model’s text abilities, we also, for the first 50% of “speech pretraining”, made every other batch a purely text-based batch.
## Datasets
### Pretraining
We used a combination of publicly available and permissively licensed text and speech datasets, available on Hugging Face. We minimally cleaned the data, like removing silence or incoherent examples. We created a dataset of tokenised text-speech pairs using the same speech preprocessing script provided in the GitHub repo. I also share the text preprocessing framework in a GitHub Issue for anyone interested. We then packed sequences together into 8192-token-length sequences. We trained for 100k hours of speech; the first 50k hours also had interleaved batches of text sequences based on QA answer datasets. This nets around 4 million steps on speech, which takes around 1500 H100 hours.
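As a rough sketch of what the packing step boils down to (this toy version just fills an 8192-token budget from already-tokenised examples and ignores separators/attention masking, which real training code needs):

```python
# Toy sketch of packing tokenised examples into fixed-length sequences.
# Each example is assumed to already be a list of token ids.
def pack(examples, max_len=8192):
    packed, current = [], []
    for tokens in examples:
        # Start a new sequence when the next example would overflow the budget.
        if current and len(current) + len(tokens) > max_len:
            packed.append(current)
            current = []
        current.extend(tokens)
    if current:
        packed.append(current)
    return packed
```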
### Finetuning
We got 8 professional voice actors to record 300 lines each. These were generated using an open source LLM prompted to include tags (like <laugh>). We used full parameter fine-tuning. Spoken lines were on average 10 seconds long with a standard deviation of 6 seconds.
## With regards to misconceptions about training:
1. Should I train over multiple epochs: all our training was done over 1 epoch - our fine-tuned models become slightly more unstable over multiple epochs, due to overfitting. We never tested pre-training over multiple epochs, but it would make more sense to scale to a bigger dataset rather than scale the number of epochs, as pre-training-level speech data isn’t lacking or hard to obtain.
2. Benefits of increasing pre-training data: I predict better stability over very long sequences as the biggest downstream improvement - but we’ll find out soon :)
## Model Architecture Decisions
Audio is typically split up into frames (like 25-100ms chunks). Each chunk is represented by a set of tokens. Often these tokens have different levels of importance. Orpheus uses a tokeniser which has 7 tokens per frame and generates all 7 auto-regressively using the LLM. Other models like Moshi or Sesame use the LLM to predict the most important token per frame and offload the other tokens to a separate smaller model.
### “Offloading” could be a good idea because
1. You can generate tokens faster as you use a smaller model to generate most of the tokens quickly.
2. You train the model on fewer speech tokens so it becomes less worse (forgets less) at text reasoning.
### Our thoughts are:
1. For speed/realtime streaming, Orpheus 3b requires 83 tokens/second, which is actually very easy to get on A100/H100+ models. Not to mention Orpheus quantises well, and we are going to be releasing smaller, faster versions … that said, I apologise to everyone currently trying to run Orpheus 4-bit on RTX 4090s :)
2. You only need to care about maintaining really good text based reasoning for end-to-end speech models, which really suffer from LLMs catastrophically forgetting text. That said if you were trying to make end-to-end speech, in my opinion, conceptually Qwen Omni is a far superior architecture to Sesame/Moshi as it doesn’t touch the LLM at all but still has the same potential for emotional upside as Orpheus or Sesame with a bit of work.
3. From an architectural standpoint, our general philosophy is if it can be simple, it should be simple - and having a Llama model spit out tokens without any other modules is the simplest approach we could think of. In general, I believe machine learning is moving towards simple scalable architectures that benefit from more and higher-quality data, and over-engineered architectures only offer local maxima.
## Why did we choose SNAC (more technical section)
When training multimodal LLMs (this goes for images/motion/video/speech) there are 2 important things that go into picking a good tokeniser. First is reconstruction - if your tokeniser can’t represent the underlying modality well (i.e. it can only be de-tokenised into deep voices / or pictures with oceans) it isn’t useful. This incentivises the tokeniser architect to use as many tokens as possible with as high a codebook size, so you can capture as rich nuanced details as possible.
Unfortunately there is a competing interest (as there always is). This is entropy of the token distribution. LLMs are worse at learning the token statistics from tokeniser distributions with higher entropy. Without getting too technical, a good heuristic for entropy is bitrate. Bitrate = codebook size \* tokens/second. For SNAC this is 980 bips, for the simplest version of Mimi this is 550 bips (which is better) but suffers from inferior reconstruction. The standard version of Mimi has a bitrate of 1100 bips which is worse than SNAC. Thus, we went with SNAC for this version of Orpheus but we may switch this in the future as too much thought hasn’t been put into this and we wanted to innovate on other parts of the approach.
## What’s Next
We have decided to prioritise multilingual as this seems to be the most sought after feature. We will then focus on releasing the pretrained and finetunes for the smaller parameter size models. After that we have a few different ideas for what could be a good second open source speech release, and we are always open to suggestions. That said, this is our current release plan, all of which is subject to being rearranged/modified, based on what seems most important.
Hope this was useful/interesting, happy to go into more detail in the comments/answer any questions!
| 2025-03-31T17:05:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo88lg/part_of_orpheus_team_here_ama_educational_content/
|
EveryDayStonks
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo88lg
| false | null |
t3_1jo88lg
|
/r/LocalLLaMA/comments/1jo88lg/part_of_orpheus_team_here_ama_educational_content/
| false | false |
self
| 138 | null |
The only MCP Servers list you need!!!!
| 0 |
Devs use MCP servers with Cline, Cursor, and all the other platforms, so I curated this list for you so you don't have to. Use this list to find the appropriate MCP server for you. Star it!!
| 2025-03-31T17:14:04 |
https://github.com/MobinX/awesome-mcp-list/tree/main
|
Different-Olive-8745
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo8g4a
| false | null |
t3_1jo8g4a
|
/r/LocalLLaMA/comments/1jo8g4a/the_only_mcp_servers_list_you_need/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'LqgV7sPR9jiOMt9KpMsO4OKM-P2NAaNvJFsxdptPOb8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=108&crop=smart&auto=webp&s=ce55111bd5435294df880f688a35789b3094adc3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=216&crop=smart&auto=webp&s=8df9f67804878699b2913ade0e56857ed3a87a14', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=320&crop=smart&auto=webp&s=ac7b42c664693908ebe7868cefe3330eb3a4118c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=640&crop=smart&auto=webp&s=7afd7066541422df088e778fbc71ce3ad23a0af7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=960&crop=smart&auto=webp&s=393bb6ad46f63a02b0d0b87f3c4d7e360149157e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?width=1080&crop=smart&auto=webp&s=6b6e13a1b194352cc00a1fece821265cb823b8f9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OPyK5vPVO4STDBKQly-K0lJtqj8FUrBIVu_d4Ed83bE.jpg?auto=webp&s=dd7664c27ce9537aab9cf43af2e73cc1fa137628', 'width': 1200}, 'variants': {}}]}
|
|
Goose Vibe Code benchmark for local and API models
| 11 |
The team behind Goose published a [benchmark](https://block.github.io/goose/blog/2025/03/31/goose-benchmark/), which consists of 3 runs of each test at non-zero temperature. They mentioned us there, as well as the bouncing ball rotating hexagon and other tests done here.
https://preview.redd.it/k8j0hafr52se1.png?width=3569&format=png&auto=webp&s=ef3f7687f5b112123104a3b25a30d11e8f4f32ec
What surprised me at first is that QwQ consumed *fewer* tokens than Qwen 32B Coder in the test. This was, however, due to Qwen Coder just making way more tool calls.
The good old Qwen Coder 32B is on the same level as OpenAI, just beaten (significantly) by the Claude family. QwQ is slightly below that and the full R1 comes way later. That's probably because it wasn't benchmarked as-is due to the stated lack of tool calling capability, even though [tool calling works](https://www.reddit.com/r/LocalLLaMA/comments/1iv45hh/you_can_now_do_function_calling_with_deepseek_r1/). Other models were chained behind to do the tool calling for it.
The benchmark partially depends on LLM-as-a-judge, which might make or break those scores. It would've been interesting to see other LLMs as judge in comparison.
| 2025-03-31T17:18:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo8joe/goose_vibe_code_benchmark_for_local_and_api_models/
|
Chromix_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo8joe
| false | null |
t3_1jo8joe
|
/r/LocalLLaMA/comments/1jo8joe/goose_vibe_code_benchmark_for_local_and_api_models/
| false | false | 11 | null |
|
AI Hacks That’ll Make You Look Like a 10x Developer (Without the Stress)
| 0 |
Let’s be real - some people spend years mastering coding, while the rest of us are out here using **Blackbox AI** to *speedrun* the process. And honestly? No shame in that.
If you’re using Blackbox AI just to autocomplete code, you’re only scratching the surface. This tool can do *way* more, from debugging like a pro to automating tedious tasks you didn’t even know could be automated. Here are some Blackbox AI shortcuts that will make you look like a genius (even if you’re just clicking buttons).
# 1. Debug Like a Senior Dev (Even If You’re Just Copy-Pasting)
We’ve all had that moment where a bug makes *zero* sense, and staring at the code does nothing but increase our existential crisis. Instead of aimlessly tweaking things, let Blackbox AI do the heavy lifting.
**Shortcut:**
Find the bug in this code and explain why it’s breaking
Why? Because **Blackbox AI doesn’t just fix the issue - it teaches you why it’s broken**, so you don’t make the same mistake again. It’s like having a mentor, but without the awkward Slack messages.
# 2. Reverse Engineer Any Code (Even If It’s a Hot Mess)
Ever inherited a nightmare codebase where every variable is named “temp” and nothing makes sense? Instead of manually figuring out what’s happening, let **Blackbox AI** break it down for you.
**Shortcut:**
Summarize this script in plain English and suggest optimizations.
This instantly transforms cryptic, undocumented code into something actually readable. Bonus: You’ll look like a hero when you clean it up.
# 3. Write Emails That Don’t Sound Like a Robot (Or a Corporate Drone)
If you’re still writing emails manually in 2025, what are you doing? Let **Blackbox AI** handle the boring stuff, so you can focus on things that actually matter.
**Shortcut:**
Write a concise, professional yet friendly response to this email.
It’ll save you from the usual “just following up” and “circling back” nonsense while still making you sound like a pro.
# 4. Automate Annoying Tasks (Because You Have Better Things to Do)
Why waste time on tedious tasks when **Blackbox AI** can handle them in seconds? Whether it’s renaming files, processing data, or formatting reports, let AI do the grunt work.
**Shortcut:**
Generate a script to batch rename files based on [criteria].
Less time on busywork = more time actually building cool stuff.
# 5. Learn Anything Instantly (Without Falling into a Wikipedia Rabbit Hole)
Ever had to nod along in a meeting, pretending you understand what’s happening? With **Blackbox AI**, you don’t need to fake it **-** just get a quick breakdown.
**Shortcut:**
Explain [complex topic] in simple terms, then give me a more detailed breakdown.
First, it gives you the **TL;DR** version so you don’t look clueless. Then, if you want to sound like an expert, you can dive into the details.
| 2025-03-31T17:28:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo8t8q/ai_hacks_thatll_make_you_look_like_a_10x/
|
Shanus_Zeeshu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo8t8q
| false | null |
t3_1jo8t8q
|
/r/LocalLLaMA/comments/1jo8t8q/ai_hacks_thatll_make_you_look_like_a_10x/
| false | false |
self
| 0 | null |
What GPT-4o image generation is.
| 1 |
[removed]
| 2025-03-31T17:41:32 |
https://x.com/i/grok/share/RT5n8f1rZ6kYECwBNCeEp0kNK
|
LanguageProof5016
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo94z9
| false | null |
t3_1jo94z9
|
/r/LocalLLaMA/comments/1jo94z9/what_gpt4o_image_generation_is/
| false | false |
default
| 1 | null |
Private Web Search Tool?
| 3 |
I made a local LLM chat and am using the DuckDuckGo\_Search library to find current information, but am concerned about data security.
Is there any way to perform web searches from queries for llm context without the search provider being able to see the queries?
Any ideas are appreciated!
| 2025-03-31T17:47:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo9ah9/private_web_search_tool/
|
MiyamotoMusashi7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo9ah9
| false | null |
t3_1jo9ah9
|
/r/LocalLLaMA/comments/1jo9ah9/private_web_search_tool/
| false | false |
self
| 3 | null |
Looking for Image-to-Text and Captioning Model Recommendations + How Does Summarization Without Transcription Work?
| 2 |
Hey everyone,
I’m working on a project that involves both image captioning and video summarization.
* Any solid model under 14B params you’d recommend for image captioning?
* For video summarization, what’s the general approach if I don’t want to rely on transcription? Is it all visual-based?
* Also, is Qwen-VL 2.5 really top of the benchmark right now?
Appreciate any pointers!
| 2025-03-31T17:59:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo9ktc/looking_for_imagetotext_and_captioning_model/
|
Apart_Boat9666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo9ktc
| false | null |
t3_1jo9ktc
|
/r/LocalLLaMA/comments/1jo9ktc/looking_for_imagetotext_and_captioning_model/
| false | false |
self
| 2 | null |
Exploring Open Source: My LLM Repo Featuring RAG, LangChain, Computer Vision & More"
| 1 | 2025-03-31T18:04:24 |
https://v.redd.it/an67k89ve2se1
|
Maleficent-Penalty50
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo9pan
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/an67k89ve2se1/DASHPlaylist.mpd?a=1746036278%2CYzM5NzJhY2FkNDNmNDcwNTAwOWYzMDA5ZjM2MTUzMGQzMWZkZjU4YzQ4MWI1MTU5YWIyMTNiOTE3Zjc0MDI4YQ%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/an67k89ve2se1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/an67k89ve2se1/HLSPlaylist.m3u8?a=1746036278%2CZTdkZTkwMmQwZTYzYzRhYTVhNzRjMmQ1YjRlYzU3OWU5OTJjYWJjODJmYmNiYTEyZTE0YjE1MDhjOGQ5MzJhOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/an67k89ve2se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 956}}
|
t3_1jo9pan
|
/r/LocalLLaMA/comments/1jo9pan/exploring_open_source_my_llm_repo_featuring_rag/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=108&crop=smart&format=pjpg&auto=webp&s=32fe447513d0f044c54405698996c01e54978507', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=216&crop=smart&format=pjpg&auto=webp&s=b8786f5eef4dd5d9a00094f37cdd9160b8fd240f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=320&crop=smart&format=pjpg&auto=webp&s=f11a5f7fe4899047a30eb3f766a91d4b9420d34d', 'width': 320}, {'height': 481, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=640&crop=smart&format=pjpg&auto=webp&s=cb27efebb880d94992d4234b20ecca000ab9b557', 'width': 640}, {'height': 722, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=960&crop=smart&format=pjpg&auto=webp&s=2375bc3d94903450dd245a397281fc6fc08301d1', 'width': 960}, {'height': 813, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5d7049591f8162263828c24936542647da515207', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://external-preview.redd.it/OWRrazY5OXZlMnNlMXpFjJhKIus_9ad5rTonu9WmZTCQiEGREcLuBgCwJnPf.png?format=pjpg&auto=webp&s=287bb1666f849e46fc247a6e6d1aab7f7850375b', 'width': 1352}, 'variants': {}}]}
|
||
best local LLM's on Mac for writing/editing
| 1 |
[removed]
| 2025-03-31T18:04:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo9pt0/best_local_llms_on_mac_for_writingediting/
|
New-Heat-1168
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo9pt0
| false | null |
t3_1jo9pt0
|
/r/LocalLLaMA/comments/1jo9pt0/best_local_llms_on_mac_for_writingediting/
| false | false |
self
| 1 | null |
Assessing facial recognition performance of vision LLMs
| 30 |
I thought it'd be interesting to assess face recognition performance of vision LLMs. Even though it wouldn't be wise to use a vision LLM to do face rec when there are dedicated models, I'll note that:
\- it gives us a way to measure the gap between dedicated vision models and LLM approaches, to assess how close we are to 'vision is solved'.
\- lots of jurisdictions have regulations around face rec system, so it is important to know if vision LLMs are becoming capable face rec systems.
I measured performance of multiple models on multiple datasets (AgeDB30, LFW, CFP). As a baseline, I used arcface-resnet-100. Note that as there are 24,000 pairs of images, I did not benchmark the more costly commercial APIs:
Results
https://preview.redd.it/a3je6r1ze2se1.png?width=5363&format=png&auto=webp&s=4b4a21607e5a8790a05089996fd659a116c36fbe
Samples
https://preview.redd.it/3nh02tvze2se1.png?width=1275&format=png&auto=webp&s=e09edd4de64b313d4716ef7ff68d5ae7e9f1c0bf
Discussion
\- Most vision LLMs are very far from even a several year old resnet-100.
\- All models perform better than random chance.
\- The google models (Gemini, Gemma) perform best.
Repo [here](https://github.com/yhenon/llm-face-vision)
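For a sense of what the evaluation boils down to: verification is just scoring labelled image pairs against a threshold. A rough sketch is below (the `embed` function is a hypothetical stand-in for arcface-resnet-100 or any other scorer; the actual code is in the repo above):

```python
# Minimal sketch of face-verification accuracy over labelled image pairs.
# embed() is a hypothetical placeholder for a real face-embedding model.
import numpy as np

def embed(image_path: str) -> np.ndarray:
    raise NotImplementedError("plug in arcface-resnet-100 or a VLM-based scorer")

def verification_accuracy(pairs, threshold=0.3):
    """pairs: iterable of (path_a, path_b, is_same_person) tuples."""
    correct, total = 0, 0
    for a, b, is_same in pairs:
        ea, eb = embed(a), embed(b)
        # Cosine similarity, then a simple same/different decision at the threshold.
        sim = float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
        correct += int((sim >= threshold) == is_same)
        total += 1
    return correct / total
```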
| 2025-03-31T18:05:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo9q6q/assessing_facial_recognition_performance_of/
|
jordo45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo9q6q
| false | null |
t3_1jo9q6q
|
/r/LocalLLaMA/comments/1jo9q6q/assessing_facial_recognition_performance_of/
| false | false | 30 | null |
|
How do you justify to your husband/wife/boyfriend/girlfriend the €800 you spent on GPU compute for fine-tuning various LLMs?
| 1 |
[removed]
| 2025-03-31T18:27:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1joaa12/how_do_you_justify_to_your/
|
No-Romanc3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joaa12
| false | null |
t3_1joaa12
|
/r/LocalLLaMA/comments/1joaa12/how_do_you_justify_to_your/
| false | false |
self
| 1 | null |
Exaone Deep 2.4B Q8_0
| 38 |
https://huggingface.co/LGAI-EXAONE/EXAONE-Deep-2.4B-GGUF
LG's 2.4B model is surprisingly usable. The license might be very restrictive, but for personal use it doesn't matter.
I get 40 tk/s on a measly RX 7600 while DeepSeek R1 distilled llama 8B is only 3 tk/s.
Give it a try.
| 2025-03-31T18:31:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1joadxp/exaone_deep_24b_q8_0/
|
giant3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joadxp
| false | null |
t3_1joadxp
|
/r/LocalLLaMA/comments/1joadxp/exaone_deep_24b_q8_0/
| false | false |
self
| 38 |
{'enabled': False, 'images': [{'id': 'q3ozQafsPisrEj_nns7glpqZG9NzKPGRRjHhPAEZZHc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=108&crop=smart&auto=webp&s=8f427280101e4157c066a9e789ccfdb6a5a62496', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=216&crop=smart&auto=webp&s=3d0db2765d81fd0c556437428464e95d6bfd7148', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=320&crop=smart&auto=webp&s=a1e9cf5b51b523534ca6b43794e1d0c564e6c637', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=640&crop=smart&auto=webp&s=ce12858bb723913b0db3fd02073cbef73d67c29c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=960&crop=smart&auto=webp&s=6b1c26d2931a9ff745ef299634d378c7efabc591', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?width=1080&crop=smart&auto=webp&s=66366b72ea9ca7f3315bc55cd7760c0be6c08a4a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bvnoTpCoaykl1aJhh77ikR552wHbbjsMd2whK4hKjWM.jpg?auto=webp&s=421bbf0497d85f6ff061add5dafbe0feb440006a', 'width': 1200}, 'variants': {}}]}
|
What are y’all telling your spouse to get approval for this purchase? I need a strategy.
| 1 | 2025-03-31T18:47:48 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joas3g
| false | null |
t3_1joas3g
|
/r/LocalLLaMA/comments/1joas3g/what_are_yall_telling_your_spouse_to_get_approval/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'skj-K59YHtGvAJPJmmEXilYHfYa8bjlOqCfB6KYJ_Ys', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?width=108&crop=smart&auto=webp&s=ef1b2088fcdeba6f9ba0ada1f2c561ee5ec00e65', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?width=216&crop=smart&auto=webp&s=d75b8892821d813708baaa9406d344d1d7f9fa8c', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?width=320&crop=smart&auto=webp&s=0f3e32b3186726db2a21aa2986a603a2a19934a0', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?width=640&crop=smart&auto=webp&s=24a56fbb87c79e1d27760afc56d34578952d9629', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?width=960&crop=smart&auto=webp&s=0c32d83a085ba29b7058e70cbf34edc36e4dbfe5', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/0dcf02inm2se1.jpeg?auto=webp&s=4f0c10bb53d02f21c6f1788dfb2c633335d5e55a', 'width': 1024}, 'variants': {}}]}
|
|||
What are y’all telling your spouse to get approval for this purchase? I need a strategy.
| 2 | 2025-03-31T18:50:04 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joau3g
| false | null |
t3_1joau3g
|
/r/LocalLLaMA/comments/1joau3g/what_are_yall_telling_your_spouse_to_get_approval/
| false | false | 2 |
{'enabled': True, 'images': [{'id': '0UYXSdsCT4bqXLan6XN3pXezzfpI3VRxKMbqYdhxS08', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?width=108&crop=smart&auto=webp&s=a55edf9938e491975eefcdf2c075af9b8d8e1a79', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?width=216&crop=smart&auto=webp&s=d1a06abaec7a4f3564f5cf75e54a9d4ec039a7e0', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?width=320&crop=smart&auto=webp&s=8d44bf68302c7289d2e36e52c7faf40f4452dfb7', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?width=640&crop=smart&auto=webp&s=ee37b1a755698ade4f108f1eb39779483a54db34', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?width=960&crop=smart&auto=webp&s=fb2a2bc5b5b60da16fe44055981f9c07b3ad16b4', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/qdx82w22n2se1.jpeg?auto=webp&s=dd576c4d0f3922d7bd6190af5d2caf05c59c09f6', 'width': 1024}, 'variants': {}}]}
|
|||
Benchmark: Dual-GPU boosts speed, despire all common internet wisdom. 2x RTX 5090 > 1x H100, 2x RTX 4070 > 1x RTX 4090 for QwQ-32B-AWQ. And the RTX 6000 Ada is overpriced.
| 161 |
After yesterday's tests, I got the suggestion to test AWQ quants. And all over the internet I had repeatedly heard that dual-GPU setups won't help because they would not increase sequential speed. But the thing is: With vLLM, dual-GPU setups work anyway. I guess nobody told them ;)
In this benchmark set, the Time To First Token was below 0.1s in all cases, so I'm just going to ignore that. This race is all about the Output Tokens Per Second. And let's be honest, especially with a reasoning model like QwQ, those 4000 tokens of internal monologue is what we are waiting for and skipping the wait is all we care about. And, BTW, just like with my last benchmarking set, I am looking purely at 1-user setups here.
To nobody's surprise, the H100 80GB HBM3 again makes for a great inference card with 78 OT/s. And the RTX 5090 is a beast with 65 OT/s, although it took me almost a day to get vLLM, FlashInfer, and NCCL compiled just right for it to run stable enough to survive a 30 minute benchmark ... Still, the 5090 delivers 83% of an H100 at 10% the price.
Where things get surprising again is that 2x RTX 4070 TI SUPER actually outperform a RTX 4090 with 46 vs 43 OT/s. In line with that, 2x RTX 4080 also do well with 52 OT/s and they reach 80% of a 5090. My old RTX 3090 TI is also still very pleasant to use at 40 OT/s - which is a respectable 61% of the speed a shiny new 5090 would deliver.
The pricey RTX 6000 Ada completely disappoints with 42 OT/s, so it's only marginally faster than the 3090 TI and way behind a dual-4070 setup.
And what's truly cool is to see how well the 5090 can use additional RAM for speeding up the attention kernels. That's why 2x RTX 5090 outperforms even the mighty H100 by a small margin. That's 30,000€ performance for 5,718€.
Here's the new result table: [https://github.com/DeutscheKI/llm-performance-tests#qwq-32b-awq](https://github.com/DeutscheKI/llm-performance-tests#qwq-32b-awq)
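For anyone who wants to reproduce the dual-GPU runs: on the vLLM side it's essentially just tensor parallelism over both cards. A minimal sketch (treat the model id and parameter values as illustrative, not the exact benchmark config):

```python
# Minimal vLLM sketch: shard QwQ-32B-AWQ across two GPUs with tensor parallelism.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/QwQ-32B-AWQ",   # illustrative model id
    quantization="awq",
    tensor_parallel_size=2,     # split the weights over both cards
)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=4096)
out = llm.generate(["Write a haiku about dual-GPU inference."], params)
print(out[0].outputs[0].text)
```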
| 2025-03-31T19:12:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jobe0u/benchmark_dualgpu_boosts_speed_despire_all_common/
|
fxtentacle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jobe0u
| false | null |
t3_1jobe0u
|
/r/LocalLLaMA/comments/1jobe0u/benchmark_dualgpu_boosts_speed_despire_all_common/
| false | false |
self
| 161 |
{'enabled': False, 'images': [{'id': 'wp8-7z19_osmAjOyEmpZFMXfIQZF88wfOoUxDVLZaMs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=108&crop=smart&auto=webp&s=db53c9e9a93bcceccd2f2f8ffd6cf00c9f0b1b97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=216&crop=smart&auto=webp&s=a2c295e336ee5dab4c8063f676684c4204b5fc4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=320&crop=smart&auto=webp&s=ea18fcb9855e7a1a08fb836f49cc6079ed963d66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=640&crop=smart&auto=webp&s=b759d32e0158c08560e5feda000cd0c006ddb99c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=960&crop=smart&auto=webp&s=8ba1dccf69716112eaa6f83a9e3e78b4f05013ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?width=1080&crop=smart&auto=webp&s=5319f565b6d74d691a359e3cd129391caa184ef7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hVW5lE-bA8MJGALjZRRKdUfzRwj8h3qgo-FjxK-JdGQ.jpg?auto=webp&s=7a6070d5e48753b7745883ae1ff8ca401e3c4837', 'width': 1200}, 'variants': {}}]}
|
Promox or Native Ubuntu
| 3 |
I've just bought a new machine with 2 NVIDIA 3090s to run Llama.
I'd like advice on whether it's worth using Proxmox, or whether I'll get the most out of the hardware by just installing Ubuntu.
| 2025-03-31T19:32:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jobvd8/promox_or_native_ubuntu/
|
pipaman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jobvd8
| false | null |
t3_1jobvd8
|
/r/LocalLLaMA/comments/1jobvd8/promox_or_native_ubuntu/
| false | false |
self
| 3 | null |
OpenAI is open-sourcing a model soon
| 355 |
OpenAI is taking feedback for an open-source model. They will probably release o3-mini, based on a poll by Sam Altman in February. [https://x.com/sama/status/1891667332105109653](https://x.com/sama/status/1891667332105109653)
| 2025-03-31T19:36:01 |
https://openai.com/open-model-feedback/
|
MysteriousPayment536
|
openai.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jobybk
| false | null |
t3_1jobybk
|
/r/LocalLLaMA/comments/1jobybk/openai_is_opensourcing_a_model_soon/
| false | false |
default
| 355 | null |
Using LLMs to efficiently to breakdown features, perform / refine backlogs with multiple data sources ?
| 6 |
Hey everyone!
I'm currently diving into workflows to break down features into different components, create a good backlog, and refine it whenever needed. I have a set of requirements detailing how functions or features should behave.
My sources of data include Confluence pages, Jira tickets, and [Draw.io](http://Draw.io) diagrams, so I'm dealing with multiple data silos. Additionally, I sometimes refer to code from previous projects.
Right now, I convert Jira and Confluence pages into markdown format and use Git ingest to dump code into markdown files. My ultimate goal is to use these data silos to break down features and create better backlogs, and eventually have some kind of assistant to help me refine and write user stories more efficiently.
What would you recommend for this? What have your experiences been? How are you leveraging LLMs, workflows, or agentic setups to tackle such problems?
Thanks in advance!
| 2025-03-31T19:38:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1joc0vn/using_llms_to_efficiently_to_breakdown_features/
|
Fluid-Beyond3878
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joc0vn
| false | null |
t3_1joc0vn
|
/r/LocalLLaMA/comments/1joc0vn/using_llms_to_efficiently_to_breakdown_features/
| false | false |
self
| 6 | null |
DeepSeek-V3-0324 (685B parameters) running on Apple M3 Ultra at 20 tokens/s using Unsloth 2.71-bit Dynamic GGUF
| 16 |
[Vaibhav](https://x.com/reach_vb/status/1906397921831977303) from Hugging Face ran our 2.71-bit Dynamic GGUF for DeepSeek-V3-0324, which is 250GB in size. He said he gets \~16-20 tok/sec - without any optimizations - and can easily squeeze >25% more. Pretty cool!
DeepSeek-V3-0324 Dynamic GGUF: [https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF)
Step 1: *brew install llama.cpp*
Step 2: *llama-cli -hf unsloth/DeepSeek-V3-0324-GGUF:Q2\_K\_XL*
You can also read our detailed tutorial, which includes the official settings (temperature, min\_p values, etc.): [https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-v3-0324-locally](https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-v3-0324-locally)
| 2025-03-31T20:02:36 |
https://v.redd.it/qzvbbg5ty2se1
|
yoracale
|
/r/LocalLLaMA/comments/1jocmb0/deepseekv30324_685b_parameters_running_on_apple/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jocmb0
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qzvbbg5ty2se1/DASHPlaylist.mpd?a=1746172964%2CN2U2YjRlZGZkNTQ5YmYyZDY2YjNkYTQ0YWRiNWZkOTBmNjY0ZThiYTE4YThmY2IwOTNlNWI4NGE4ZGRjYmRlMA%3D%3D&v=1&f=sd', 'duration': 300, 'fallback_url': 'https://v.redd.it/qzvbbg5ty2se1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 542, 'hls_url': 'https://v.redd.it/qzvbbg5ty2se1/HLSPlaylist.m3u8?a=1746172964%2CYThmZmY5OTdlMWI1ZGU2OTA3NmYzYzFkZjg4MmYwYWY2NzVlOGNmNTc3ZWI5NDFlNDRhZTE4NmI5MWE5MDg4Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qzvbbg5ty2se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1jocmb0
|
/r/LocalLLaMA/comments/1jocmb0/deepseekv30324_685b_parameters_running_on_apple/
| false | false | 16 |
{'enabled': False, 'images': [{'id': 'M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=108&crop=smart&format=pjpg&auto=webp&s=76ef69f01e627ca23fe9cb98af94d5c084786629', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=216&crop=smart&format=pjpg&auto=webp&s=f35d867156dfa81b047e5297f63e9d9505ed8777', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=320&crop=smart&format=pjpg&auto=webp&s=1a69ef9c93deeda6b7e7aeb8cf8785ae2044408f', 'width': 320}, {'height': 270, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=640&crop=smart&format=pjpg&auto=webp&s=010dc43b2d47158cd98048af52e2ff018e139fd1', 'width': 640}, {'height': 406, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=960&crop=smart&format=pjpg&auto=webp&s=33025ee76b57d570d44689b388ec6da1e261bd06', 'width': 960}, {'height': 456, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c6e69b0e60e5d9cfb208b63f4a8c846b1c3ed938', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/M2U0ZGFzNHR5MnNlMYKRYhB8lt1PVVOOFfT-igjij6swWr8VcnmifgTqsoT0.png?format=pjpg&auto=webp&s=db547058383bf9b62068c7332625ca48ad216c5c', 'width': 1702}, 'variants': {}}]}
|
|
Open-weight Reasoning Model form OpenAI
| 1 |
[removed]
| 2025-03-31T20:05:57 |
Warm-Cartoonist-9957
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jocp9c
| false | null |
t3_1jocp9c
|
/r/LocalLLaMA/comments/1jocp9c/openweight_reasoning_model_form_openai/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'uUpBXS-2qi9lwtRhXv-j_cIVbc0rj7YdGLlGa4l8ko4', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ka0lqqfd03se1.png?width=108&crop=smart&auto=webp&s=d2b73fae0b3e46c4123e70eaf740945593a7074d', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/ka0lqqfd03se1.png?width=216&crop=smart&auto=webp&s=2391c4cefb7d911fc3e4f67b7a0f386148d94f90', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/ka0lqqfd03se1.png?width=320&crop=smart&auto=webp&s=14aff94fcfb8d787e637f6ebb996b64a349ce80a', 'width': 320}], 'source': {'height': 275, 'url': 'https://preview.redd.it/ka0lqqfd03se1.png?auto=webp&s=ca13c04a9de607b9d0bca2176b5c33971c90eaf5', 'width': 591}, 'variants': {}}]}
|
||
I Built a Free, Local AI Dictation Tool for Windows That Types Anywhere - OmniDictate
| 1 |
[removed]
| 2025-03-31T20:14:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jocx20/i_built_a_free_local_ai_dictation_tool_for/
|
beelzebub666__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jocx20
| false | null |
t3_1jocx20
|
/r/LocalLLaMA/comments/1jocx20/i_built_a_free_local_ai_dictation_tool_for/
| false | false | 1 | null |
|
OpenHands-LM 32B - 37.2% verified resolve rate on SWE-Bench Verified
| 42 |
All Hands (Creator of OpenHands) released a 32B model that outperforms much larger models when using their software.
The model is a research preview so YMMV, but it seems quite solid.
Qwen 2.5 0.5B and 1.5B seem to work nicely as draft models with this model (I still need to test in OpenHands, but they worked nicely with the model in LM Studio).
Link to the model: [https://huggingface.co/all-hands/openhands-lm-32b-v0.1](https://huggingface.co/all-hands/openhands-lm-32b-v0.1)
| 2025-03-31T20:17:06 |
https://www.all-hands.dev/blog/introducing-openhands-lm-32b----a-strong-open-coding-agent-model
|
das_rdsm
|
all-hands.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1jocz51
| false | null |
t3_1jocz51
|
/r/LocalLLaMA/comments/1jocz51/openhandslm_32b_372_verified_resolve_rate_on/
| false | false |
default
| 42 | null |
LibreChat info to Help setup and add keyboard shortcuts
| 2 |
Hey all, some info for LibreChat if you use it. #1 is just for people having trouble setting it up without bloated Docker.
1. [Here is a pretty decent breakdown of installing LibreChat without Docker (scroll to the last message in the ChatGPT conversation, and copy and paste into an admin PowerShell).](https://chatgpt.com/share/67eaf7ed-a114-8010-975b-6876fa21191a)
2. [AutoHotkey script to start and stop LibreChat (running from node without Docker) from the system tray, hide the PowerShell window, and check if you have the latest version installed.](https://github.com/danny-avila/LibreChat/discussions/6589)
3. [TamperMonkey script to add some keyboard shortcuts and calculate the cost for a conversation (reload to update the calculation).](https://greasyfork.org/en/scripts/531081-librechat-shortcuts)
| 2025-03-31T20:17:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1joczb5/librechat_info_to_help_setup_and_add_keyboard/
|
FreeTacoInMyOveralls
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joczb5
| false | null |
t3_1joczb5
|
/r/LocalLLaMA/comments/1joczb5/librechat_info_to_help_setup_and_add_keyboard/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'Nyuu7POyhy6govfJ1dcuznEdiKSvWYvY75CrbkKZk54', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=108&crop=smart&auto=webp&s=1c9fdd18b399712019363db42aafb980b94bd314', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=216&crop=smart&auto=webp&s=899c4a23b4ceac10e86c1f39517d489870146375', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=320&crop=smart&auto=webp&s=f0fbf30be58ec54707a24cb4ac47d68af24442f7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=640&crop=smart&auto=webp&s=e590319308efb90d50448579fb76b003782dec8c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=960&crop=smart&auto=webp&s=e25bf929190c7aca5bc237df824850c31f043113', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?width=1080&crop=smart&auto=webp&s=bb8c7e2b754942d06a77b1979c63552b76523e40', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/FFgFDVf9yDNjhoip2yXpNCPko82IyWUZ4rR9l5sQd1Q.jpg?auto=webp&s=e3bf69e1a2674fdfdf6fdb17b1bc4de5488bd3eb', 'width': 1600}, 'variants': {}}]}
|
My first experiment with local LLM. He's annoying AF!
| 1 |
[removed]
| 2025-03-31T20:24:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jod5m9/my_first_experiment_with_local_llm_hes_annoying_af/
|
lalitjindal885
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jod5m9
| false | null |
t3_1jod5m9
|
/r/LocalLLaMA/comments/1jod5m9/my_first_experiment_with_local_llm_hes_annoying_af/
| false | false |
self
| 1 | null |
I made a (free) Chrome extension that uses AI to summarize Terms of Service pages
| 18 | 2025-03-31T20:27:24 |
https://chromewebstore.google.com/detail/tldr-terms/efecponachaghpgeicgkmhjohimlihah
|
Fhantop
|
chromewebstore.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jod848
| false | null |
t3_1jod848
|
/r/LocalLLaMA/comments/1jod848/i_made_a_free_chrome_extension_that_uses_ai_to/
| false | false | 18 |
{'enabled': False, 'images': [{'id': 'H6gBWe30V9-EkRoKntPQQRiUY_5Ssh3LGq1O3xLzuCM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0ag64RnxSE-M_f4cXi0BtDbizOt8oXodW071bcejVtM.jpg?width=108&crop=smart&auto=webp&s=58edd960c533c90e9f627c6feaeda29ff357af47', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/0ag64RnxSE-M_f4cXi0BtDbizOt8oXodW071bcejVtM.jpg?auto=webp&s=6360607c401166ca13b5dfd725c7bfaf09a8ca86', 'width': 128}, 'variants': {}}]}
|
||
What is the biggest out-of-box uncensored/abliterated model out there
| 1 |
[removed]
| 2025-03-31T20:28:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jod943/what_is_the_biggest_outofbox/
|
dearonesama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jod943
| false | null |
t3_1jod943
|
/r/LocalLLaMA/comments/1jod943/what_is_the_biggest_outofbox/
| false | false |
self
| 1 | null |
Orpheus TTS Local WebUI: Your Personal Text-to-Speech Studio, Gradio UI, Supports Emotive tags.
| 77 |
* 🎧 High-quality Text-to-Speech using the Orpheus TTS model
* 💻 Completely standalone - no external services or API keys needed
* 🔊 Multiple voice options (tara, leah, jess, leo, dan, mia, zac, zoe)
* 💾 Save audio to WAV files
* 🎨 Modern Gradio web interface
* 🔧 Adjustable generation parameters (temperature, top\_p, repetition penalty)
* Supports emotive tags `<laugh>`, `<chuckle>`, `<sigh>`, `<cough>`, `<sniffle>`, `<groan>`, `<yawn>`, `<gasp>`.
[https://github.com/akashjss/orpheus-tts-local-webui](https://github.com/akashjss/orpheus-tts-local-webui)
Audio Sample [https://voipnuggets.wordpress.com/wp-content/uploads/2025/03/tmpxxe176lm-1.wav](https://voipnuggets.wordpress.com/wp-content/uploads/2025/03/tmpxxe176lm-1.wav)
| 2025-03-31T20:31:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jodbgl/orpheus_tts_local_webui_your_personal/
|
akashjss
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jodbgl
| false | null |
t3_1jodbgl
|
/r/LocalLLaMA/comments/1jodbgl/orpheus_tts_local_webui_your_personal/
| false | false |
self
| 77 | null |
Ghibli style - can you generate it with Open AI api to avoid rate limit?
| 1 |
[removed]
| 2025-03-31T20:37:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jodgn9/ghibli_style_can_you_generate_it_with_open_ai_api/
|
OwenWatts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jodgn9
| false | null |
t3_1jodgn9
|
/r/LocalLLaMA/comments/1jodgn9/ghibli_style_can_you_generate_it_with_open_ai_api/
| false | false |
self
| 1 | null |
Another coding model, Achieves strong performance on software engineering tasks, including 37.2% resolve rate on SWE-Bench Verified.
| 92 | 2025-03-31T20:49:12 |
https://huggingface.co/all-hands/openhands-lm-32b-v0.1
|
Ornery_Local_6814
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jodrcx
| false | null |
t3_1jodrcx
|
/r/LocalLLaMA/comments/1jodrcx/another_coding_model_achieves_strong_performance/
| false | false | 92 |
{'enabled': False, 'images': [{'id': 'dYYd6WQf2OMYv4USmkEZax0k8gVSCCDH7cGy99NsAIA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=108&crop=smart&auto=webp&s=63abace822581e1e6cc336a1cd37c3864f9bf678', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=216&crop=smart&auto=webp&s=417847229009ec555dc4195668702622df28bc35', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=320&crop=smart&auto=webp&s=3fbecf3527ee133e81915815cac55f6cb42e5435', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=640&crop=smart&auto=webp&s=3b693b9df3a16a1584b89663d17220b1fe18b36a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=960&crop=smart&auto=webp&s=68497d7c78762e3948abf7f0f67917184dd523a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=1080&crop=smart&auto=webp&s=d60028199fb136c8e571a9eca9c3b108c182de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?auto=webp&s=d447de0512ef66cc238b579ec35b2c65a7893a4d', 'width': 1200}, 'variants': {}}]}
|
||
Has someone tried the new ChatGPT-4o (2025-03-27) on anything else than images?
| 0 |
I have now looked for a while through LocalLLaMA and Twitter on experiences besides image generation for the new ChatGPT-4o model. I hardly found a mention and I am really wondering why. Maybe because they only really announced the image-part?
I am explicitly referencing [this one](https://help.openai.com/en/articles/6825453-chatgpt-release-notes#h_10dcfa2a17) not the 4o model that most likely most people are still using (e.g. GitHub Copilot).
What amazes me is that it skyrocketed to the #1 position in my DevQualityEval benchmark. You can think of the benchmark(s) what you want, but it reflects pretty well what I want in a model for daily development work. I am currently pretty committed to test-driving Gemma 3 27B, but I was wondering if anybody else switched for a day or two and could share their experience?
To give some context on why I am interested, this is my summary of the benchmark results, and I've added some graphs:
* 🏁 2025-03-27 (new) (90.96%) beats 2024-11-20 (old) (84.09%) by a wide margin (+6.87) which makes it the new king 👑 of code generation in DevQualityEval v1.0
* 🐕🦺 With better context new (94.20%) slightly improves over old (91.89%: +2.31): only Anthropic’s Claude 3.7 Sonnet (2025-02-19) has an edge (95.03%)
* ⚙️ Main reason (as is for every new model lately) is the big improvement in compilable responses (+5.15%)
* 🗣️ Both are equally chatty but excess chattiness improved (1.36% -> 1.27%)
* ⛰️ Consistency and reliability in its output have improved greatly as well (2.31% -> 1.33%)
* 🦾 Request/response/retry-rate is as always with OpenAI: perfect
The new model is better in almost every way, but there are some regressions 😱
* 2025-03-27 is better in almost every task: writing tests (now the best model!), transpiling (only o3-mini is better), slightly better in migrating (others are also better) BUT in code repair… old and new are already perfect
* 2025-03-27 is now the best LLM for Go (basically perfect!) … AND … Java!
* However, there is a regression in Ruby: going from 95.47% to 93.94% (-1.53%)
https://preview.redd.it/q2ucju9f83se1.png?width=3050&format=png&auto=webp&s=591de6228d7f6bdffdbeb3968ba5cf7585eadd0a
https://preview.redd.it/b1kcgieg83se1.png?width=3050&format=png&auto=webp&s=5cdbfd4f7b5d51076638149d4204712d7d002b87
| 2025-03-31T20:51:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jodtml/has_someone_tried_the_new_chatgpt4o_20250327_on/
|
zimmski
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jodtml
| false | null |
t3_1jodtml
|
/r/LocalLLaMA/comments/1jodtml/has_someone_tried_the_new_chatgpt4o_20250327_on/
| false | false | 0 | null |
|
Augento: Fine-tune your agents with Deepseek R1-like reinforcement learning
| 1 | 2025-03-31T20:59:17 |
https://news.ycombinator.com/item?id=43537505
|
Zollerboy1
|
news.ycombinator.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1joe010
| false | null |
t3_1joe010
|
/r/LocalLLaMA/comments/1joe010/augento_finetune_your_agents_with_deepseek_r1like/
| false | false |
default
| 1 | null |
|
Openhands lm 32b v0.1 coding model out now. MIT Licence. Stats close to Deepseek V3 0324 on SWE-Bench
| 1 |
[removed]
| 2025-03-31T21:28:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1joepgq/openhands_lm_32b_v01_coding_model_out_now_mit/
|
maverick-tr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joepgq
| false | null |
t3_1joepgq
|
/r/LocalLLaMA/comments/1joepgq/openhands_lm_32b_v01_coding_model_out_now_mit/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'dYYd6WQf2OMYv4USmkEZax0k8gVSCCDH7cGy99NsAIA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=108&crop=smart&auto=webp&s=63abace822581e1e6cc336a1cd37c3864f9bf678', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=216&crop=smart&auto=webp&s=417847229009ec555dc4195668702622df28bc35', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=320&crop=smart&auto=webp&s=3fbecf3527ee133e81915815cac55f6cb42e5435', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=640&crop=smart&auto=webp&s=3b693b9df3a16a1584b89663d17220b1fe18b36a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=960&crop=smart&auto=webp&s=68497d7c78762e3948abf7f0f67917184dd523a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?width=1080&crop=smart&auto=webp&s=d60028199fb136c8e571a9eca9c3b108c182de24', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2wwsANQslIq5kOZIF2w1yy3GroEQnGK-ceJDzy3yNOI.jpg?auto=webp&s=d447de0512ef66cc238b579ec35b2c65a7893a4d', 'width': 1200}, 'variants': {}}]}
|
Can one RTX 3090 run Mistral-Small-24B or equivalent model with long prompt (~10k tokens) in a reasonable tps?
| 11 |
I am thinking of buying an RTX 3090 to build my local LLM setup. So far I am very satisfied with Mistral-Small-24B, which is \~14 GB in size, so 24 GB of VRAM seems able to handle it comfortably. But I plan to use it to help me read and analyze long articles (online webpage articles or local PDFs), so I am not sure how fast a 3090 could respond if I give it \~10k tokens. Do you have any suggestions?
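For a rough sense of whether the prompt even fits, here is a back-of-envelope KV-cache estimate (a minimal sketch — the layer count, KV-head count, and head dimension below are assumptions, so check the model's config.json before trusting the exact numbers):

```python
# Rough KV-cache size estimate for a ~24B GQA model with fp16 KV precision.
# Assumed architecture values (verify against the model's config.json):
n_layers = 40        # transformer layers
n_kv_heads = 8       # grouped-query attention KV heads
head_dim = 128       # dimension per head
bytes_per_value = 2  # fp16

tokens = 10_000
# K and V each store n_kv_heads * head_dim values per layer per token.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
total_gib = tokens * kv_bytes_per_token / 1024**3
print(f"{kv_bytes_per_token / 1024:.0f} KiB per token, ~{total_gib:.2f} GiB for {tokens} tokens")
```

Under those assumptions, a \~14 GB quantized checkpoint plus roughly 1.5 GiB of KV cache still leaves headroom on 24 GB, so the question is less about fit and more about how long prompt processing takes.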
| 2025-03-31T21:35:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1joevea/can_one_rtx_3090_run_mistralsmall24b_or/
|
rumboll
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joevea
| false | null |
t3_1joevea
|
/r/LocalLLaMA/comments/1joevea/can_one_rtx_3090_run_mistralsmall24b_or/
| false | false |
self
| 11 | null |
A very fast, cheap, and performant sparse retrieval system
| 1 |
[removed]
| 2025-03-31T21:56:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofdki/a_very_fast_cheap_and_performant_sparse_retrieval/
|
prateekvellala
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofdki
| false | null |
t3_1jofdki
|
/r/LocalLLaMA/comments/1jofdki/a_very_fast_cheap_and_performant_sparse_retrieval/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'JV4jgu8S3jukAvvYPZaKN20mX_pMp3YFDIg3llh8DDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=108&crop=smart&auto=webp&s=33c4464252b0b7d9c99d21aa034aebf5afb06320', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=216&crop=smart&auto=webp&s=2b57f592e97365b1fd240ff3ad33c67052f86cd5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=320&crop=smart&auto=webp&s=89bef738dae768818b2dd10119c4a6d2a124ef0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=640&crop=smart&auto=webp&s=9bd714e9e3947c766febe3438f150241351fdd3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=960&crop=smart&auto=webp&s=a63eb247a41da99ffdf4712e0e52225e49b4c060', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?width=1080&crop=smart&auto=webp&s=cfc2930e94543a8e00c7da59cccdbc8deeacde64', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/viVTrFoKSl7sDXigCe9A9fiLaSLnBobCWDsUWIGOydU.jpg?auto=webp&s=10ac7be007f077b532a6cf02f604d38e110ecea9', 'width': 1200}, 'variants': {}}]}
|
A very fast, cheap, and performant sparse retrieval system
| 1 |
[removed]
| 2025-03-31T21:58:57 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1joffan
| false | null |
t3_1joffan
|
/r/LocalLLaMA/comments/1joffan/a_very_fast_cheap_and_performant_sparse_retrieval/
| false | false |
default
| 1 | null |
||
Gpt 5 is here and it's open source
| 0 |
https://huggingface.co/collections/yandex/yandexgpt-5-lite-8b-67ea2fce9e55bca949318af6
| 2025-03-31T22:08:10 |
mlon_eusk-_-
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofnai
| false | null |
t3_1jofnai
|
/r/LocalLLaMA/comments/1jofnai/gpt_5_is_here_and_its_open_source/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'wPGKai5YysbRDX8lERMJ9AQuxTPBZQUbq2eGPTva8tM', 'resolutions': [{'height': 153, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=108&crop=smart&auto=webp&s=9ac0b4fd2e80b09d74c8acecd4254fcdb989ae23', 'width': 108}, {'height': 307, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=216&crop=smart&auto=webp&s=458f218068fcbe704da1694ee6c98942f1d1d872', 'width': 216}, {'height': 454, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=320&crop=smart&auto=webp&s=a4868d775f032705a5ca802d6cd30e64398b0596', 'width': 320}, {'height': 909, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=640&crop=smart&auto=webp&s=3136e7f07a3e204360079e618cdef3db12988e85', 'width': 640}, {'height': 1364, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=960&crop=smart&auto=webp&s=6c6676f2915f0de9cfa308a2416bae4b5a791b1f', 'width': 960}, {'height': 1535, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?width=1080&crop=smart&auto=webp&s=63d67af0e89375220c5c86532fb5a8dbbb457470', 'width': 1080}], 'source': {'height': 1535, 'url': 'https://preview.redd.it/7itzy6fem3se1.png?auto=webp&s=a1e230fdad4f5f19cec69f7f988034c3e6fb9d9f', 'width': 1080}, 'variants': {}}]}
|
||
Local llm apps to plug Google api into (for gemini 2.5 etc?)
| 1 |
I find AI Studio can be quite laggy and problematic, but I absolutely love the model, the context size, and features like branching.
Are there any local front ends where I can just plug in the API and avoid the bugginess/lag of AI Studio?
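One pattern that might work (a sketch only, not something I have verified end to end): Google exposes an OpenAI-compatible endpoint for the Gemini API, so any local front end that lets you set a custom OpenAI-style base URL can in principle be pointed at it. Roughly like this, where the model id is an assumption and should be confirmed by listing the available models first:

```python
# Minimal sketch: call the Gemini API through its OpenAI-compatible endpoint,
# the same way a local front end with a "custom base URL" field would.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",  # key from Google AI Studio
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-2.5-pro-exp-03-25",  # assumed model id; confirm with client.models.list()
    messages=[{"role": "user", "content": "Summarize this article in three bullet points."}],
)
print(response.choices[0].message.content)
```

Any front end that speaks the OpenAI chat-completions protocol should be able to reuse the same base URL and key.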
| 2025-03-31T22:08:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofnu4/local_llm_apps_to_plug_google_api_into_for_gemini/
|
the_doorstopper
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofnu4
| false | null |
t3_1jofnu4
|
/r/LocalLLaMA/comments/1jofnu4/local_llm_apps_to_plug_google_api_into_for_gemini/
| false | false |
self
| 1 | null |
Built a YC startup's product in a 24HR hackathon - check the demo video below..
| 1 |
[removed]
| 2025-03-31T22:10:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofozu/built_a_yc_startups_product_in_a_24hr_hackathon/
|
hjofficial
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofozu
| false | null |
t3_1jofozu
|
/r/LocalLLaMA/comments/1jofozu/built_a_yc_startups_product_in_a_24hr_hackathon/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Kr2Bs_JSs_yPuahWGrGBnWnixr5pahN3N_9nnZ7GNgU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=108&crop=smart&auto=webp&s=5cc5fedbbf0951690c7da207130b43a5b81db534', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=216&crop=smart&auto=webp&s=8b4e8b06719ea95a9010c7871560fe7514491679', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=320&crop=smart&auto=webp&s=6ed8365445f8042aadea5a82c2c6701fa435c1d2', 'width': 320}], 'source': {'height': 282, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?auto=webp&s=e1b410f97b7b5312c3645892b57c6eb3ddb5a3ff', 'width': 500}, 'variants': {}}]}
|
A fast, cheap, and performant sparse retrieval system
| 1 |
[removed]
| 2025-03-31T22:11:30 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofpz8
| false | null |
t3_1jofpz8
|
/r/LocalLLaMA/comments/1jofpz8/a_fast_cheap_and_performant_sparse_retrieval/
| false | false |
default
| 1 | null |
||
Built a YC startup's product in a 24HR hackathon - check the demo video below..
| 1 |
[removed]
| 2025-03-31T22:12:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofr64/built_a_yc_startups_product_in_a_24hr_hackathon/
|
hjofficial
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofr64
| false | null |
t3_1jofr64
|
/r/LocalLLaMA/comments/1jofr64/built_a_yc_startups_product_in_a_24hr_hackathon/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Kr2Bs_JSs_yPuahWGrGBnWnixr5pahN3N_9nnZ7GNgU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=108&crop=smart&auto=webp&s=5cc5fedbbf0951690c7da207130b43a5b81db534', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=216&crop=smart&auto=webp&s=8b4e8b06719ea95a9010c7871560fe7514491679', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?width=320&crop=smart&auto=webp&s=6ed8365445f8042aadea5a82c2c6701fa435c1d2', 'width': 320}], 'source': {'height': 282, 'url': 'https://external-preview.redd.it/mI51NQIyiUxA0KxTSrR7AVOfr1Bn40bWgmsTb9Hc22E.jpg?auto=webp&s=e1b410f97b7b5312c3645892b57c6eb3ddb5a3ff', 'width': 500}, 'variants': {}}]}
|
|
How good unsloth fine tuned models can actually get
| 20 |
I’ve been reading a bit about Unsloth fine-tuning and wondering how good these models can actually get.
I know a lot depends on the dataset, but before I go too deep into yet another rabbit hole, I want to get a sense of what’s realistically achievable—especially when it comes to fine-tuning a model to match my writing style. Is it possible to get decent results without massive datasets and expensive hardware?
I’ve tried searching for examples of fine-tuned Unsloth models, but all I find are tutorials—nothing I can actually try to see what kind of results are possible.
For those who have worked with Unsloth fine-tuning, what’s been your experience? I’m not chasing a specific use case, just experimenting, but I don’t want to sink a ton of time into this only to find out you really need a 32B+ model and a very specific setup for it to be worthwhile.
How big of a dataset and model would I actually need to get reasonable results? Would love to hear from anyone who’s tried.
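For anyone else at the same starting point, this is roughly what a first attempt looks like — a minimal Unsloth LoRA sketch, where the model name, dataset file, and hyperparameters are placeholders rather than a recommended recipe:

```python
# Minimal Unsloth LoRA fine-tuning sketch (placeholder model/dataset names).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",  # any supported base model
    max_seq_length=2048,
    load_in_4bit=True,  # keeps an 8B model inside a single consumer GPU
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16, lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# One JSON line per training example, each with a "text" field in your style.
dataset = load_dataset("json", data_files="my_writing_samples.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```

From what I have read, a few hundred to a few thousand style samples on a 7–8B base is where people usually start for tone transfer, but I would love real numbers from someone who has actually done it.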
| 2025-03-31T22:17:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofuyc/how_good_unsloth_fine_tuned_models_can_actually/
|
CautiousSand
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofuyc
| false | null |
t3_1jofuyc
|
/r/LocalLLaMA/comments/1jofuyc/how_good_unsloth_fine_tuned_models_can_actually/
| false | false |
self
| 20 | null |
Built a project inspired by a YC startup in a 24-hour hackathon – sharing our demo
| 1 |
[removed]
| 2025-03-31T22:18:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofw6z/built_a_project_inspired_by_a_yc_startup_in_a/
|
hjofficial
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofw6z
| false | null |
t3_1jofw6z
|
/r/LocalLLaMA/comments/1jofw6z/built_a_project_inspired_by_a_yc_startup_in_a/
| false | false |
self
| 1 | null |