title|score|selftext|created|url|author|domain|edited|gilded|gildings|id|locked|media|name|permalink|spoiler|stickied|thumbnail|ups|preview|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to implement citations in Web Search
| 6 |
I'm implementing web search in my app (which is like ChatGPT Desktop, but with local mode and other providers). I've got a V1 working through Tavily and plan to layer in other web search providers (SearXNG, Google, Jina, etc.) over time. But there's one point I'm stuck on:
**How do providers like Perplexity or OpenAI add the 'citations' at the relevant parts of the generated responses**? I can *ask* the model to do this by appending something to the end of my prompt (e.g., "add citations in your response"), but that seems to produce mixed results, stochastic at best. Does anyone know a more deterministic, programmatic way to go about this?
Code is [here](https://github.com/synth-inc/onit/pull/176/files).
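For what it's worth, one pattern that comes up for this: number the sources in the prompt, ask the model to emit `[n]` markers after each claim, then map the markers back to URLs in post-processing instead of trusting free-form citation text. A minimal sketch of that idea (all helper names hypothetical; not the code from the linked PR):

```python
import re

def build_prompt(question, results):
    # results: list of (title, url, snippet) tuples from the search provider
    sources = "\n".join(
        f"[{i + 1}] {title} ({url}): {snippet}"
        for i, (title, url, snippet) in enumerate(results)
    )
    return (
        f"Sources:\n{sources}\n\n"
        "Answer using only the sources above. After each claim, "
        "cite the supporting source as [n].\n\n"
        f"Question: {question}"
    )

def link_citations(answer, results):
    # Swap each [n] marker for a markdown link to the matching source URL
    def to_link(match):
        idx = int(match.group(1)) - 1
        if 0 <= idx < len(results):
            return f"[[{idx + 1}]]({results[idx][1]})"
        return match.group(0)  # leave out-of-range markers untouched
    return re.sub(r"\[(\d+)\]", to_link, answer)
```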
| 2025-04-03T17:24:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqog3h/how_to_implement_citations_in_web_search/
|
tilmx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqog3h
| false | null |
t3_1jqog3h
|
/r/LocalLLaMA/comments/1jqog3h/how_to_implement_citations_in_web_search/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': 'KBo9ORh3lTIJ9NEGQmmCT2FfQ21GF68eGGQ71t2P6T4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=108&crop=smart&auto=webp&s=de4f06e9bb690397f4bbcb48a9fd3beee6b9f544', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=216&crop=smart&auto=webp&s=dbfb8238483c96119a317c6f4ce5edafe3a4cd76', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=320&crop=smart&auto=webp&s=c6ed96c08f0c69830ac232579b5ca1c9b0112c51', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?auto=webp&s=dcfdb108957f99d0de732ff38e921c12e1034f7f', 'width': 400}, 'variants': {}}]}
|
Inference gemma 3 in browser with webLLM
| 3 |
I was trying to run WebLLM in my Next.js app to run inference on a lightweight LLM like mlc-ai/gemma-3-1b-it-q4f16_1-MLC, but I get "model not found" in the console log. When I use the sample model from their Next.js example setup (Llama-3.1-8B-Instruct-q4f32_1-MLC), I see the model being downloaded in the browser and cached in IndexedDB.
Am I missing something?
| 2025-04-03T17:34:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqopk8/inference_gemma_3_in_browser_with_webllm/
|
Applesaw69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqopk8
| false | null |
t3_1jqopk8
|
/r/LocalLLaMA/comments/1jqopk8/inference_gemma_3_in_browser_with_webllm/
| false | false |
self
| 3 | null |
Daniel Kokotajlo (ex-OpenAI) wrote a detailed scenario for how AGI might get built
| 0 | 2025-04-03T17:43:52 |
http://ai-2027.com/
|
Leonhard27
|
ai-2027.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqoyny
| false | null |
t3_1jqoyny
|
/r/LocalLLaMA/comments/1jqoyny/daniel_kokotajlo_exopenai_wrote_a_detailed/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'X3rIFb4C3QGl6qArlMWMfwDK9xMzTIduWqsuW831q04', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=108&crop=smart&auto=webp&s=b77578e13b43a852360e2c1845e9f74c13e91862', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=216&crop=smart&auto=webp&s=e0743f7a446f51aa4909e4e60147fa2ff0e8a6a9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=320&crop=smart&auto=webp&s=ade46b4b2cc9ccd87693e6271ae24bfaa7edd887', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=640&crop=smart&auto=webp&s=70a235f794c02bcdbc2fd2b4599786cb2a7cb6b6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=960&crop=smart&auto=webp&s=ff7f2a26f1a85a5172f7d21f28b36ec7a53e92df', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?width=1080&crop=smart&auto=webp&s=36f1e31109407fb945fa09005a80094016c30e89', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6eiLuF_9EVmcX3YCBRQU2O-LeaYXO6bcCAoBOq4kfQQ.jpg?auto=webp&s=f7fab44e5eeecf52545bff097828ae0ddf3fa2b3', 'width': 1200}, 'variants': {}}]}
|
||
guys I think I'm cooking something 💀💀
| 0 |
**Working on my first programming language using Python**
| 2025-04-03T17:45:46 |
internal-pagal
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqp0j5
| false | null |
t3_1jqp0j5
|
/r/LocalLLaMA/comments/1jqp0j5/guys_i_think_im_cooking_something/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '4PUNpkPy1wgu-9MJ9Wy1iHn2gE5CuwjoZca-1RfStMw', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=108&crop=smart&auto=webp&s=aa7b851f6d0c277e48e3721dd418db1dcab2846e', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=216&crop=smart&auto=webp&s=8a83d30d231702762db8cf8b9e6cda1124d0833d', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=320&crop=smart&auto=webp&s=e1635eb98fdd548d3d7de006e3b393a099dd1e23', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=640&crop=smart&auto=webp&s=dce8a2396ecdb93ec5e9dc87b8d279193955c924', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=960&crop=smart&auto=webp&s=075f0177e58a2d03703e7b172bfe8d8ed91ec25a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?width=1080&crop=smart&auto=webp&s=a283994b7df46d36419b06c3492f8b21ee083c6b', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/yvuqef22qnse1.png?auto=webp&s=d83b368b9ef47f885e86c41ba27414ccb622608d', 'width': 1366}, 'variants': {}}]}
|
||
Grok 3 system prompt (death penalty update)
| 1 |
[removed]
| 2025-04-03T17:57:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqpb5m/grok_3_system_prompt_death_penalty_update/
|
NoUnderstanding7620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqpb5m
| false | null |
t3_1jqpb5m
|
/r/LocalLLaMA/comments/1jqpb5m/grok_3_system_prompt_death_penalty_update/
| false | false | 1 | null |
|
AI Roleplay That Learns From You—My Thoughts
| 1 |
[removed]
| 2025-04-03T18:09:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqpnob/ai_roleplay_that_learns_from_youmy_thoughts/
|
RoughOwll
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqpnob
| false | null |
t3_1jqpnob
|
/r/LocalLLaMA/comments/1jqpnob/ai_roleplay_that_learns_from_youmy_thoughts/
| false | false |
self
| 1 | null |
Build local AI Agents and RAGs over your docs/sites in minutes now.
| 11 |
Hey r/LocalLLaMA,
Following up on Rlama – many of you were interested in how quickly you can get a local RAG system running. The key now is the new **Rlama Playground**, our web UI designed to take the guesswork out of configuration.
Building RAG systems often involves juggling models, data sources, chunking parameters, reranking settings, and more. It can get complex fast! The Playground simplifies this dramatically.
The Playground acts as a user-friendly interface to visually configure your entire Rlama RAG setup before you even touch the terminal.
**Here's how you build an AI solution in minutes using it:**
1. **Select Your Model:** Choose any model available via **Ollama** (like llama3, gemma3, mistral) or **Hugging Face** directly in the UI.
2. **Choose Your Data Source:**
   * **Local Folder:** Just provide the path to your documents (./my_project_docs).
   * **Website:** Enter the URL (https://rlama.dev), set crawl depth, concurrency, and even specify paths to exclude (/blog, /archive). You can also leverage sitemaps.
3. **(Optional) Fine-Tune Settings:**
   * **Chunking:** While we offer sensible defaults (Hybrid or Auto), you can easily select different strategies (Semantic, Fixed, Hierarchical) and adjust chunk size and overlap if needed. Tooltips guide you.
   * **Reranking:** Enable/disable reranking (improves relevance), set a score threshold, or even specify a different reranker model – all visually.
4. **Generate Command:** This is the magic button! Based on all your visual selections, the Playground instantly generates the precise rlama CLI command needed to build this exact RAG system.
5. **Copy & Run:**
   * Click "Copy".
   * Paste the generated command into your terminal.
   * Hit Enter. Rlama processes your data and builds the vector index.
6. **Query Your Data:** Once complete (usually seconds to a couple of minutes depending on data size), run rlama run my_website_rag and start asking questions!
**That's it!** The Playground turns potentially complex configuration into a simple point-and-click process, generating the exact command so you can launch your tailored, local AI solution in minutes. No need to memorize flags or manually craft long commands.
It abstracts the complexity while still giving you granular control if you want it.
**Try the Playground yourself:**
* **Playground/Website:** [https://rlama.dev/](https://rlama.dev/)
* **GitHub:** [https://github.com/dontizi/rlama](https://github.com/dontizi/rlama)
Let me know if you have any questions about using the Playground!
| 2025-04-03T18:18:56 |
https://www.youtube.com/watch?v=LJm4E5U5GvQ&t=1s&ab_channel=Dontizi
|
DonTizi
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqpw70
| false |
{'oembed': {'author_name': 'Dontizi', 'author_url': 'https://www.youtube.com/@Dontizi', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/LJm4E5U5GvQ?start=1&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="RLAMA Playground"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/LJm4E5U5GvQ/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'RLAMA Playground', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jqpw70
|
/r/LocalLLaMA/comments/1jqpw70/build_local_ai_agents_and_rags_over_your/
| false | false | 11 |
{'enabled': False, 'images': [{'id': '0Mz341XDI6zcn_WDK_N2hY7BGATEsYsgu7vkap048oo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yRO6a87xXfqB5nY-EMIOnUbobv9XobH0V6QgrfkAv5A.jpg?width=108&crop=smart&auto=webp&s=eeff5beab1315f85dae9562904154c0320e79653', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/yRO6a87xXfqB5nY-EMIOnUbobv9XobH0V6QgrfkAv5A.jpg?width=216&crop=smart&auto=webp&s=1dcd75061295c96d9ad8f917973bee57fe8f0b40', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/yRO6a87xXfqB5nY-EMIOnUbobv9XobH0V6QgrfkAv5A.jpg?width=320&crop=smart&auto=webp&s=4969d1eda1c511814f9d96d4b12dd67fb623f31d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/yRO6a87xXfqB5nY-EMIOnUbobv9XobH0V6QgrfkAv5A.jpg?auto=webp&s=7c9e098e88cdedc9e184df15e74221ca3aa03fef', 'width': 480}, 'variants': {}}]}
|
|
32B open-data reasoning model, trained on curated SFT dataset OpenThoughts2-1M, outperforms DeepSeekR1-32B. Big win for the open source community!
| 1 |
[removed]
| 2025-04-03T18:25:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqq2e5/32b_opendata_reasoning_model_trained_on_curated/
|
Ambitious_Anybody855
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqq2e5
| false | null |
t3_1jqq2e5
|
/r/LocalLLaMA/comments/1jqq2e5/32b_opendata_reasoning_model_trained_on_curated/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'QO81Uk9ejlGsXeJbsXo6iI68DI-diPomiC6YJ9ZJAwQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=108&crop=smart&auto=webp&s=6f81939dd31bc40fa7a68e7ff1c3f6631ee2722b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=216&crop=smart&auto=webp&s=833307d7190193c308cd95c7a07eedf7effff811', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=320&crop=smart&auto=webp&s=61e1edf4e4c01300a567b170bb82ef1dd3e72697', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=640&crop=smart&auto=webp&s=4b5877735c922816d0a53d5691997715ac425d0e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=960&crop=smart&auto=webp&s=ae90618d24aaaff9619f24b5eddb04272a0a91b9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?width=1080&crop=smart&auto=webp&s=d3f36958dc321eab2ff8bad8672a5f235ed60f3b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/h0K-eN-E0_KQgMPVmYQFJ6PHWE28h3cZLA1ejiBOHOU.jpg?auto=webp&s=fd7cd3af0b0bbaae13c0e4a8e80094994e335d8a', 'width': 1200}, 'variants': {}}]}
|
32B open-data reasoning model, trained on curated SFT dataset OpenThoughts2-1M, outperforms DeepSeekR1-32B. Big win for the open source community
| 1 |
[removed]
| 2025-04-03T18:34:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqqbg0/32b_opendata_reasoning_model_trained_on_curated/
|
Ambitious_Anybody855
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqqbg0
| false | null |
t3_1jqqbg0
|
/r/LocalLLaMA/comments/1jqqbg0/32b_opendata_reasoning_model_trained_on_curated/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'aj43hKkRk1iMJq1j9uAZK-q4lDo057-UN0ArZH2D0gE', 'resolutions': [{'height': 31, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=108&crop=smart&auto=webp&s=804381fcc2918fa6de16cfc123ae9f1627559b6d', 'width': 108}, {'height': 63, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=216&crop=smart&auto=webp&s=f1c21c8a886627dd39fa407f6b03d28e80b19cb7', 'width': 216}, {'height': 93, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=320&crop=smart&auto=webp&s=b63ca525d32172e05f427aa78827f604b81ed35e', 'width': 320}, {'height': 187, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=640&crop=smart&auto=webp&s=e9b70e0d57a240d6f0ad365053d7d80fa64320fe', 'width': 640}, {'height': 281, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=960&crop=smart&auto=webp&s=aeb9862b7878db6fd65492b0079a1fb1f0fb4de4', 'width': 960}, {'height': 316, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?width=1080&crop=smart&auto=webp&s=4f206a602c24b84fbd214d9e5616c1f7127fffb3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j_wwEH-a5mJhTteR9UKsB7qnfBhdyCt0YkZnWesXDNc.jpg?auto=webp&s=b93156a8a7059ebb9a7bc471987e46b0058a13f4', 'width': 2047}, 'variants': {}}]}
|
Looking for early testers: Real-time Prompt Injection Protection for GenAI Apps (free trial)
| 1 |
[removed]
| 2025-04-03T18:52:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqqrgo/looking_for_early_testers_realtime_prompt/
|
Opposite-Ear9714
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqqrgo
| false | null |
t3_1jqqrgo
|
/r/LocalLLaMA/comments/1jqqrgo/looking_for_early_testers_realtime_prompt/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'JeW7DA67GAhGdjfV5UzmUtd-J939XcGG1YIuPSyc1Ok', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=108&crop=smart&auto=webp&s=8de75b3c921baa6148d660ab6dd40faded143bc8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=216&crop=smart&auto=webp&s=bd63088d39fdc14556dcfc99a15a27a8ca48723b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=320&crop=smart&auto=webp&s=55cd73ace77e3f696b94122d7973df560ccfaf4b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=640&crop=smart&auto=webp&s=24d5e1e29e8d0548b5da2ab169c77e3f14410ecc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=960&crop=smart&auto=webp&s=84863f2d3686edc242da26a975990311154ebd9e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?width=1080&crop=smart&auto=webp&s=09e1deca4d047ae40926a42ef06207e2a98c9bf1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/M0WVNwTvy5xrR1XLYA_XV6WIulbIXnpuCYeyOpMpTJk.jpg?auto=webp&s=b0f3b2bda75081a6a9c68b1caf842d577dbdeb13', 'width': 1200}, 'variants': {}}]}
|
Any good options for running a local LLM that can analyze a directory of images and summarize them like this? (Gemini 2.5)
| 0 | 2025-04-03T18:59:42 |
LegendOfAB
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqqymj
| false | null |
t3_1jqqymj
|
/r/LocalLLaMA/comments/1jqqymj/any_good_options_for_running_a_local_llm_that_can/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'rO8yEVqAfq2bz76l7otFlywQQHUcCm8IwuH1ZZv1iM0', 'resolutions': [{'height': 175, 'url': 'https://external-preview.redd.it/EuExesgJbNvqH2dskUox8x1D2sbZfxjmkIEoCINd6Xk.png?width=108&crop=smart&auto=webp&s=d76bf423390a210f3e0e0cc10c5c283ff3ba66bb', 'width': 108}, {'height': 351, 'url': 'https://external-preview.redd.it/EuExesgJbNvqH2dskUox8x1D2sbZfxjmkIEoCINd6Xk.png?width=216&crop=smart&auto=webp&s=84fc435c4c82ab32ab99ff08ea286c9b915aeb49', 'width': 216}, {'height': 520, 'url': 'https://external-preview.redd.it/EuExesgJbNvqH2dskUox8x1D2sbZfxjmkIEoCINd6Xk.png?width=320&crop=smart&auto=webp&s=df04349f8ce53fd6edf25081796ab2cbe00dc1e7', 'width': 320}], 'source': {'height': 963, 'url': 'https://external-preview.redd.it/EuExesgJbNvqH2dskUox8x1D2sbZfxjmkIEoCINd6Xk.png?auto=webp&s=4d0ffb674bb55bab6cf77fc739e237497c702c6b', 'width': 592}, 'variants': {}}]}
|
|||
A very interesting mystery model dropped on openrouter
| 1 | 2025-04-03T19:02:48 |
https://openrouter.ai/openrouter/quasar-alpha
|
cpldcpu
|
openrouter.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqr1tr
| false | null |
t3_1jqr1tr
|
/r/LocalLLaMA/comments/1jqr1tr/a_very_interesting_mystery_model_dropped_on/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Z2hUMZU7ReMFyeOLTpukfOJFpvwlU6RCvEnKlCK9ZC0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=108&crop=smart&auto=webp&s=db1e934f26cd82d303a9cb792573c29fb7453139', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=216&crop=smart&auto=webp&s=d207ea0d89a0bd5b349fc2104aead8385f33a8eb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=320&crop=smart&auto=webp&s=eb078ca3590523758dc021b1c4f3c1370395c399', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=640&crop=smart&auto=webp&s=b8666f2747f04948de7ff2a504382d71411f8436', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=960&crop=smart&auto=webp&s=10d0c1332b776616d662739e1fe36edadf7b1d82', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=1080&crop=smart&auto=webp&s=24baaa5cb423b1d6c9156e67f4f59d0e02eebfc8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?auto=webp&s=8ed8b640f73f97426d28d24558cb4313eedb47d6', 'width': 1200}, 'variants': {}}]}
|
||
Best place to check LLM Rankings?
| 8 |
*I only know lmarena*
| 2025-04-03T19:05:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqr48r/best_place_to_check_llm_rankings/
|
Dangerous-Stress732
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqr48r
| false | null |
t3_1jqr48r
|
/r/LocalLLaMA/comments/1jqr48r/best_place_to_check_llm_rankings/
| false | false |
self
| 8 | null |
OASIS: Open-Sourced Social Media Simulator that uses up to 1 million agents & 20+ Rich Interactions
| 1 | 2025-04-03T19:12:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqraqb/oasis_opensourced_social_media_simulator_that/
|
iamnotdeadnuts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqraqb
| false | null |
t3_1jqraqb
|
/r/LocalLLaMA/comments/1jqraqb/oasis_opensourced_social_media_simulator_that/
| false | false | 1 | null |
||
OASIS: Open-Sourced Social Media Simulator that uses up to 1 million agents & 20+ Rich Interactions
| 35 |
Meet Matrix (the social simulation engine for Social Media)
Add any account
Drop a post
Let agents engage
Try tweeting as if you were billgates and watch what Elon does
[matrix.eigent.ai/x](http://matrix.eigent.ai/x)
| 2025-04-03T19:24:01 |
iamnotdeadnuts
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqrlo8
| false | null |
t3_1jqrlo8
|
/r/LocalLLaMA/comments/1jqrlo8/oasis_opensourced_social_media_simulator_that/
| false | false | 35 |
{'enabled': True, 'images': [{'id': 'lcQbLVo6AiXmdI4vhJzoHBSlQvPAsOGNtUQbk6sIZts', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=108&crop=smart&auto=webp&s=c0e38af5383b10e1e979d802bb418bf24f497947', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=216&crop=smart&auto=webp&s=2ade5765faf0e07b9d898a9b7a978c2af21a1af5', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=320&crop=smart&auto=webp&s=d5d803044b78cd5d680d29df13e7eaed477f0deb', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=640&crop=smart&auto=webp&s=7c525b8329c9751d8a985c8d5d18b3008a9c8066', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=960&crop=smart&auto=webp&s=11486005baf86c63e464f191f33f64e137c6ea1e', 'width': 960}, {'height': 498, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?width=1080&crop=smart&auto=webp&s=6948a5491053e9cc2a89c746c78742b6c221719c', 'width': 1080}], 'source': {'height': 1479, 'url': 'https://preview.redd.it/knefnw7o7ose1.png?auto=webp&s=830895fd2e3da4b5fcc1c1bd3aa27598d1b453f4', 'width': 3205}, 'variants': {}}]}
|
||
Quasar Alpha on OpenRouter
| 46 |
New "cloaked" model. How do you think what it is?
https://openrouter.ai/openrouter/quasar-alpha
Passes initial vibe check, but not sure about more complex tasks.
| 2025-04-03T19:26:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqrnx6/quasar_alpha_on_openrouter/
|
Everlier
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqrnx6
| false | null |
t3_1jqrnx6
|
/r/LocalLLaMA/comments/1jqrnx6/quasar_alpha_on_openrouter/
| false | false |
self
| 46 |
{'enabled': False, 'images': [{'id': 'Z2hUMZU7ReMFyeOLTpukfOJFpvwlU6RCvEnKlCK9ZC0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=108&crop=smart&auto=webp&s=db1e934f26cd82d303a9cb792573c29fb7453139', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=216&crop=smart&auto=webp&s=d207ea0d89a0bd5b349fc2104aead8385f33a8eb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=320&crop=smart&auto=webp&s=eb078ca3590523758dc021b1c4f3c1370395c399', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=640&crop=smart&auto=webp&s=b8666f2747f04948de7ff2a504382d71411f8436', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=960&crop=smart&auto=webp&s=10d0c1332b776616d662739e1fe36edadf7b1d82', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=1080&crop=smart&auto=webp&s=24baaa5cb423b1d6c9156e67f4f59d0e02eebfc8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?auto=webp&s=8ed8b640f73f97426d28d24558cb4313eedb47d6', 'width': 1200}, 'variants': {}}]}
|
Is there any free uncensored image generator?🤔
| 0 |
I have a low-end laptop, so I can't run an image generator locally. I also don't want to pay because I already have API credits in OpenAI and Anthropic.
| 2025-04-03T19:36:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqrxgq/is_there_any_free_uncensored_image_generator/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqrxgq
| false | null |
t3_1jqrxgq
|
/r/LocalLLaMA/comments/1jqrxgq/is_there_any_free_uncensored_image_generator/
| false | false |
self
| 0 | null |
Discussion: Not Using Local LLMs is Wasting Unused Consumer Hardware!
| 0 |
Hey LocalLLaMA fam! Hot take: if you bought decent hardware in the last 5 years and aren't running local LLMs in the background, you're wasting it! These models run WAY better than most people realize on regular consumer gear.
**Your Hardware is Being Wasted Right Now:**
* Any gaming PC with 16GB+ RAM is sitting idle 90% of the time when it could be running <32B models.
* Even your integrated GPU can handle basic inference!
* M1/M2 Macs are really good because of their shared memory.
**Real Numbers That Will Surprise You:**
* RTX 2080: deepseek-r1:8b hits ~45 tokens/sec
* M4 Mac mini: even 32B QwQ runs at like ~20 tokens/sec
* Even an old GTX 1060 still manages 8-10 tokens/sec!
I've been building local agents with Observer AI (my open source project) and honestly they really do work!
I know this sounds like crypto mining BS, but super simple agents are genuinely useful! Some I've uploaded recently:
* **German Flashcard Agent:** Generates flashcards with vocabulary it sees on screen while I'm learning German
* **Activity Tracking Agent:** Keeps a log of things I do on my computer (without creepy privacy issues)
I know this isn't for everyone and it won't be like "having a personal assistant," but simple tasks with local inference really do work pretty well! What hardware are you currently underutilizing? Am I wrong here?
| 2025-04-03T20:08:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqss09/discussion_not_using_local_llms_is_wasting_unused/
|
Roy3838
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqss09
| false | null |
t3_1jqss09
|
/r/LocalLLaMA/comments/1jqss09/discussion_not_using_local_llms_is_wasting_unused/
| false | false |
self
| 0 | null |
llama.cpp discussion - Experimenting with custom quants
| 31 | 2025-04-03T20:30:21 |
https://github.com/ggml-org/llama.cpp/discussions/12741
|
Master-Meal-77
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqtcav
| false | null |
t3_1jqtcav
|
/r/LocalLLaMA/comments/1jqtcav/llamacpp_discussion_experimenting_with_custom/
| false | false | 31 |
{'enabled': False, 'images': [{'id': 'y2CoMAtjdSgQupsqkJ4Pybd4QnFnh0Uo4HNL7mKRzGk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=108&crop=smart&auto=webp&s=891a823fece8f4344346f9a9cdd0b815fa2bea0a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=216&crop=smart&auto=webp&s=4f6b986c7c61af71234189dac032688ab22b661a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=320&crop=smart&auto=webp&s=a05b35a7e22234bc65edf6206daedfd9396bbb99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=640&crop=smart&auto=webp&s=e0990f8b73a03b0471a3890665aa7002573c0fda', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=960&crop=smart&auto=webp&s=5701cc6e157c0de4de0155efe39d718135c61508', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?width=1080&crop=smart&auto=webp&s=c5eb9bc44cfcdd281c764033cdfb673957c0d7e4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/46Rep-nLx-3vJHhS_PeWnbGdv0vLCjLkOmttYuCisu0.jpg?auto=webp&s=7baa2f39839030429280a2316c9b74402c7db28d', 'width': 1200}, 'variants': {}}]}
|
||
Need help for my company to set up local AI to help our devs
| 1 |
[removed]
| 2025-04-03T20:55:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqtz1k/need_help_for_my_company_to_set_up_local_ai_to/
|
curious_ilan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqtz1k
| false | null |
t3_1jqtz1k
|
/r/LocalLLaMA/comments/1jqtz1k/need_help_for_my_company_to_set_up_local_ai_to/
| false | false |
self
| 1 | null |
Fairly simple coding question throwing off a lot of smallish models
| 15 |
I have this bad CUDA code below that I wanted checked and corrected. A lot of models around the 20-30B range seem to fail. Most of them identify and address some of the "less serious" issues with the code but don't identify and fix the main issue, which is to move the cudaHello kernel out of main.
The latest Gemma 27B fails this miserably. Gemini Flash 1.5 and above, of course, work fine.
The smaller Qwen2.5 Coder-14B fails, but the 32B version does work well.
Some of the models that do work can still produce some unnecessary code. Only some of them correctly identify and eliminate the whole malloc/free parts which are not required.
One notable exception in this range that works perfectly is Mistral-Small-24B.
These results were very surprising to me. If folks have any other smallish models handy can you please try this out on some of the latest versions?
Any thoughts on why simple code like this seems to trump so many models after all this time?
does this code look right? if not, can you provide the corrected version?
    #include <iostream>
    #include <cuda.h>

    int main() {
        // Allocate on device
        char *dev;
        size_t numThreads = 1024;
        cudaMalloc(&dev, numThreads);

        // Kernel function
        __global__ void cudaHello() {
            int i = threadIdx.x;
            std::cout << "Hello, CUDA! from thread " << i << std::endl;
        }

        // Launch kernel
        cudaLaunch(&cudaHello, numThreads);

        // Cleanup
        cudaFree(dev);
        return 0;
    }
| 2025-04-03T21:20:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqum8l/fairly_simple_coding_question_throwing_off_lot_of/
|
gamesntech
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqum8l
| false | null |
t3_1jqum8l
|
/r/LocalLLaMA/comments/1jqum8l/fairly_simple_coding_question_throwing_off_lot_of/
| false | false |
self
| 15 | null |
Best PYTHON coding assist for RTX5070ti?
| 2 |
Good evening all,
I intend to learn Python and will be teaching myself with the assistance of AI running on an RTX 5070 Ti (16 GB VRAM); the card is being delivered tomorrow.
The system is a Ryzen 9700X with 64 GB RAM (currently using CPU graphics).
I've got Ollama installed and currently running on CPU only, using [Msty.app](http://Msty.app) as the front end.
I've been testing out qwen2.5-coder:32b this evening, and although it's running quite slow on the CPU, it seems to be giving good results so far. It is, however, using about 20 GB of RAM, which is too much to run on the 5070 Ti.
Questions:
1. What models are recommended for coding? – or have I randomly picked a good one with qwen?
2. If a model won't fit entirely on the GPU, will it 'split' and use system RAM as well? Or does it have to fit entirely on the GPU?
Any other advice is welcome, I’m entirely new to this!
| 2025-04-03T21:21:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqun8o/best_python_coding_assist_for_rtx5070ti/
|
AIgavemethisusername
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqun8o
| false | null |
t3_1jqun8o
|
/r/LocalLLaMA/comments/1jqun8o/best_python_coding_assist_for_rtx5070ti/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'sv0tq-aueEMKDDgw0YtVujf6SXExWM65jMEDEVVvD8A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=108&crop=smart&auto=webp&s=ca06e6d9fdaf3ca006abc27c90abc2f871f53fe4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=216&crop=smart&auto=webp&s=694fdd820831391ffebc08a7cb09da6d49bdbead', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=320&crop=smart&auto=webp&s=fed8118b58679aad181627992d4e7932fd0c790b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=640&crop=smart&auto=webp&s=2644b9317a7703eca57cd82b0842ba85785d1da8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=960&crop=smart&auto=webp&s=d6127ee12303a70d7b681fbba4b6ebc1294c215d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?width=1080&crop=smart&auto=webp&s=0c502af88cbe31f08853ff6a76815b26eae593e0', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/m5iVSKPlUCyqB5EPOVneQUFzZdva7zCaLBzEYdnlTok.jpg?auto=webp&s=e8dad5077c7487fc6f9a1829285c1df56e555070', 'width': 2400}, 'variants': {}}]}
|
Azure AI Foundry | OpenAI
| 0 |
Uh... wtf Microsoft? A phrase used all too often.
But, seriously? What. The. Fuck.
https://preview.redd.it/uibezz7ruose1.png?width=1904&format=png&auto=webp&s=ea80d1433b9189e3c31d11bf0955b2af3edd2ab2
| 2025-04-03T21:32:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jquwzv/azure_ai_foundry_openai/
|
Sad_Rub2074
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jquwzv
| false | null |
t3_1jquwzv
|
/r/LocalLLaMA/comments/1jquwzv/azure_ai_foundry_openai/
| false | false | 0 | null |
|
I'm trying to build a lyrics generator using a local LLM with fine-tuning
| 1 |
[removed]
| 2025-04-03T21:53:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqvfma/im_trying_to_build_a_lyrics_generator_using_local/
|
ShakeAdventurous9016
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqvfma
| false | null |
t3_1jqvfma
|
/r/LocalLLaMA/comments/1jqvfma/im_trying_to_build_a_lyrics_generator_using_local/
| false | false |
self
| 1 | null |
Tenstorrent Launches Blackhole™ Developer Products at Tenstorrent Dev Day
| 35 | 2025-04-03T21:54:54 |
https://tenstorrent.com/vision/tenstorrent-launches-blackhole-developer-products-at-tenstorrent-dev-day
|
cafedude
|
tenstorrent.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqvgj8
| false | null |
t3_1jqvgj8
|
/r/LocalLLaMA/comments/1jqvgj8/tenstorrent_launches_blackhole_developer_products/
| false | false | 35 |
{'enabled': False, 'images': [{'id': 'jO8g1l_5h8grjhZO4tCQ4Rn_afh5w6CaFTC9KhC3Bz8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=108&crop=smart&auto=webp&s=a1cf07bc976679fa03926587cf75930c466844fd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=216&crop=smart&auto=webp&s=0acdcdc2fd7ca0bc34c038551e83d63329b2e7e9', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=320&crop=smart&auto=webp&s=1b93c5923cf6eb5785a8ff72a6d5a9f28a3b7c17', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=640&crop=smart&auto=webp&s=5873af5b2b24b329f1d51e823e88d4d62c00a070', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=960&crop=smart&auto=webp&s=7d86795c68cb075fc7bf76ddcedffac716ce6497', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?width=1080&crop=smart&auto=webp&s=be5eccce7674ebd4abf1bb52b27d14533d71d5f0', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://external-preview.redd.it/0TFIOcmgqMII4bL3YWw8Idip-V5qlu9MQ1vQPJ86i6A.jpg?auto=webp&s=36b8ed272a8f555f134408d2a04792b7c8bdbeea', 'width': 1800}, 'variants': {}}]}
|
||
AI video generation is now faster on AMD than Nvidia
| 1 |
[removed]
| 2025-04-03T22:38:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqwhgo/ai_video_generation_is_now_faster_on_amd_than/
|
Rich_Artist_8327
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqwhgo
| false | null |
t3_1jqwhgo
|
/r/LocalLLaMA/comments/1jqwhgo/ai_video_generation_is_now_faster_on_amd_than/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-eC3Auq09wkNtemZewGt6baxzfZ98I9JOqWAzRuET5c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?width=108&crop=smart&auto=webp&s=e1d7e89412aa9cd276204990bcb62781410a7028', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?width=216&crop=smart&auto=webp&s=99e84523f6b4d3b8ed275ab16ff24e93534bf1ce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?width=320&crop=smart&auto=webp&s=45119fd5a2f976363ae879bfd691f9168b47ffec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?width=640&crop=smart&auto=webp&s=d2782d0534175717be25e3f1347cb28886304f1c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?width=960&crop=smart&auto=webp&s=07a4c17751e9cc9fa2e43aee6e18804bf2de57d7', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/FTZtddHagykd4_3qSqnrlQsZSvT42vUR3j2MQY47E2Y.jpg?auto=webp&s=9c7c8e6ec405984e28ce9bdcabd17b51927d2a81', 'width': 1024}, 'variants': {}}]}
|
Combining 16 GB VRAM rtx 4060 Ti and 6 GB VRAM GTX 1660 Ti for qwen 32B q4 with decent context.
| 1 |
Hello, the target is Qwen 2.5 with Q4 quantization. Which inference tool will split the model so as to use the VRAM on both GPUs as fully as possible?
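For reference, llama.cpp-based tools can split a model across mismatched GPUs via tensor-split proportions. A minimal sketch with the llama-cpp-python bindings (model filename hypothetical; the 16:6 ratio mirrors the two cards' VRAM):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,        # offload as many layers as possible
    tensor_split=[16, 6],   # per-GPU proportions for the 4060 Ti and 1660 Ti
    n_ctx=8192,
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```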
| 2025-04-03T23:05:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqx34l/combining_16_gb_vram_rtx_4060_ti_and_6_gb_vram/
|
Masterofironfist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqx34l
| false | null |
t3_1jqx34l
|
/r/LocalLLaMA/comments/1jqx34l/combining_16_gb_vram_rtx_4060_ti_and_6_gb_vram/
| false | false |
self
| 1 | null |
LLM engineering really worth it?
| 0 |
Hey guys, looking for a suggestion. As I am trying to learn LLM engineering, is it really worth learning in 2025? If yes, can I consider it as my solo skill and choose it as my career path? What's your take on this?
Thanks
| 2025-04-03T23:54:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqy5tw/llm_engineering_really_worth_it/
|
Ok_Anxiety2002
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqy5tw
| false | null |
t3_1jqy5tw
|
/r/LocalLLaMA/comments/1jqy5tw/llm_engineering_really_worth_it/
| false | false |
self
| 0 | null |
Interviewer at FAANG said you can combine requests during inference?
| 2 |
Was on the topic of setting up an inference server, with input requests having varying lengths of input tokens. Example -
>Request 1 - 10 tokens
Request 2 - 10 tokens
Request 3 - 10,000 tokens
I mentioned that if the maximum context length is 10,000, inference would be pretty inefficient as the first two requests need to be padded.
The interviewer said we can combine requests 1 and 2 before sending them to the inference server to improve efficiency, and the output would be two tokens. How is this possible? Doesn't each token have to attend to every other token in the same input? Am I misunderstanding, or is that interviewer just smoking something?
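For what it's worth, one trick that makes this possible is sequence packing: concatenate the short requests into one sequence and use a block-diagonal attention mask so tokens never attend across request boundaries. A toy sketch of the mask (hypothetical helper, not any particular serving framework):

```python
import numpy as np

def block_diagonal_causal_mask(lengths):
    """Token i may attend to token j only if both belong to the same
    packed request and j <= i (causal)."""
    total = sum(lengths)
    seq_ids = np.repeat(np.arange(len(lengths)), lengths)  # request id per token
    same_request = seq_ids[:, None] == seq_ids[None, :]
    causal = np.tril(np.ones((total, total), dtype=bool))
    return same_request & causal

# Requests 1 and 2 (10 tokens each) packed into one 20-token sequence
mask = block_diagonal_causal_mask([10, 10])
print(mask[12, 5])   # False: request 2 cannot see request 1's tokens
print(mask[12, 11])  # True: causal attention within request 2
```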
| 2025-04-03T23:56:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqy6zs/interviewer_at_faang_said_you_can_combine/
|
calflikesveal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqy6zs
| false | null |
t3_1jqy6zs
|
/r/LocalLLaMA/comments/1jqy6zs/interviewer_at_faang_said_you_can_combine/
| false | false |
self
| 2 | null |
Ollama Fix - gemma-3-12b-it-qat-q4_0-gguf
| 10 |
Hi, I was having trouble downloading the new official Gemma 3 quantization.
I tried `ollama run` [`hf.co/google/gemma-3-12b-it-qat-q4_0-gguf`](http://hf.co/google/gemma-3-12b-it-qat-q4_0-gguf) but got an error: `pull model manifest: 401: {"error":"Invalid username or password."}`.
I ended up downloading it and uploading it to my own Hugging Face account. I thought this might be helpful for others experiencing the same issue.
ollama run [hf.co/vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf](http://hf.co/vinimuchulski/gemma-3-12b-it-qat-q4_0-gguf)
ollama run [hf.co/vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf](http://hf.co/vinimuchulski/gemma-3-4b-it-qat-q4_0-gguf)
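If you'd rather pull the official file directly, the 401 above is consistent with the Google repo being gated behind a license acceptance, so an authenticated download may work. A minimal sketch with huggingface_hub (filename hypothetical; check the repo's actual file list):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="google/gemma-3-12b-it-qat-q4_0-gguf",
    filename="gemma-3-12b-it-q4_0.gguf",  # hypothetical; verify against the repo
    token="hf_...",  # token from an account that accepted the license
)
print(path)  # local path you could then import into Ollama via a Modelfile
```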
| 2025-04-04T00:07:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqyfs9/ollama_fix_gemma312bitqatq4_0gguf/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqyfs9
| false | null |
t3_1jqyfs9
|
/r/LocalLLaMA/comments/1jqyfs9/ollama_fix_gemma312bitqatq4_0gguf/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'oUSEG5VnS-qVTwxm_cb68FNtotlcDAOgGmZBPS-oVjo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=108&crop=smart&auto=webp&s=135dbe2ac1edec46ec83a5131a6f8efdad80a18a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=216&crop=smart&auto=webp&s=da011b61116366300f7ca5dfa1d95340fd776b52', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=320&crop=smart&auto=webp&s=33ae22c30a1b2eabf4c7a0761e936bb6b36f1216', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=640&crop=smart&auto=webp&s=99f3a41780dac2c2aa3cdbbdf4f5fc817b5359e4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=960&crop=smart&auto=webp&s=6bc8bfadbabea972f0c53227f0990c2cb78c7d40', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?width=1080&crop=smart&auto=webp&s=18a7eee12759d95bca94abfb4462ad6ce70507d7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rCGdHIJFe67T3Kiqic7XZtO_XKzofuvfjC03TlOHi10.jpg?auto=webp&s=6b4200c59f458feeab1972e84d730bde7d5e4d82', 'width': 1200}, 'variants': {}}]}
|
Trump Accused of Using ChatGPT to Create Tariff Plan After AI Leads Users to Same Formula: 'So AI is Running the Country'
| 85 | 2025-04-04T00:36:41 |
https://www.latintimes.com/trump-accused-using-chatgpt-create-tariff-plan-after-ai-leads-users-same-formula-so-ai-579899
|
fallingdowndizzyvr
|
latintimes.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqz1q4
| false | null |
t3_1jqz1q4
|
/r/LocalLLaMA/comments/1jqz1q4/trump_accused_of_using_chatgpt_to_create_tariff/
| false | false | 85 |
{'enabled': False, 'images': [{'id': 'alfXMTvTC324wTW0DXxGgKrEw3mmTG-qCiLMfXCBuTY', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=108&crop=smart&auto=webp&s=f9b89bc17fd3be7a3e8c2ad1c91e7768c5cbb4a7', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=216&crop=smart&auto=webp&s=35353e26edc4f780277df7c3dd4c3c58a6d4438f', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=320&crop=smart&auto=webp&s=319f34421d2758a822532a5ed735fbe5c421caa1', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=640&crop=smart&auto=webp&s=461c6d0b992677b75ef1ec98a8cdad2c1ea5d412', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=960&crop=smart&auto=webp&s=e435f117f14f46739ae44fecfc81e05bf8ff4de2', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?width=1080&crop=smart&auto=webp&s=7488128d7b683c4154dfd0a976893a8ee45c23ac', 'width': 1080}], 'source': {'height': 1333, 'url': 'https://external-preview.redd.it/FLW5M5MrSLJ9Ta13vR0DptjLWfkhMEgP2UFiFfUPDwU.jpg?auto=webp&s=10e0fb4c9c5d73192e0bf1c623a5e78059914c11', 'width': 2000}, 'variants': {}}]}
|
||
Is there any major player lately besides DeepSeek and Qwen?
| 7 |
I'm talking about open source models. To my knowledge, the latest things are Qwen-Max and R1.
| 2025-04-04T00:39:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqz3w9/is_there_any_major_player_lately_besides_deepseek/
|
ThaisaGuilford
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqz3w9
| false | null |
t3_1jqz3w9
|
/r/LocalLLaMA/comments/1jqz3w9/is_there_any_major_player_lately_besides_deepseek/
| false | false |
self
| 7 | null |
I Created A Lightweight Voice Assistant for Ollama with Real-Time Interaction
| 16 |
Hey everyone! I just built OllamaGTTS, a lightweight voice assistant that brings AI-powered voice interactions to your local Ollama setup using Google TTS for natural speech synthesis. It's fast, interruptible, and optimized for real-time conversations. I am aware that some people prefer to keep everything local, so I am working on an update that will likely use Kokoro for local speech synthesis. I would love to hear your thoughts on it and how it can be improved.
Key Features
* Real-time voice interaction (Silero VAD + Whisper transcription)
* Interruptible speech playback (no more waiting for the AI to finish talking)
* FFmpeg-accelerated audio processing (optional speed-up for faster replies)
* Persistent conversation history with configurable memory
[GitHub Repo: https://github.com/ExoFi-Labs/OllamaGTTS](https://github.com/ExoFi-Labs/OllamaGTTS)
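For anyone curious about the Ollama side of the loop, the core is just streaming chat completions and speaking chunks as they arrive; the VAD, Whisper, and TTS layers wrap around it. A minimal sketch with the ollama Python client (speak() is a hypothetical stand-in for the TTS step, not code from the repo):

```python
import ollama

def reply(history):
    # Stream tokens so playback can start (and be interrupted) mid-response
    stream = ollama.chat(model="llama3", messages=history, stream=True)
    text = ""
    for chunk in stream:
        piece = chunk["message"]["content"]
        text += piece
        # speak(piece)  # hypothetical TTS hook; flush on sentence boundaries
    return text

print(reply([{"role": "user", "content": "Say hi in one sentence."}]))
```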
| 2025-04-04T00:52:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqzd21/i_created_a_lightweight_voice_assistant_for/
|
typhoon90
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqzd21
| false | null |
t3_1jqzd21
|
/r/LocalLLaMA/comments/1jqzd21/i_created_a_lightweight_voice_assistant_for/
| false | false |
self
| 16 |
{'enabled': False, 'images': [{'id': 'BY3Y1-zT1SnGLo8WRedFsJJT3HGR1II1f77w3Qqyyp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=108&crop=smart&auto=webp&s=5c441ba5bebb9b89644db82ad834fd3a8b57eecb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=216&crop=smart&auto=webp&s=25fc003c3c46e6b265d9a1e63210b6a5f56c23ce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=320&crop=smart&auto=webp&s=abf9ba1b2da40d37673d9c557a75f127c1e263d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=640&crop=smart&auto=webp&s=8fc716fa62491f8fe31d39ad3e0390696efc2c80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=960&crop=smart&auto=webp&s=0dc4045f74cd29eb8527dba0a32946abd11407b0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?width=1080&crop=smart&auto=webp&s=1d195b1b1728d00cbf4be97aa580f8d21fb74989', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7V8nv7foMceAsZT3camrBISx4jyKLEzUDYJkfP0vCWM.jpg?auto=webp&s=f1b50932eeb4fdef19651ffc0a0c873089946352', 'width': 1200}, 'variants': {}}]}
|
Add GPU by using a mining riser
| 1 |
[removed]
| 2025-04-04T00:56:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqzg8c/add_gpu_by_using_a_mining_riser/
|
RoninARG
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqzg8c
| false | null |
t3_1jqzg8c
|
/r/LocalLLaMA/comments/1jqzg8c/add_gpu_by_using_a_mining_riser/
| false | false |
self
| 1 | null |
Llama 4 sighting
| 169 |
https://x.com/legit_api/status/1907941993789141475
| 2025-04-04T01:12:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqzr2y/llama_4_sighting/
|
Tha_One
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqzr2y
| false | null |
t3_1jqzr2y
|
/r/LocalLLaMA/comments/1jqzr2y/llama_4_sighting/
| false | false |
self
| 169 |
{'enabled': False, 'images': [{'id': 'cS5nGmkNiiqh4AYzUKTfR3o6orgX58A3khnf3uQZ2FU', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=108&crop=smart&auto=webp&s=f10853a259f775e999de783bff75c67f2fe53343', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=216&crop=smart&auto=webp&s=2458af17ab2e4f6ad34a4f90f0e490efa4b3652a', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=320&crop=smart&auto=webp&s=8ad32f95ba631dbf4975267f09e3f507380ec1a9', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=640&crop=smart&auto=webp&s=d7eb1f234274722120ee7dc2bb08ee5b643887b8', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=960&crop=smart&auto=webp&s=ff9196a32c8c9fb21eb4a59ae77efb3dabfa192f', 'width': 960}, {'height': 591, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?width=1080&crop=smart&auto=webp&s=28223816b06940a49d99e737744ca4ea0ea206ff', 'width': 1080}], 'source': {'height': 661, 'url': 'https://external-preview.redd.it/HKNs4sH3WcaEGmGullfoJiRHUcEytz124zZZ2L3Sd9k.jpg?auto=webp&s=1dd19cfe7ccf7ecec0e2ae4e8e17a4fa6beeaae1', 'width': 1206}, 'variants': {}}]}
|
Nvidia Tesla M40
| 2 | 2025-04-04T01:14:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqzsox/nvidia_tesla_m40/
|
00quebec
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqzsox
| false | null |
t3_1jqzsox
|
/r/LocalLLaMA/comments/1jqzsox/nvidia_tesla_m40/
| false | false | 2 | null |
||
New long context model "quasar-alpha" released for free on OpenRouter | tested on Fiction.live long context bench
| 32 | 2025-04-04T01:58:11 |
fictionlive
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr0na9
| false | null |
t3_1jr0na9
|
/r/LocalLLaMA/comments/1jr0na9/new_long_context_model_quasaralpha_released_for/
| false | false | 32 |
{'enabled': True, 'images': [{'id': 'HbS9CJlSsv73ZxU_4wkSgo-uZwj7Aid8y25KnK56sdw', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=108&crop=smart&auto=webp&s=a37e08911b2695096db0bd8dc5ad9835e62d71f0', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=216&crop=smart&auto=webp&s=b9056c416fe887f8a7a30a9393be554a9cba9e43', 'width': 216}, {'height': 302, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=320&crop=smart&auto=webp&s=7e1103f28d5f3c62b3cadbe85a6ee57ca1d08a49', 'width': 320}, {'height': 605, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=640&crop=smart&auto=webp&s=9b478bd386cac9b9e16f4ddf628034c68780f83d', 'width': 640}, {'height': 908, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=960&crop=smart&auto=webp&s=2d5d0ebc91a93029442a497294e116da08527260', 'width': 960}, {'height': 1021, 'url': 'https://preview.redd.it/pdl90u206qse1.png?width=1080&crop=smart&auto=webp&s=ce3136cdae8ffd87a07485b28a981f37a2e5b860', 'width': 1080}], 'source': {'height': 2278, 'url': 'https://preview.redd.it/pdl90u206qse1.png?auto=webp&s=790f4cc6cedde1d28263686ea6e42050c423d932', 'width': 2408}, 'variants': {}}]}
|
|||
Howto: Building a GPU Server with 8xRTX 4090s for local inference
| 627 |
Marco Mascorro built a pretty cool 8x4090 server for local inference and wrote a detailed how-to guide on what parts he used and how to put everything together. I hope this is interesting for anyone who is looking for a local inference solution and doesn't have the budget for A100s or H100s. The build should work with 5090s as well.
Full guide is here: [https://a16z.com/building-an-efficient-gpu-server-with-nvidia-geforce-rtx-4090s-5090s/](https://a16z.com/building-an-efficient-gpu-server-with-nvidia-geforce-rtx-4090s-5090s/)
We'd love to hear comments/feedback and would be happy to answer any questions in this thread. We are huge fans of open source/weights models and local inference.
| 2025-04-04T02:00:32 |
appenz
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr0oy2
| false | null |
t3_1jr0oy2
|
/r/LocalLLaMA/comments/1jr0oy2/howto_building_a_gpu_server_with_8xrtx_4090s_for/
| false | false | 627 |
{'enabled': True, 'images': [{'id': 'Nd9xnA0GPR-LEIYN_L2b1c0wK9-LYVGw4WHVBCk1_kg', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=108&crop=smart&auto=webp&s=bbbb07f3b2a20162257839be8d02f7c217547c4c', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=216&crop=smart&auto=webp&s=41d76f54adc8d1c7171b67e1f4672bd7b70b8201', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=320&crop=smart&auto=webp&s=ddcf0d37b76bbc4c189d50c5066d889256bd9583', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=640&crop=smart&auto=webp&s=2d21a41bd4e8ee1781e6a42b4d3c2bc49c8f7ce8', 'width': 640}, {'height': 691, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=960&crop=smart&auto=webp&s=14cfc0972b3f3d2cb0dc429caee0474dc3f78e8b', 'width': 960}, {'height': 777, 'url': 'https://preview.redd.it/vg99momf6qse1.png?width=1080&crop=smart&auto=webp&s=8b6701aa8f1ebaa61e141bbe5f4b12e5c84f8b4f', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://preview.redd.it/vg99momf6qse1.png?auto=webp&s=5ad3f19367e877e5e7122eab2e2c14d452028419', 'width': 1944}, 'variants': {}}]}
|
||
Real-time in-browser speech recognition with Nuxt and Transformers.js
| 75 |
Repo: [https://github.com/CodyBontecou/nuxt-transformersjs-realtime-transcription](https://github.com/CodyBontecou/nuxt-transformersjs-realtime-transcription)
| 2025-04-04T02:09:45 |
Bonteq
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr0vnb
| false | null |
t3_1jr0vnb
|
/r/LocalLLaMA/comments/1jr0vnb/realtime_inbrowser_speech_recognition_with_nuxt/
| false | false | 75 |
{'enabled': True, 'images': [{'id': 'mjQTxGfPwdAT6LJf7HFOHddRU_OjRAyUeGsbMByX-K4', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=108&crop=smart&format=png8&s=9c4641cf002ad417fc54c61c3241b9273d1301d3', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=216&crop=smart&format=png8&s=f78b5fb2af85627ea27a4f4d2eb210e680074720', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=320&crop=smart&format=png8&s=41eadfaa6142133b167e5f76622491061dfcfc19', 'width': 320}, {'height': 369, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=640&crop=smart&format=png8&s=160cf52765ac7aea77530923a7f68a91feca1976', 'width': 640}, {'height': 553, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=960&crop=smart&format=png8&s=2c198dde55adb9e054e7e4d75bff9798fcb638da', 'width': 960}, {'height': 622, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=1080&crop=smart&format=png8&s=7b9f1bf1151cf237cfb589103185908c194b9070', 'width': 1080}], 'source': {'height': 1054, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?format=png8&s=bf38f5a6902e4aff9521ac2afebe8acc821e26bc', 'width': 1828}, 'variants': {'gif': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=108&crop=smart&s=7868722601db966ae42744197bb660398a13c27c', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=216&crop=smart&s=85b12a6fa17b46e9ea29821696e6e3138601e4d6', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=320&crop=smart&s=014e7806525c23eddc00d747e0b4c7fe8df9451d', 'width': 320}, {'height': 369, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=640&crop=smart&s=8335f90a1fc2aea1630bab497acdd1723c558dba', 'width': 640}, {'height': 553, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=960&crop=smart&s=f7d922e3383d67f75c4cd8899721915e2a4ad7a1', 'width': 960}, {'height': 622, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=1080&crop=smart&s=81bbf0a563dd90b28dc1797c4d3a33d1baf45adc', 'width': 1080}], 'source': {'height': 1054, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?s=ba68d5a50cdea43b6a8bc43161ae7da8747c8627', 'width': 1828}}, 'mp4': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=108&format=mp4&s=5cb1b5a6af10cddea474ef957a339c697b5e982a', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=216&format=mp4&s=4f84879bc821218bc1e8df0671020bbf9523e06c', 'width': 216}, {'height': 184, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=320&format=mp4&s=969e373c0cd70f047674eaa8911ad3d27c3d175a', 'width': 320}, {'height': 369, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=640&format=mp4&s=b6288600914b30e8be4fb43800b48480a4604ff9', 'width': 640}, {'height': 553, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=960&format=mp4&s=228aad7087424ce5995b591cb10875aa8fa9e234', 'width': 960}, {'height': 622, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?width=1080&format=mp4&s=37ec216b8b1c2dda34d327d0a2177bdc282185f9', 'width': 1080}], 'source': {'height': 1054, 'url': 'https://preview.redd.it/8ekr6x428qse1.gif?format=mp4&s=fcf61d4d66942b763618d38ebd2af3d4fc435125', 'width': 1828}}}}]}
|
||
How do I minimise token use on the Deepseek API while giving it adequate context (it has no support for a system prompt)?
| 0 |
I have a large system prompt that I need to pass to the model for it to properly understand the project and give it adequate context. I don't want to do this with every call. What is the best way to do this?
I checked their docs and it doesn't seem like they have a way to specify a system prompt.
| 2025-04-04T02:27:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr180p/how_do_i_minimise_token_use_on_the_deepseek_api/
|
LorestForest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr180p
| false | null |
t3_1jr180p
|
/r/LocalLLaMA/comments/1jr180p/how_do_i_minimise_token_use_on_the_deepseek_api/
| false | false |
self
| 0 | null |
Exploring using LangGraph with local LLMs to create a News agent
| 0 |
Article showing how to create a simple news agent supported by LangGraph and local LLMs - using both Llama 3.2 and Qwen.
[https://www.teachmecoolstuff.com/viewarticle/using-lang-graph-with-local-llms](https://www.teachmecoolstuff.com/viewarticle/using-lang-graph-with-local-llms)
| 2025-04-04T02:30:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr1a8h/exploring_using_langgraph_with_local_llms_to/
|
funJS
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr1a8h
| false | null |
t3_1jr1a8h
|
/r/LocalLLaMA/comments/1jr1a8h/exploring_using_langgraph_with_local_llms_to/
| false | false |
self
| 0 | null |
What are the hardest LLM tasks to evaluate in your experience?
| 1 |
[removed]
| 2025-04-04T03:32:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr2g33/what_are_the_hardest_llm_tasks_to_evaluate_in/
|
ml_nerdd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr2g33
| false | null |
t3_1jr2g33
|
/r/LocalLLaMA/comments/1jr2g33/what_are_the_hardest_llm_tasks_to_evaluate_in/
| false | false |
self
| 1 | null |
OpenAI Free Prompt Engineering Tutorial Videos (zero to pro)
| 1 |
[removed]
| 2025-04-04T03:45:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr2osu/openai_free_prompt_engineering_tutorial_videos/
|
rentprompts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr2osu
| false | null |
t3_1jr2osu
|
/r/LocalLLaMA/comments/1jr2osu/openai_free_prompt_engineering_tutorial_videos/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'OgMnfG36PLPlv97BcFgmKpusi3qHHDCP5uU-OHTQKkQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jYsCW6uNuNRQRYSkeqW4R4cL-t4QUV7P9CZD6_y7Sic.jpg?width=108&crop=smart&auto=webp&s=b4631ba5e11683e8f15177ba482746e70b2dac9a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jYsCW6uNuNRQRYSkeqW4R4cL-t4QUV7P9CZD6_y7Sic.jpg?width=216&crop=smart&auto=webp&s=db0490a5ab4afe3cc17bf96f6370fe1cab9f0d65', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jYsCW6uNuNRQRYSkeqW4R4cL-t4QUV7P9CZD6_y7Sic.jpg?width=320&crop=smart&auto=webp&s=df67a73c4447bf3ba9e552f87875f98c89fa01ef', 'width': 320}], 'source': {'height': 359, 'url': 'https://external-preview.redd.it/jYsCW6uNuNRQRYSkeqW4R4cL-t4QUV7P9CZD6_y7Sic.jpg?auto=webp&s=db0f89513d5156f444a9366148ea675df00189d6', 'width': 638}, 'variants': {}}]}
|
Best LLM for language translations?
| 3 |
Specifically from French to English, open ones are preferred but closed ones are also fine.
| 2025-04-04T03:58:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr2wpr/best_llm_for_language_translations/
|
SimultaneousPing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr2wpr
| false | null |
t3_1jr2wpr
|
/r/LocalLLaMA/comments/1jr2wpr/best_llm_for_language_translations/
| false | false |
self
| 3 | null |
Mystery model on openrouter (quasar-alpha) is probably new OpenAI model
| 173 |
[https://eqbench.com/creative\_writing.html](https://eqbench.com/creative_writing.html)
Sample outputs: [https://eqbench.com/results/creative-writing-v3/openrouter\_\_quasar-alpha.html](https://eqbench.com/results/creative-writing-v3/openrouter__quasar-alpha.html)
| 2025-04-04T04:12:42 |
https://www.reddit.com/gallery/1jr35zl
|
_sqrkl
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr35zl
| false | null |
t3_1jr35zl
|
/r/LocalLLaMA/comments/1jr35zl/mystery_model_on_openrouter_quasaralpha_is/
| false | false | 173 | null |
|
Faster alternatives for open-webui?
| 2 |
Running models on open-webui is much, much slower than running the same models directly through ollama in the terminal. I did expect that, but I have a feeling it has something to do with open-webui having a ton of features. I really only need one feature: being able to store the previous conversations.
Are there any lighter UIs for running LLMs which are faster than open-webui but still have a history feature?
I know about the /save <name> command in ollama but it is not exactly the same.
| 2025-04-04T04:44:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr3pd0/faster_alternatives_for_openwebui/
|
Not-Apple
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr3pd0
| false | null |
t3_1jr3pd0
|
/r/LocalLLaMA/comments/1jr3pd0/faster_alternatives_for_openwebui/
| false | false |
self
| 2 | null |
Generating Q&A Pairs from RFC Documents for LLM Fine-Tuning
| 1 |
[removed]
| 2025-04-04T05:06:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr42uj/generating_qa_pairs_from_rfc_documents_for_llm/
|
TaleLocal1152
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr42uj
| false | null |
t3_1jr42uj
|
/r/LocalLLaMA/comments/1jr42uj/generating_qa_pairs_from_rfc_documents_for_llm/
| false | false |
self
| 1 | null |
how to get local Real AI chatbot in a single HTML file download?
| 0 |
[removed]
| 2025-04-04T05:07:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr43ft/how_to_get_local_real_ai_chatbot_in_a_single_html/
|
Gigadude4170
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr43ft
| false | null |
t3_1jr43ft
|
/r/LocalLLaMA/comments/1jr43ft/how_to_get_local_real_ai_chatbot_in_a_single_html/
| false | false |
self
| 0 | null |
Deepseek R2 coming probably this month, will crash the markets even more
| 1 |
[removed]
| 2025-04-04T05:09:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr44k5/deepseek_r2_coming_probably_this_month_will_crash/
|
Live-Environment-907
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr44k5
| false | null |
t3_1jr44k5
|
/r/LocalLLaMA/comments/1jr44k5/deepseek_r2_coming_probably_this_month_will_crash/
| false | false |
self
| 1 | null |
Can you help me solve something?
| 1 |
[removed]
| 2025-04-04T05:23:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr4chn/can_you_help_me_solve_something/
|
cr_cryptic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr4chn
| false | null |
t3_1jr4chn
|
/r/LocalLLaMA/comments/1jr4chn/can_you_help_me_solve_something/
| false | false |
self
| 1 | null |
We trained Gemma 3 -4b, a 2d VLM model to do 3d recognition task!
| 147 |
Hey everyone, it's me again, from Menlo Research (aka homebrew aka Jan)! We just released a new experiment: VoxRep – a novel approach that enables 2D Vision-Language Models (Gemma3-4b in this case) to understand and extract semantics from 3D voxel data!
In most previous works, VLMs demonstrated impressive abilities in understanding 2D visual inputs. However, comprehending 3D environments remains vital for intelligent systems in domains like robotics and autonomous navigation.
This begs the question, can a 2d VLM architecture comprehend 3d space "fully"?
To explore this, we conducted some experiments, resulting in VoxRep, which builds on just a VLM's capabilities (Gemma, in this case) with only some simple techniques for building the dataset.
* We slice the 3D voxel grid along the Z-axis into individual 2D slices, then arrange them in a 4×4 grid to create a single 896×896 composite image, just like a CT-scan image stack (see the sketch below the list)
* Testing the model on extracting "voxel semantics"—object identity, color, and location
The training data is demonstrated in the video!
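For anyone who wants to picture the slicing step, here is a minimal numpy sketch. The 16-slice count and 224×224 slice size are assumptions inferred from the 4×4 grid and 896×896 composite mentioned above:

```python
import numpy as np

def voxels_to_composite(grid: np.ndarray) -> np.ndarray:
    """Tile the 16 Z-slices of a (16, 224, 224) voxel grid into a 4x4
    composite image, the way a CT stack is laid out on one sheet."""
    assert grid.shape == (16, 224, 224)
    rows = [np.concatenate(list(grid[r * 4:(r + 1) * 4]), axis=1)  # one row of 4 slices
            for r in range(4)]
    return np.concatenate(rows, axis=0)  # -> (896, 896)

composite = voxels_to_composite(np.random.rand(16, 224, 224))
print(composite.shape)  # (896, 896)
```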
# Results:
* Color recognition accuracy \~ 80%
* Object classification accuracy \~ 60%
* Average distance to labelled object center \~ from 26.05 voxels to just 9.17 voxels
This result is based on only **20,000 samples**, which is in general a pretty small dataset. This suggests there is some extrapolation in the Gemma 3 4B model (this is purely speculation), because the loss converged well regardless of the limited data.
The model shows some promising results, suggesting that if we pursue this path further, we can probably reuse a lot of pre-trained 2D VLM models for 3D tasks!
# Appreciation:
A huge thank you to Google for their Gemma 3 VLM and to Princeton for their incredible ModelNet40 dataset that made our research possible!
# Links:
Paper: [https://arxiv.org/abs/2503.21214](https://arxiv.org/abs/2503.21214)
Model: [https://huggingface.co/Menlo/voxel-representation-gemma3-4b](https://huggingface.co/Menlo/voxel-representation-gemma3-4b)
Github: [https://github.com/menloresearch/voxel-representation](https://github.com/menloresearch/voxel-representation)
| 2025-04-04T05:42:45 |
https://v.redd.it/kqtp4eg79rse1
|
Kooky-Somewhere-2883
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr4nfh
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kqtp4eg79rse1/DASHPlaylist.mpd?a=1746337381%2CZjVmODZlODI4ODU4MzgxMDk5NzdkZmYwOTFlM2RhMDgyMTQ2YmQ5NjZlMThmMGZkYzBhNjU0NTYxMjJkODA4MQ%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/kqtp4eg79rse1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/kqtp4eg79rse1/HLSPlaylist.m3u8?a=1746337381%2CZjk5NTNlNWJhZDRiNTBkOTM3MDZlNzE4MmQ3NWI3MWIwZmJhNWZmOTczZmZiYWZlNTQ4MWM3YzgwMTgzOWM0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kqtp4eg79rse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1jr4nfh
|
/r/LocalLLaMA/comments/1jr4nfh/we_trained_gemma_3_4b_a_2d_vlm_model_to_do_3d/
| false | false | 147 |
{'enabled': False, 'images': [{'id': 'MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9.png?width=108&crop=smart&format=pjpg&auto=webp&s=8920e96820a865975b743a06a43cc283e46cc267', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9.png?width=216&crop=smart&format=pjpg&auto=webp&s=1c12f7afdadfc25317ff93e5eedd4aabd1f00abd', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9.png?width=320&crop=smart&format=pjpg&auto=webp&s=40e7bb2e4ca36f16758da8659a03ec8364e6d2f3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9.png?width=640&crop=smart&format=pjpg&auto=webp&s=ca2a1f6b213763d61b7ac4728fcc59cb0ac402dd', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MHprOHIyMzBhcnNlMUXrfMu-M6tyJvpBgz4xEXGHlpTCS4mKiqYnH6frp6Y9.png?format=pjpg&auto=webp&s=c8e6a7f33c65cff84093f8b47f700b10642e28f7', 'width': 720}, 'variants': {}}]}
|
|
Samsung is working on a large vision language model
| 83 | 2025-04-04T05:49:43 |
WordyBug
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr4rbi
| false | null |
t3_1jr4rbi
|
/r/LocalLLaMA/comments/1jr4rbi/samsung_is_working_on_a_large_vision_language/
| false | false | 83 |
{'enabled': True, 'images': [{'id': 'f2GorFT6Whkq50e2OgeACKuckJQIBoxOnsTGY-dN8dU', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=108&crop=smart&auto=webp&s=bd360a6ab9ad6bb09e0351a30faa4eb7aecf8b18', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=216&crop=smart&auto=webp&s=0dd4bb6ed4cbf3ef55d4f86ca12bb6ed650dbb6c', 'width': 216}, {'height': 345, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=320&crop=smart&auto=webp&s=a7734f6ca849fc47ba49fdf5da9070a81dae0bdc', 'width': 320}, {'height': 690, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=640&crop=smart&auto=webp&s=758a25e039afe10d74be40c317403382200cbb0c', 'width': 640}, {'height': 1035, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=960&crop=smart&auto=webp&s=8df3bec2bd6a0347f947218c18905b83eff3174b', 'width': 960}, {'height': 1165, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?width=1080&crop=smart&auto=webp&s=6c7b3bf3f98ea4d1931ed39c70a53e43536ccdbf', 'width': 1080}], 'source': {'height': 1776, 'url': 'https://preview.redd.it/d46j28ufbrse1.png?auto=webp&s=ac80a033241c13971934e8bbc2e1793f5d27cb6f', 'width': 1646}, 'variants': {}}]}
|
|||
Did Google deceive us?
| 0 |
Recently, Google posted quantized versions (QAT) of the Gemma models that would supposedly bring 3x greater performance (only God knows relative to what), but, as it seemed to me, it was just an attempt to capture data through Hugging Face. I think this because, at least for me, there was no improvement. Worse: their Q4 is even bigger and slower than the others available. Really disappointing.
| 2025-04-04T06:08:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr51fb/did_google_deceive_us/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr51fb
| false | null |
t3_1jr51fb
|
/r/LocalLLaMA/comments/1jr51fb/did_google_deceive_us/
| false | false |
self
| 0 | null |
What are your thoughts on diffusion-type LLMs?🤔
| 4 |
Yesterday, I found out about Mercury Coder by Inception Labs.
| 2025-04-04T06:20:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr57z6/what_are_your_thoughts_on_diffusiontype_llms/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr57z6
| false | null |
t3_1jr57z6
|
/r/LocalLLaMA/comments/1jr57z6/what_are_your_thoughts_on_diffusiontype_llms/
| false | false |
self
| 4 | null |
Anyone wants to collaborate on new open-source TTS?
| 46 |
Hello community! We’re currently working on (very WIP) a groundbreaking TTS model with a 48kHz sampling rate and stereo speech! Based on VITS architecture! Very fast training (literally hours) and real-time inference! If you’re interested, let’s discuss the code more, not the weights!
Link (just in case): https://github.com/yukiarimo/hanasu
| 2025-04-04T06:33:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr5f2h/anyone_wants_to_collaborate_on_new_opensource_tts/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5f2h
| false | null |
t3_1jr5f2h
|
/r/LocalLLaMA/comments/1jr5f2h/anyone_wants_to_collaborate_on_new_opensource_tts/
| false | false |
self
| 46 |
{'enabled': False, 'images': [{'id': '7P7hbG8OhFgzMH84lOQrnrKLvY_368xlzm9WOBUL1ZU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=108&crop=smart&auto=webp&s=74a4c4886a15fdfab71b625fc59f237a2a9ed7ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=216&crop=smart&auto=webp&s=49b77cd862c31faa79276f3976e4b819e865bdaa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=320&crop=smart&auto=webp&s=b506e6edd01fe5424f48f202be104e873e512c49', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=640&crop=smart&auto=webp&s=8c34ae9a0df63545cafa3e1d14003751c76e18ed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=960&crop=smart&auto=webp&s=a476c98b9d01038d5549fd0ee9402e037ec33dbe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?width=1080&crop=smart&auto=webp&s=e5cfae0da561779638b7ed455aa41caa673c2115', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CIyjPa2vGyoU7PLENqSnP9HIxzW3-gJ8l0K6ye4Wrm0.jpg?auto=webp&s=00af68f333e10b142cc066e3bce02cf6f61a5249', 'width': 1200}, 'variants': {}}]}
|
The Quasar Alpha "stealth model" on OpenRouter is pretty good at agentic coding
| 1 |
Just dropped on OpenRouter: [https://openrouter.ai/openrouter/quasar-alpha](https://openrouter.ai/openrouter/quasar-alpha)
It's free to use and not (yet) rate limited, so definitely give it a shot in Cline, Aider, or any other agentic dev tools while you can!
It has a 1 million token context window and generates at over 100 tokens per second which is impressive, the instruction following acumen is strong and it actually fares well at paying attention to details in long conversations.
If the rumors are true about it potentially being another open source model, then I'm definitely excited 🤞
The tool I'm using here is Local Operator which is a free and open source generic agents platform that I maintain: [https://github.com/damianvtran/local-operator](https://github.com/damianvtran/local-operator)
| 2025-04-04T06:50:06 |
https://v.redd.it/hourr1vykrse1
|
VastOwn1733
|
/r/LocalLLaMA/comments/1jr5noh/the_quasar_alpha_stealth_model_on_openrouter_is/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5noh
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hourr1vykrse1/DASHPlaylist.mpd?a=1746471018%2CYjg2NzQ2YWFkYTk1OWRjYzlkNjc5ZjQ1ZTBmMGMxMDQ3YTdhYWNlMDg1ZGYzNDE1ZjEzMDkyYmU1NTgyMmE5NA%3D%3D&v=1&f=sd', 'duration': 287, 'fallback_url': 'https://v.redd.it/hourr1vykrse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/hourr1vykrse1/HLSPlaylist.m3u8?a=1746471018%2CNDE3NjJlYzVlNTA3NDg1OWY5MDQ4MjVmNmQxNGJkYTgxODI1OWMzYWI1NzRlOWZjMDI1MGU1YmNlZGI1MDAxYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hourr1vykrse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jr5noh
|
/r/LocalLLaMA/comments/1jr5noh/the_quasar_alpha_stealth_model_on_openrouter_is/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=108&crop=smart&format=pjpg&auto=webp&s=26934a693f0a8e4191b209c7d42ef084231bc1ed', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=216&crop=smart&format=pjpg&auto=webp&s=8e2d8da1d3932c668877643ee99800787aa5fc54', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=320&crop=smart&format=pjpg&auto=webp&s=be6af55fbe57d0377b5f0c91356fcb4cf8393e30', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=640&crop=smart&format=pjpg&auto=webp&s=6306b6c796b826dbc4ee238d2045b2ab0c4d8c7b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=960&crop=smart&format=pjpg&auto=webp&s=92be72981b473e0dbcb8d7368258d8184b689cd2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d9c974b69934b86a1b5f9988882677a4a77fead2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aW80YzEydnlrcnNlMf3vTP2TOSbI4jR9BriVG2H3euQk20NG2NDlQXcIKL0N.png?format=pjpg&auto=webp&s=8673c2d147d686f5bf5cd5b321fe603cfcbf7f79', 'width': 1920}, 'variants': {}}]}
|
|
Which Gemma3 Model?
| 2 |
Hi,
I've built up an agentic RAG system whose performance I'm happy with, using the 12B Q4_K_M, 16k-token variant of the Gemma 3 model on my 4060 TI 8GB at home.
I am going to test this system at my workplace, where I have been given access to a T4 16GB.
But as far as I have read, running a Q4 model on the Turing architecture is either going to fail or run very inefficiently - is this true?
If so, do you have any suggestions on how to move forward? I would like to keep at least the model size and token limit.
Thanks in advance!
| 2025-04-04T06:56:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr5qpy/which_gemma3_model/
|
Caputperson
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5qpy
| false | null |
t3_1jr5qpy
|
/r/LocalLLaMA/comments/1jr5qpy/which_gemma3_model/
| false | false |
self
| 2 | null |
Interview transcriptions -> Chat bot?
| 1 |
Hey,
I'm doing research at work and I have some 10 hours of recorded interviews. Some of the interviews I have transcribed to text documents. I've dabbled with ChatGPT, pasting interviews and asking it to summarize or extract key findings. It kinda works, but it often misses important things, so I can't rely on it. Also, individual interviews don't capture high-level patterns.
I still like the idea of using LLMs. I imagine a small chat-bot that is an expert on my documents.
\* Is there a way to package all transcriptions into a chat bot so that I can ask questions?
\* Local LLMs or some commercial tool?
\* RAG/finetuning/fit all interviews in context memory?
Please share your experiences and thoughts.
| 2025-04-04T06:58:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr5s1w/interview_transcriptions_chat_bot/
|
ExplorerWhole5697
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5s1w
| false | null |
t3_1jr5s1w
|
/r/LocalLLaMA/comments/1jr5s1w/interview_transcriptions_chat_bot/
| false | false |
self
| 1 | null |
Optimal hardware resources
| 1 |
[removed]
| 2025-04-04T06:58:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr5s3y/optimal_hardware_resources/
|
BroQuant
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5s3y
| false | null |
t3_1jr5s3y
|
/r/LocalLLaMA/comments/1jr5s3y/optimal_hardware_resources/
| false | false |
self
| 1 | null |
Smallest LLM model that is not instruction tuned but can be used for reasoning.
| 1 |
[removed]
| 2025-04-04T07:13:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr5zof/smallest_llm_model_that_is_not_instruction_tuned/
|
Valuable_Beginning92
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr5zof
| false | null |
t3_1jr5zof
|
/r/LocalLLaMA/comments/1jr5zof/smallest_llm_model_that_is_not_instruction_tuned/
| false | false |
self
| 1 | null |
Lumina-mGPT 2.0: Stand-alone Autoregressive Image Modeling | Completely open source under Apache 2.0
| 573 |
[https://github.com/Alpha-VLLM/Lumina-mGPT-2.0](https://github.com/Alpha-VLLM/Lumina-mGPT-2.0)
[https://huggingface.co/Alpha-VLLM/Lumina-mGPT-2.0](https://huggingface.co/Alpha-VLLM/Lumina-mGPT-2.0)
[https://huggingface.co/spaces/Alpha-VLLM/Lumina-Image-2.0](https://huggingface.co/spaces/Alpha-VLLM/Lumina-Image-2.0)
| 2025-04-04T07:39:20 |
https://v.redd.it/jrf0voururse1
|
umarmnaq
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr6c8e
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jrf0voururse1/DASHPlaylist.mpd?a=1746344377%2COWEwZTk1YjU5N2Q0ZWU4MWUwNTFhNTMzMDkyMmUxZjhiN2EzMDM3Mjc5YjZhNzNlNDgwY2Q1NWJkMmNjMjk0MA%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/jrf0voururse1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/jrf0voururse1/HLSPlaylist.m3u8?a=1746344377%2COTUwNmFlNDQ4MWI0ZGUzMTAwMjEzMzQ1NjljNmI2NmNkZmU5ZDg0NTIzMTY4NjVlZDQyNTMyOGM5MGFjOTgwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jrf0voururse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1226}}
|
t3_1jr6c8e
|
/r/LocalLLaMA/comments/1jr6c8e/luminamgpt_20_standalone_autoregressive_image/
| false | false | 573 |
{'enabled': False, 'images': [{'id': 'djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f3b96ec2eb158b166d5b6f05f893e3afca69832', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=216&crop=smart&format=pjpg&auto=webp&s=2ffbef62182f17333b28606d74c1697ff4b0c71a', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=320&crop=smart&format=pjpg&auto=webp&s=5eeb9176bef30d148a95ca2f1b90c611009f5179', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=640&crop=smart&format=pjpg&auto=webp&s=841836395b27fcde20c7c211a053514afb17ac50', 'width': 640}, {'height': 563, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=960&crop=smart&format=pjpg&auto=webp&s=098cabc7a633bb89b6464dfbc30bf907574dd97c', 'width': 960}, {'height': 634, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fdf194e6fd0da920e5890b4e213ecaddcc2d2879', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/djIzeHlvdXJ1cnNlMewzcMfvF_oVr57d3HNacgT7P88RyE9Zm1kIukBjIS8J.png?format=pjpg&auto=webp&s=60ffdc75c3d70c009c0f0c1b276b6629ff2d36d1', 'width': 1226}, 'variants': {}}]}
|
|
I think there will be a big demand of "data entry" workforce
| 0 |
I personally need to hire some manual hands who can make me a proper dataset, since it's not always possible to do it with code, as there are a lot of nuances. So I think people who can learn how to structure datasets for training will be in good demand.
| 2025-04-04T07:55:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr6kdf/i_think_there_will_be_a_big_demand_of_data_entry/
|
dadiamma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr6kdf
| false | null |
t3_1jr6kdf
|
/r/LocalLLaMA/comments/1jr6kdf/i_think_there_will_be_a_big_demand_of_data_entry/
| false | false |
self
| 0 | null |
Wattage efficiency for the 5090
| 9 |
I ran benchmarks at different power limits for the 5090.
Llama.cpp is running the new QAT Gemma3-27B model (at q4) at 16K context
Exllamav2 is using tabbyapi and Qwen2.5-7B-instruct-1M-exl2-8bpw at 32K context
They are different models and quants so this is not a comparison between llama.cpp and exllama, only between themselves.
The lowest power limit nvidia-smi allows for this card is 400W, and the max is 600W (the default).
Some observations: power limiting clearly affects pp more, and pp is what spikes the wattage the most.
For tg, most of the time it doesn't even go up to 600W when allowed; it rarely passes 450W, which I guess is why there is so little difference (a quick efficiency calc follows the tables).
**llama.cpp**

|watt|pp|tg|
|:-|:-|:-|
|400|3110.63|50.36|
|450|3414.68|51.27|
|500|3687|51.44|
|550|3932.41|51.48|
|600|4127.32|51.56|

**exllamav2**

|watt|pp|tg|
|:-|:-|:-|
|400|10425.72|104.13|
|450|11545.92|102.96|
|500|12376.37|105.71|
|550|13180.73|105.94|
|600|13738.99|107.87|
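A trivial way to read the efficiency angle out of these numbers, using the llama.cpp pp column above:

```python
# pp throughput per watt, taken from the llama.cpp rows above
rows = [(400, 3110.63), (450, 3414.68), (500, 3687.00), (550, 3932.41), (600, 4127.32)]
for watt, pp in rows:
    print(f"{watt} W: {pp / watt:.2f} pp tokens/s per W")
# ~7.78 t/s/W at 400 W vs ~6.88 t/s/W at 600 W: lower power limits are more
# efficient, you just trade away some absolute pp speed.
```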
| 2025-04-04T08:22:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr6wu2/wattage_efficiency_for_the_5090/
|
bullerwins
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr6wu2
| false | null |
t3_1jr6wu2
|
/r/LocalLLaMA/comments/1jr6wu2/wattage_efficiency_for_the_5090/
| false | false |
self
| 9 | null |
What model do you recommend for data processing?
| 0 |
I need to process a 10k-row database and categorize the description in each row. I want to use an LLM to classify each row by looping through the database and processing it. The category set is provided as input, so the LLM only reads the content of each row and decides which category to output. What could be the best model for this kind of data processing?
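Whatever model ends up being used, the plumbing is simple. A minimal sketch against an OpenAI-compatible local server; the endpoint, model name, categories, and file layout are all assumptions to adapt:

```python
import csv
from openai import OpenAI

# Assumed: a local OpenAI-compatible server (e.g. on port 11434) and a small instruct model
client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")
CATEGORIES = ["billing", "support", "sales", "other"]  # hypothetical category set

def classify(description: str) -> str:
    resp = client.chat.completions.create(
        model="qwen2.5:7b-instruct",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Answer with exactly one of: {', '.join(CATEGORIES)}"},
            {"role": "user", "content": description},
        ],
        temperature=0,  # deterministic labels
    )
    return resp.choices[0].message.content.strip()

with open("rows.csv") as f:                     # assumed input file
    for row in csv.DictReader(f):
        print(row["id"], classify(row["description"]))
```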
| 2025-04-04T09:01:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr7fq9/what_model_do_you_recommend_for_data_processing/
|
GTHell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7fq9
| false | null |
t3_1jr7fq9
|
/r/LocalLLaMA/comments/1jr7fq9/what_model_do_you_recommend_for_data_processing/
| false | false |
self
| 0 | null |
Please recomend discord servers about llm
| 1 |
[removed]
| 2025-04-04T09:05:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr7hpn/please_recomend_discord_servers_about_llm/
|
Mysterious_Hearing14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7hpn
| false | null |
t3_1jr7hpn
|
/r/LocalLLaMA/comments/1jr7hpn/please_recomend_discord_servers_about_llm/
| false | false |
self
| 1 | null |
Llama and documents
| 0 |
Hi Guys,
I'm new with AI, and what I want to do is to get Llama to answer questions from specific documents in my field of work.
I have around 70k word documents, each having 5-8 pages of text.
What I want to achieve is:
When I or a colleague of mine asks llama, for example: "give me all the data about John Smith (client) where we successfully completed the tasks",
I want llama to list all the names of the files that include information about John Smith... let's say there are 17 of them, and 13 were successful, and to list me those 13.
Is anything like this even possible at this point?
Do I have too many documents?
Any suggestions on how to manage this?
Thank you for all the answers.
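For scale, 70k documents is fine for embedding-based retrieval. A rough sketch of the retrieval half only (model name and data layout are assumptions; the "successful" filter would still need an LLM or metadata pass on top of the retrieved files):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, fast embedding model
docs = {"file1.docx": "full document text here",
        "file2.docx": "full document text here"}  # 70k entries in practice

names = list(docs)
emb = model.encode([docs[n] for n in names], normalize_embeddings=True)

def search(query: str, top_k: int = 20) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = emb @ q                      # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [names[i] for i in best]

print(search("John Smith successfully completed tasks"))
```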
| 2025-04-04T09:09:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr7j9n/llama_and_documents/
|
danedral
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7j9n
| false | null |
t3_1jr7j9n
|
/r/LocalLLaMA/comments/1jr7j9n/llama_and_documents/
| false | false |
self
| 0 | null |
Wow!! Cloudflare starts to provide hosting for MCP Servers
| 11 |
Cloudflare now provides hosting for MCP servers.
Need MORE MCP SERVERS? HERE IS A LIST FOR YOU GUYS: https://github.com/MobinX/awesome-mcp-list/tree/main
| 2025-04-04T09:13:47 |
https://www.infoq.com/news/2025/04/cloudflare-remote-mcp-servers/
|
Different-Olive-8745
|
infoq.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7ljh
| false | null |
t3_1jr7ljh
|
/r/LocalLLaMA/comments/1jr7ljh/wow_cloudflare_starts_to_provide_hosting_for_mcp/
| false | false | 11 |
{'enabled': False, 'images': [{'id': 'MC1rAj19xxLvubUNSFLOmC-jvr3-5bns0nudaayUBSo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=108&crop=smart&auto=webp&s=9f7b3abd9121f8e28cc58ec28614d51711050cd4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=216&crop=smart&auto=webp&s=5e2d7f2a86c26116ec3e73b2e377a9fb6831aee7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=320&crop=smart&auto=webp&s=fb32ad929409e8f44302b3d3db0c050124470e75', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=640&crop=smart&auto=webp&s=7383c3e17364a83344967e65a4e97924b9b28076', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=960&crop=smart&auto=webp&s=39b817dc96846535fc4b92ed4a51a08d81c1d827', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?width=1080&crop=smart&auto=webp&s=6b8d57f7aae503c4da891e22d6376fbdc64c8fbd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/aSe81vpAstY8O3ojbrQ8SQijuRSxkXwgag2mTxMYYKM.jpg?auto=webp&s=cc55d815d24e6ebfe0b2fdae418a37d5d5c5b3d4', 'width': 1200}, 'variants': {}}]}
|
|
AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction
| 57 | 2025-04-04T09:18:54 |
https://github.com/TencentARC/AnimeGamer
|
Icy-Corgi4757
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7o1v
| false | null |
t3_1jr7o1v
|
/r/LocalLLaMA/comments/1jr7o1v/animegamer_infinite_anime_life_simulation_with/
| false | false | 57 |
{'enabled': False, 'images': [{'id': 'Jkudn8f2nRUAr8Pwt8V6uAW_jQTWnm9uJ6Cdj4VQQ6o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=108&crop=smart&auto=webp&s=c6e8000101d2259e6c714a3e42dd67238cc7053f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=216&crop=smart&auto=webp&s=a1610311783bec8f5418c4ae766943f128a36f43', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=320&crop=smart&auto=webp&s=1db7ad7c6384231dba4baa29cc863218d25226b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=640&crop=smart&auto=webp&s=ac8cbffea6739e5098cfdd04414cd9e051302d24', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=960&crop=smart&auto=webp&s=9b07f6cff3e64ea23bb29c4e48c71964bc648f91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?width=1080&crop=smart&auto=webp&s=28d00e7eb9a97130ba1ef8893590092a7bdfb97f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xWFKiluMMwERAv1-k-74qSop9in2RY5xOINzk_h_shc.jpg?auto=webp&s=f60143303281b586d8942b8ed1310703ad613b0f', 'width': 1200}, 'variants': {}}]}
|
||
Papers/blogs for Text Diffusion, Advantages over LLMs
| 2 |
Hi all,
Can you recommend Papers/Blogs for text diffusion?
I heard some good things about it on twitter, wondering if anyone has a take on accuracy/speed/training costs (tweet said it was low cost to train)
I want to try running some local text diffusion models and maybe try to train them
Thanks!
| 2025-04-04T09:38:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr7xjc/papersblogs_for_text_diffusion_advantages_over/
|
nirmalonreddit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7xjc
| false | null |
t3_1jr7xjc
|
/r/LocalLLaMA/comments/1jr7xjc/papersblogs_for_text_diffusion_advantages_over/
| false | false |
self
| 2 | null |
New model "24_karat_gold" on lmarena, looking good so far
| 9 |
Anyone else got that model on lmarena? At first glance, it looks really promising. I wonder which one it is, maybe llama4?
| 2025-04-04T09:39:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr7y5l/new_model_24_karat_gold_on_lmarena_looking_good/
|
shroddy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr7y5l
| false | null |
t3_1jr7y5l
|
/r/LocalLLaMA/comments/1jr7y5l/new_model_24_karat_gold_on_lmarena_looking_good/
| false | false |
self
| 9 | null |
"Function-calling API" is really looking stupid now
| 0 |
I can appreciate all the populism of /r/LocalLLaMa, flops to the people and all, but honestly it would seem that even if "the people" had the compute, they wouldn't know what to do with it. I think it's more important that smaller businesses are able to compete. And yeah, we can make fun of "new cloud" people all day, H100's for $2/hr and all that jazz, but honestly I don't see them as an improvement.
Allow me to explain.
So, now it's kind of become apparent that "function calling" was a mistake. The paper from last year [Executable Code Actions Elicit Better LLM Agents](https://arxiv.org/abs/2402.01030) was well-received, but I think most implications flew over everyone's head. So they have been training models to write code, and they're really getting good at it, right? And now we have MCP, it's getting traction so OpenAPI specs are here to stay right?
Wrong! MCP is just a wire protocol, and it doesn't coerce your runtime in any way. You can totally transpile MCP to Pydantic, and even a step further: Pydantic to the corresponding grammar. The truth is nobody wanted to build a sandbox, and why would you, except well, if you do build one, then function-calling as a separate thing really starts to look stupid!
The point is the NVMe (over fabrics) revolution happened, and the hyperscalers have totally failed to trickle it down to customers. So what, then DeepSeek comes along with [3FS](https://github.com/deepseek-ai/3FS) what's that all about? I don't think anybody here understood what it was really for.
Well, if your tool-use is just a python interpreter, right, chances are you'll be spending more and more time doing that, you know? Like it always has been, waiting for network, etc. So what's happening to your K/V cache in the meantime?
**Take a step back for a moment, and de-couple agents from chats completely.**
You want longer-running jobs, you will offload: to RAM, you will offload to disk, right? So there's nvidia and mellanox covering that market, right? But it's expensive. I don't want to pay that. Guess what, there's Corundum, FPGA-based open hardware NIC with shit-ton of queues, TDMA controls. It gets you RDMA, right?
Tenstorrent next-gen looks really promising, but work smart not hard right? Think of it like scheduling problem. We still get MCP, we're just no longer sampling for JSON, but sampling for Python and streaming it straight to sandbox; the model doesn't even need to complete the tool code, it already errors out you go straight back and have it fix it, re-write the last bit of context, and continue like it never happened. Then if the tool-code is like `sleep(1 hour)` you just move the K/V cache to disk via something clever like Corundum and get on the next thing. Like context switching, but for agents. But with a caveat that you can actually afford it, right?
I'm exaggerating of course but isn't it scheduling problem basically at the end of the day? So you forget chats exists, you forget MCP exists, it's just context switching and during reasoning-time you maybe even have non-blocking calls, right? So you have just invented prefetch which is useful for DeepResearch or whatever, etc.
Tenstorrent looks really strong here, am I wrong? But **the real value for money** will come from FPGA NIC people it seems who would've guessed right? And it doesn't matter K/V cache at all, you just batch it, and it's all NVMe's down right? No need to have crontabs, kafkas, no nothing just move memory around it could run for days.
Function-calling apis look really fucking stupid now, innit?
| 2025-04-04T09:59:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr87rv/functioncalling_api_is_really_looking_stupid_now/
|
tucnak
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr87rv
| false | null |
t3_1jr87rv
|
/r/LocalLLaMA/comments/1jr87rv/functioncalling_api_is_really_looking_stupid_now/
| false | false |
self
| 0 | null |
Finetune a Model to copy Style
| 2 |
How can I finetune an LLM to write in a specific style? I have a huge unstructured text file of all the blog posts I wrote. How can I train, for example, Llama 3.2 3B to write in my style, with the same perplexity, etc.? I would like to use LLaMA-Factory, but I am open to other options. Can someone please help or guide me? How does the dataset need to look, which chat template, etc.?
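One plausible shape for the data, as a sketch: plain text-completion samples, one blog post per JSON line, with no chat template at all, since style transfer works as continued pretraining on raw text (the file name and post separator are assumptions):

```python
import json

# Assumed: posts are separated by "\n\n---\n\n" in one big dump file
posts = open("all_blogposts.txt").read().split("\n\n---\n\n")

with open("style_dataset.jsonl", "w") as f:
    for post in posts:
        f.write(json.dumps({"text": post.strip()}) + "\n")

# Then fine-tune the base (not instruct) model on the raw "text" field with a
# completion objective; a chat template only matters for instruction data.
```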
| 2025-04-04T10:00:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr8874/finetune_a_model_to_copy_style/
|
Famous-Appointment-8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8874
| false | null |
t3_1jr8874
|
/r/LocalLLaMA/comments/1jr8874/finetune_a_model_to_copy_style/
| false | false |
self
| 2 | null |
Gemma 3 qat
| 5 |
Yesterday I compared the Gemma 3 12B QAT from Google with the "regular" Q4 from Ollama's site, on CPU only. Man, man. While the Q4 on CPU only is really doable, the QAT is a lot slower, has no advantages in terms of memory consumption, and the file is almost 1 GB larger. Soon to try it on the 3090, but as far as CPU-only is concerned, it is a no-no.
| 2025-04-04T10:02:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr89mc/gemma_3_qat/
|
Illustrious-Dot-6888
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr89mc
| false | null |
t3_1jr89mc
|
/r/LocalLLaMA/comments/1jr89mc/gemma_3_qat/
| false | false |
self
| 5 | null |
4x3090 vs 3x5090 vs 6000 Pro Blackwell output tok/sec?
| 5 |
What do you guys think 4x RTX 3090, 3x RTX 5090, and 1x RTX 6000 Pro Blackwell would produce in terms of output tokens/sec with llama3.3 70B in 4-bit quantization? I think 4x 3090 should be around 50 tokens/s, but I'm not sure how the other cards would perform. Would the 5090 be about four times faster (200 tok/s) and the Blackwell around 100 tok/s? What do you think?
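A rough back-of-envelope, assuming single-stream generation is memory-bandwidth-bound and ignoring tensor-parallel scaling, compute limits, and overlap (the bandwidth and size figures are approximate assumptions):

```python
# tok/s ceiling ≈ memory bandwidth / bytes read per token (≈ 4-bit model size)
model_gb = 70 * 0.5 + 2   # ~37 GB for a 70B model at 4-bit, plus some overhead
for name, bw_gbs in [("RTX 3090", 936), ("RTX 5090", 1792), ("RTX 6000 Pro", 1792)]:
    print(f"{name}: ~{bw_gbs / model_gb:.0f} tok/s ceiling reading the full weights")
# Tensor parallel splits the weights across cards, so each card only reads its
# shard per token, which is how a 4x3090 setup can land well above a single
# 3090's ~25 tok/s ceiling.
```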
| 2025-04-04T10:33:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr8pyj/4x3090_vs_3x5090_vs_6000_pro_blackwell_output/
|
chikengunya
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8pyj
| false | null |
t3_1jr8pyj
|
/r/LocalLLaMA/comments/1jr8pyj/4x3090_vs_3x5090_vs_6000_pro_blackwell_output/
| false | false |
self
| 5 | null |
PSA: You can do QAT (quantization aware tuning) with Meta's torchtune.
| 98 |
I saw a bunch of people asking on the Gemma 3 QAT thread about how to do this yourself.
Torchtune (super flexible and easy to use fine-tuning library from Meta) actually has that built in (mostly thanks to existing support in torchao).
Here is their explanation of the technique as well as a tutorial on how to do it: [https://pytorch.org/torchtune/0.5/tutorials/qat\_finetune.html](https://pytorch.org/torchtune/0.5/tutorials/qat_finetune.html)
In general, I really recommend people give torchtune a try -- it's a strong competitor to the likes of axolotl and TRL with clean and flexible codebase and heavy focus on testing. There are still some important features missing, but usually they are easy to add yourself, or are on the way.
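For the curious, the core flow from that tutorial looks roughly like the sketch below. The import path and quantizer name follow the linked 0.5 docs but may shift between torchtune/torchao releases, so treat this as a sketch rather than gospel:

```python
import torch.nn as nn
# Re-exported from torchao; exact path may differ by version (assumption)
from torchtune.training.quantization import Int8DynActInt4WeightQATQuantizer

model = nn.Sequential(nn.Linear(4096, 4096))  # stand-in for your real LLM

quantizer = Int8DynActInt4WeightQATQuantizer()
model = quantizer.prepare(model)   # swaps linears for fake-quantized versions
# ... run your normal fine-tuning loop on `model` here; gradients flow
# through the fake-quant ops so the weights adapt to quantization ...
model = quantizer.convert(model)   # bakes the learned ranges into real int8/int4
```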
| 2025-04-04T10:39:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr8sw0/psa_you_can_do_qat_quantization_aware_tuning_with/
|
DreamGenAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8sw0
| false | null |
t3_1jr8sw0
|
/r/LocalLLaMA/comments/1jr8sw0/psa_you_can_do_qat_quantization_aware_tuning_with/
| false | false |
self
| 98 | null |
Deploying llama 70b using vllm in AWS EC2 in sydney region is secure?
| 1 |
[removed]
| 2025-04-04T10:42:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr8uqu/deploying_llama_70b_using_vllm_in_aws_ec2_in/
|
AkashBornLeader
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8uqu
| false | null |
t3_1jr8uqu
|
/r/LocalLLaMA/comments/1jr8uqu/deploying_llama_70b_using_vllm_in_aws_ec2_in/
| false | false |
self
| 1 | null |
Everybody, play the game.
| 1 |
the highlight
| 2025-04-04T10:45:32 |
https://v.redd.it/2wqri16dpsse1
|
Silent_Island_6126
|
/r/LocalLLaMA/comments/1jr8wej/everybody_play_the_game/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8wej
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2wqri16dpsse1/DASHPlaylist.mpd?a=1746485146%2CNmUxZmNkZDcxOWYzY2NkZWZlN2YzYzRlZWE4NDllMjcwYjZjZjE5YTQ1ZWViYWZhN2RlYWYwNTUzZWNkOWM2Mw%3D%3D&v=1&f=sd', 'duration': 133, 'fallback_url': 'https://v.redd.it/2wqri16dpsse1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/2wqri16dpsse1/HLSPlaylist.m3u8?a=1746485146%2CNjI4NmNiMjZmNjRmOGE0NDZhMTA3YTNmNDlmMWRiNWI1OTdmZGVlNWQ5N2Q2YjgxZjcxZTUzZjI4ZDQ4OGNkMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2wqri16dpsse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 960}}
|
t3_1jr8wej
|
/r/LocalLLaMA/comments/1jr8wej/everybody_play_the_game/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?width=108&crop=smart&format=pjpg&auto=webp&s=06b6a34e1e1b71b1e0b2a652d25566c11c3906b7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?width=216&crop=smart&format=pjpg&auto=webp&s=080bfde85053acd856c3c63c1b4218c4f8b77d16', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?width=320&crop=smart&format=pjpg&auto=webp&s=05c90c1cc854f0cbb1e4d1ebda092dbb7e716d44', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?width=640&crop=smart&format=pjpg&auto=webp&s=2747d9b7849307e649e9fe0f1be62a37186c180c', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?width=960&crop=smart&format=pjpg&auto=webp&s=1e53e24886e0b7a5b7b8063058379211f3c15db9', 'width': 960}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dms5a2gxNmRwc3NlMZ1BGXYsLRNs67nB_rDy-JDKZkDmlMR_vArL4BeL82Tq.png?format=pjpg&auto=webp&s=0b41516a9d5c0bf4eac86fbfe6d10481224932cf', 'width': 960}, 'variants': {}}]}
|
|
Prompt Ollama vs WebUI docker ollama
| 1 |
[removed]
| 2025-04-04T10:49:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr8yph/prompt_ollama_vs_webui_docker_ollama/
|
Desperate-Abrocoma81
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr8yph
| false | null |
t3_1jr8yph
|
/r/LocalLLaMA/comments/1jr8yph/prompt_ollama_vs_webui_docker_ollama/
| false | false |
self
| 1 | null |
I asked AI to redesign my childhood home as if it were built in the year 2100. Here’s what it came up with...
| 0 |
Growing up, my family home was a simple, cozy place filled with memories. It wasn’t anything fancy—just a modest house in a quiet neighborhood—but it meant the world to me.
Recently, I got curious: what would it look like if it were designed in the year 2100?
So, I used AI to reimagine it with futuristic architecture, advanced materials, and a touch of nostalgia. The results blew me away. I wanted to share the images with you all and see what you think.
I tried to keep some of the original elements while mixing in ideas like sustainable tech, smart surfaces, and floating structures. Would love to hear your thoughts:
What do you think architecture will look like in 2100?
| 2025-04-04T11:26:28 |
https://www.reddit.com/gallery/1jr9jrp
|
saw7o0
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr9jrp
| false | null |
t3_1jr9jrp
|
/r/LocalLLaMA/comments/1jr9jrp/i_asked_ai_to_redesign_my_childhood_home_as_if_it/
| false | false | 0 | null |
|
Thought Synthesis
| 8 |
Only a month ago, critics of R1 would point out that it only worked with toy math problems because it relied on rule-based verification to overcome the cold-start problem in training.
https://preview.redd.it/8rwzfpryxsse1.png?width=1013&format=png&auto=webp&s=75df551f3496916df2cdc58a99d52a8878c5e533
But the community quickly found ways to extend these capabilities into the image domain with data synthesis engines: [https://huggingface.co/spaces/open-r1/README/discussions/10](https://huggingface.co/spaces/open-r1/README/discussions/10)
The latest Gemini and Qwen models showcase these robust reasoning capabilities, which we can expect will become table stakes for other open-weight multimodal thinking models.
As we consider new frontiers for reasoning models, customization will be crucial for AI to optimally support YOUR decision processes.
And so I started thinking about how to synthesize the reasoning behind my own actions. How could you approximate that "inner monologue" which you won't find in the average sample from internet data?
After some experimenting, I came up with a simple template which helps to "synthesize thoughts" for training LLMs to use test-time compute with chain-of-thought reasoning.
I tried it out using podcast transcripts to generate reasoning traces grounded in a "mission" that can be context-specific, e.g. goals you might expect to achieve by participating in a tech pod.
I see parallels between Anthropic's alignment via "Constitutional AI" and how I'm aiming to align my AI to my own mission.
Here are a couple of examples of thought synthesis grounded in a mission, including basic motivations for this context like educating the listeners, building brand awareness, etc.
https://preview.redd.it/r71dwyd61tse1.png?width=1226&format=png&auto=webp&s=2da109ad245fd23806d9a05019d621b3646b0b7f
It's about inferring a point-by-point reasoning trace that's consistent with your goals and mission from unstructured data, so you can build better reasoning into your LLMs.
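The actual template isn't shown in the post, so purely as an illustration of the shape such a template could take (everything below is a hypothetical reconstruction):

```python
# Hypothetical: a mission-grounded prompt for synthesizing an "inner monologue"
MISSION = "Educate the listeners and build brand awareness for the podcast."

def thought_synthesis_prompt(transcript_chunk: str) -> str:
    return (
        f"Mission: {MISSION}\n\n"
        f"Transcript excerpt:\n{transcript_chunk}\n\n"
        "Infer the point-by-point inner monologue a speaker pursuing this mission "
        "would have had before saying the above. Answer as numbered thoughts."
    )

print(thought_synthesis_prompt("Host: So why does quantization matter locally?"))
```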
What are your thoughts on thought synthesis?
| 2025-04-04T11:41:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jr9sbj/thought_synthesis/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jr9sbj
| false | null |
t3_1jr9sbj
|
/r/LocalLLaMA/comments/1jr9sbj/thought_synthesis/
| false | false | 8 |
{'enabled': False, 'images': [{'id': 'fAo2Z_0f06ytQ1aOo9lzSJO2FQ3E_DSwwjmKZ-G4w_c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=108&crop=smart&auto=webp&s=c098cb44c54e9e487fecfb9067c49babdbe4c750', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=216&crop=smart&auto=webp&s=416242ab951b36e9103414181429c4a4fb1bc731', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=320&crop=smart&auto=webp&s=9eff57896f21cffb10eae09b523aa9a49a0fea84', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=640&crop=smart&auto=webp&s=f4d0a4d608ab262a3448db88c7e53628af03a16f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=960&crop=smart&auto=webp&s=9fbd192bb97aa4eec8ad2353847ac03ed00224cc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?width=1080&crop=smart&auto=webp&s=1b56bc48a15942adae53d1b4dfaa8a7f4c044a35', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1Eqo2UhUt_c0LJH_MU_2L9byOx_lPpaw9K2OT4JBG0Q.jpg?auto=webp&s=bdb50ae2a0a1df863845e153684475c494952bd9', 'width': 1200}, 'variants': {}}]}
|
|
I want to run the best local models intensively all day long for coding, writing, and general Q and A like researching things on Google for next 2-3 years. What hardware would you get at a <$2000, $5000, and $10,000 price point?
| 1 |
[removed]
| 2025-04-04T11:56:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jra1kw/i_want_to_run_the_best_local_models_intensively/
|
shonenewt5
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jra1kw
| false | null |
t3_1jra1kw
|
/r/LocalLLaMA/comments/1jra1kw/i_want_to_run_the_best_local_models_intensively/
| false | false |
self
| 1 | null |
I want to run the best local models intensively all day long for coding, writing, and general Q and A like researching things on Google for next 2-3 years. What hardware would you get at a <$2000, $5000, and $10,000 price point?
| 1 |
[removed]
| 2025-04-04T11:57:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jra2sa/i_want_to_run_the_best_local_models_intensively/
|
shonenewt2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jra2sa
| false | null |
t3_1jra2sa
|
/r/LocalLLaMA/comments/1jra2sa/i_want_to_run_the_best_local_models_intensively/
| false | false |
self
| 1 | null |
Confusion between forward and generate method for llama
| 1 |
[removed]
| 2025-04-04T12:13:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jradkg/confusion_between_forward_and_generate_method_for/
|
Fromdepths
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jradkg
| false | null |
t3_1jradkg
|
/r/LocalLLaMA/comments/1jradkg/confusion_between_forward_and_generate_method_for/
| false | false |
self
| 1 | null |
How long can significant improvements go on for?
| 0 |
At the rate models are being released, how long until the improvements start being incremental rather than revolutionary? It feels like that should start happening this year!
| 2025-04-04T12:55:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrb735/how_long_can_significant_improvements_go_on_for/
|
OnceMoreOntoTheBrie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrb735
| false | null |
t3_1jrb735
|
/r/LocalLLaMA/comments/1jrb735/how_long_can_significant_improvements_go_on_for/
| false | false |
self
| 0 | null |
Building reliable web agents using human demonstrations
| 1 |
[removed]
| 2025-04-04T12:59:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrb9dj/building_reliable_web_agents_using_human/
|
codescore
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrb9dj
| false | null |
t3_1jrb9dj
|
/r/LocalLLaMA/comments/1jrb9dj/building_reliable_web_agents_using_human/
| false | false |
self
| 1 | null |
Can anyone have GGUF file of this model?
| 1 |
[removed]
| 2025-04-04T13:07:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrbg0t/can_anyone_have_gguf_file_of_this_model/
|
enessedef
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrbg0t
| false | null |
t3_1jrbg0t
|
/r/LocalLLaMA/comments/1jrbg0t/can_anyone_have_gguf_file_of_this_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Qi66qRuBZN3hIk7rHFZ1iiEQuSLUQrjTXdDnXa7ergY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=108&crop=smart&auto=webp&s=d870dc4d3a3b76a9e98839a5c939efb495c1fa78', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=216&crop=smart&auto=webp&s=e6a73d684a8f3396a9cf74096d509f41f208eae8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=320&crop=smart&auto=webp&s=dc8a0c5db727d13cfe7d1b5985aa158e1480c52e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=640&crop=smart&auto=webp&s=2ff4c28ee68d883b1bc06906d495f9c37feaa2a7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=960&crop=smart&auto=webp&s=51848c8776c509dbe37b6b7c7be993275bcb0c06', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=1080&crop=smart&auto=webp&s=137045d29a4d2029eeb2643a8e498fe6cf7eb198', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?auto=webp&s=8ed4aba404f021f74fd1fe4e48ea9db135ade65b', 'width': 1200}, 'variants': {}}]}
|
Building web agents that learn from human demonstrations
| 1 |
[removed]
| 2025-04-04T13:14:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrbl44/building_web_agents_that_learn_from_human/
|
codescore
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrbl44
| false | null |
t3_1jrbl44
|
/r/LocalLLaMA/comments/1jrbl44/building_web_agents_that_learn_from_human/
| false | false |
self
| 1 | null |
LLM project ideas? (RAG, Vision, etc.)
| 3 |
Hey everyone,
I’m working on my final project for my AI course and want to explore a meaningful application of LLMs. I know there are already several similar posts but given how fast the field is evolving, I’d like to hear fresh ideas from the community, especially involving RAG, MCP, computer vision, voice(STT/TTS) or other emerging techniques.
For example, one idea I’ve considered is a multimodal assistant that processes both text and images, it could analyze medical scans and patient reports together to provide more informed diagnostics.
What other practical, or research-worthy applications do you think would make a great final project?
Could you share your ideas or projects for inspiration, please?
| 2025-04-04T13:22:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrbr4n/llm_project_ideas_rag_vision_etc/
|
frankh07
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrbr4n
| false | null |
t3_1jrbr4n
|
/r/LocalLLaMA/comments/1jrbr4n/llm_project_ideas_rag_vision_etc/
| false | false |
self
| 3 | null |
How to install TabbyAPI+Exllamav2 and vLLM on a 5090
| 20 |
As it took me a while to make it work I'm leaving the steps here:
**TabbyAPI+Exllamav2:**
`git clone https://github.com/theroyallab/tabbyAPI`
`cd tabbyAPI`
Setup the python venv
`python3 -m venv venv`
`source venv/bin/activate # source venv/bin/activate.fish for fish shell`
`python -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128`
`EXLLAMA_NOCOMPILE=1 pip install .`
In case you don't have this:
`sudo apt-get update`
`sudo apt-get install -y build-essential g++ gcc libstdc++-10-dev ninja-build`
**Installing flash attention:**
`git clone https://github.com/Dao-AILab/flash-attention`
`cd flash-attention`
`python -m pip install wheel`
`python setup.py install`
TabbyAPI is ready to run
**vLLM**
`git clone` [`https://github.com/vllm-project/vllm`](https://github.com/vllm-project/vllm)
`cd vllm`
`python3.12 -m venv venv`
`source venv/bin/activate # source venv/bin/activate.fish for fish shell`
**Install pytorch**
`python -m pip install --pre torch torchvision torchaudio --index-url` [https://download.pytorch.org/whl/nightly/cu128](https://download.pytorch.org/whl/nightly/cu128)
`python use_existing_torch.py`
`python -m pip install -r requirements/build.txt`
`python -m pip install -r requirements/common.txt`
`python -m pip install -e . --no-build-isolation`
vLLM should be ready
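To verify the vLLM build actually works, here is a minimal offline smoke test in Python; the model name is just a small example, substitute anything you have downloaded:

```python
# Offline generation test for vLLM - no server needed.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # example model, swap freely
params = SamplingParams(temperature=0.8, max_tokens=32)
outputs = llm.generate(["Say hello from a 5090:"], params)
print(outputs[0].outputs[0].text)
```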
| 2025-04-04T13:37:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrc2xk/how_to_install_tabbyapiexllamav2_and_vllm_on_a/
|
bullerwins
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrc2xk
| false | null |
t3_1jrc2xk
|
/r/LocalLLaMA/comments/1jrc2xk/how_to_install_tabbyapiexllamav2_and_vllm_on_a/
| false | false |
self
| 20 |
{'enabled': False, 'images': [{'id': '3D5w9U1kbCMeoDKVYuAz3CHgv2aXzZ12AJTsQKmRbdU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=108&crop=smart&auto=webp&s=8eb26d17540603d4854373ebd875b4187215ea60', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=216&crop=smart&auto=webp&s=876bdbfeca47b3fce8b02ba41507221cda0e5e96', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=320&crop=smart&auto=webp&s=1472987b81cc13aee9ed60d24734c9bdacf9b0d6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=640&crop=smart&auto=webp&s=34c7a28b9bf1ef43ee0a64862aed20dd389d7c6c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=960&crop=smart&auto=webp&s=54e3a5e818d4f3498de2748a4aeb9ac464ba5291', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?width=1080&crop=smart&auto=webp&s=95cbef5317c8b6d7d7be657a0a7d132a1fcbe878', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tFbolSj8gfk2x9dlje9veMkplj9feXKX88RkYHnIq7U.jpg?auto=webp&s=11d1835099912a16f6f4f26ac846718a2927f5d9', 'width': 1200}, 'variants': {}}]}
|
New in Causal Language Modelling [help]
| 1 |
[removed]
| 2025-04-04T13:48:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrcbur/new_in_causal_language_modelling_help/
|
RoPhysis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrcbur
| false | null |
t3_1jrcbur
|
/r/LocalLLaMA/comments/1jrcbur/new_in_causal_language_modelling_help/
| false | false |
self
| 1 | null |
New in Causal Language Modelling
| 0 |
Hey, everyone!
I hope you are all doing well.
I'm starting a project to introduce a set of slang terms and expressions to an open-source LLM (around 7~12B). The model should still be able to follow instructions afterwards, but use the learned context in its answers. To that end, I want to fine-tune the model on >10k reports that use these expressions in context; however, I'm new to this topic, so I need help finding the right approach. Any suggestions on which kind of model to use (e.g., base or instruct), and on the best way to approach this problem? I have three main ideas for the fine-tuning:
1 - Use Unsloth to fine-tune for a text-completion task.
2 - Use the Hugging Face Trainer for causal LM (a minimal sketch follows this list).
3 - Create question-answer pairs.
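Purely as an illustration of option 2, a minimal sketch with the Hugging Face Trainer; the model name and the one-report-per-line `reports.txt` layout are assumptions, not recommendations:

```python
# Minimal sketch of option 2: standard causal-LM fine-tuning with the HF Trainer.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.3"  # placeholder; any ~7-12B base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Assumed layout: one report per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "reports.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slang-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized["train"],
    # mlm=False gives the standard next-token (causal) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

On a base model, raw text completion like this is fine; on an instruct model you would wrap each report in the model's chat template instead.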
What do you think? Are there any other recommendations or advice?
Thanks in advance :)
| 2025-04-04T13:51:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrce6z/new_in_causal_language_modelling/
|
RoPhysis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrce6z
| false | null |
t3_1jrce6z
|
/r/LocalLLaMA/comments/1jrce6z/new_in_causal_language_modelling/
| false | false |
self
| 0 | null |
Chinese response bug in tokenizer suggests Quasar-Alpha may be from OpenAI
| 319 |
After testing the recently released quasar-alpha model on OpenRouter, I discovered that when given this specific Chinese prompt:
''' 给主人留下些什么吧 这句话翻译成英文 '''
(the phrase "给主人留下些什么吧", meaning "Leave something for the master", followed by the instruction "translate this sentence into English")
the model's response is completely unrelated to the question.
[quasar-alpha's answer](https://preview.redd.it/wiq7fg3qttse1.png?width=2384&format=png&auto=webp&s=6368c81d31fab66e3a998d26b96a49b2a556e3b2)
GPT-4o had the same issue when it was released, because in the updated o200k_base tokenizer, the phrase "给主人留下些什么吧" happens to be a single token with ID 177431.
[GPT-4o's answer](https://preview.redd.it/blb4buzxttse1.png?width=2546&format=png&auto=webp&s=e8dc289dc1b2968f92620aa2137e03f3a83bf624)
The fact that this new model exhibits the same problem increases suspicion that this secret model indeed comes from OpenAI, and they still haven't fixed this Chinese token bug.
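For anyone who wants to check the tokenizer half of this locally, a quick tiktoken snippet; the single-token result is the post's claim, the code just shows how to test it:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # the GPT-4o tokenizer
ids = enc.encode("给主人留下些什么吧")
print(ids)              # expected per the claim above: a single id, [177431]
print(enc.decode(ids))  # round-trips back to the original phrase
```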
| 2025-04-04T14:18:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrd0a9/chinese_response_bug_in_tokenizer_suggests/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrd0a9
| false | null |
t3_1jrd0a9
|
/r/LocalLLaMA/comments/1jrd0a9/chinese_response_bug_in_tokenizer_suggests/
| false | false | 319 | null |
|
LM Studio: Can a Single Loaded Model Handle Parallel Concurrent Requests?
| 1 |
[removed]
| 2025-04-04T14:20:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrd26f/lm_studio_can_a_single_loaded_model_handle/
|
Informal_Librarian
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrd26f
| false | null |
t3_1jrd26f
|
/r/LocalLLaMA/comments/1jrd26f/lm_studio_can_a_single_loaded_model_handle/
| false | false |
self
| 1 | null |
MCP Server to let agents control your browser
| 7 |
we were playing around with MCPs over the weekend and thought it would be cool to build an MCP that lets Claude / Cursor / Windsurf control your browser: https://github.com/Skyvern-AI/skyvern/tree/main/integrations/mcp
Just for context, we’re building Skyvern, an open source AI Agent that can control and interact with browsers using prompts, similar to OpenAI’s Operator.
The MCP Server can:
- This allows Claude to navigate to docs websites / stack overflow and look up information like the top posts on hackernews
- https://github.com/Skyvern-AI/skyvern/tree/main/integrations/mcp#skyvern-allows-claude-to-look-up-the-top-hackernews-posts-today
- This allows Cursor to apply for jobs / fill out contact forms / login + download files / etc
- https://github.com/Skyvern-AI/skyvern/tree/main/integrations/mcp#cursor-looking-up-the-top-programming-jobs-in-your-area
- Connect Windsurf to take over your Chrome while running Skyvern in “local” mode
- https://github.com/Skyvern-AI/skyvern/tree/main/integrations/mcp#ask-windsurf-to-do-a-form-5500-search-and-download-some-files
We built this mostly for fun, but can see this being integrated into AI agents to give them custom access to browsers and execute complex tasks like booking appointments, downloading your electricity statements, looking up freight shipment information, etc
| 2025-04-04T14:51:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrds1v/mcp_server_to_let_agents_control_your_browser/
|
do_all_the_awesome
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrds1v
| false | null |
t3_1jrds1v
|
/r/LocalLLaMA/comments/1jrds1v/mcp_server_to_let_agents_control_your_browser/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'WvslYBu87RzWMyquGOMv5gPmtU74-G6HVeAcfi9hC_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=108&crop=smart&auto=webp&s=ddde7c5bd1ca7a6461643ccc7dcf4f2717416ad8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=216&crop=smart&auto=webp&s=bb829b80c978002715b1b696251113f8b01f02dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=320&crop=smart&auto=webp&s=c5f2a307a468603706c35d5b5ae4166836fc5f5a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=640&crop=smart&auto=webp&s=d8cda024a5f31205269883fd8b78fb432a987abf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=960&crop=smart&auto=webp&s=b6703b687b55361888c36a0234b0859fa5b170d6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?width=1080&crop=smart&auto=webp&s=04169666dedbbcd98e5d4a9a1971688b7478a7ef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UDi_psRhf3X-EeYhuDPvBGgGPKlpqs9_pSd0bdDild4.jpg?auto=webp&s=5f099dce862b0691652c064c9f0b42bf5f22e115', 'width': 1200}, 'variants': {}}]}
|
Launching Arrakis: Open-source, self-hostable sandboxing service for AI Agents
| 1 |
[removed]
| 2025-04-04T14:58:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrdxhl/launching_arrakis_opensource_selfhostable/
|
abshkbh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrdxhl
| false | null |
t3_1jrdxhl
|
/r/LocalLLaMA/comments/1jrdxhl/launching_arrakis_opensource_selfhostable/
| false | false |
self
| 1 | null |
New paper from DeepSeek w/ model coming soon: Inference-Time Scaling for Generalist Reward Modeling
| 412 |
Quote from the abstract:
>A key challenge of reinforcement learning (RL) is to obtain accurate reward signals for LLMs in various domains beyond verifiable questions or artificial rules. In this work, we investigate how to improve reward modeling (RM) with more inference compute for general queries, i.e. the inference-time scalability of generalist RM, and further, how to improve the effectiveness of performance-compute scaling with proper learning methods. [...] Empirically, we show that SPCT significantly improves the quality and scalability of GRMs, outperforming existing methods and models in various RM benchmarks without severe biases, and could achieve better performance compared to training-time scaling. DeepSeek-GRM still meets challenges in some tasks, which we believe can be addressed by future efforts in generalist reward systems. **The models will be released and open-sourced.**
Summary from Claude:
>*Can you provide a two paragraph summary of this paper for an audience of people who are enthusiastic about running LLMs locally?*
This paper introduces DeepSeek-GRM, a novel approach to reward modeling that allows for effective "inference-time scaling" - getting better results by running multiple evaluations in parallel rather than requiring larger models. The researchers developed a method called Self-Principled Critique Tuning (SPCT) which trains reward models to generate tailored principles for each evaluation task, then produce detailed critiques based on those principles. Their experiments show that DeepSeek-GRM-27B with parallel sampling can match or exceed the performance of much larger reward models (up to 671B parameters), demonstrating that compute can be more effectively used at inference time rather than training time.
>For enthusiasts running LLMs locally, this research offers a promising path to higher-quality evaluation without needing massive models. By using a moderately-sized reward model (27B parameters) and running it multiple times with different seeds, then combining the results through voting or their meta-RM approach, you can achieve evaluation quality comparable to much larger models. The authors also show that this generative reward modeling approach avoids the domain biases of scalar reward models, making it more versatile for different types of tasks. The models will be open-sourced, potentially giving local LLM users access to high-quality evaluation tools.
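To make the sample-and-vote idea concrete, a toy sketch; `score_once` is a random stand-in for a real reward-model call, since the DeepSeek-GRM weights aren't out yet:

```python
# Toy sketch of inference-time scaling for a reward model: sample k
# independent scores and aggregate by majority vote.
import random
from collections import Counter

def score_once(prompt: str, response: str, seed: int) -> int:
    """Placeholder for one reward-model rollout returning a 1-10 score."""
    rng = random.Random(hash((prompt, response, seed)))
    return rng.randint(1, 10)

def scaled_score(prompt: str, response: str, k: int = 8) -> int:
    votes = [score_once(prompt, response, seed) for seed in range(k)]
    counts = Counter(votes)
    # Majority vote; ties broken in favor of the higher score.
    best_score, _ = max(counts.items(), key=lambda kv: (kv[1], kv[0]))
    return best_score

print(scaled_score("Explain RLHF briefly.", "RLHF is ...", k=8))
```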
| 2025-04-04T15:04:46 |
https://arxiv.org/abs/2504.02495
|
samfundev
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jre3kp
| false | null |
t3_1jre3kp
|
/r/LocalLLaMA/comments/1jre3kp/new_paper_from_deepseek_w_model_coming_soon/
| false | false |
default
| 412 | null |
Low- or solar-powered setup for background LLM processing?
| 1 |
[removed]
| 2025-04-04T15:23:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrejtb/low_or_solarpowered_setup_for_background_llm/
|
yelling-at-clouds-40
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrejtb
| false | null |
t3_1jrejtb
|
/r/LocalLLaMA/comments/1jrejtb/low_or_solarpowered_setup_for_background_llm/
| false | false |
self
| 1 | null |
Free, unlimited Llama API for websites
| 0 | 2025-04-04T15:41:29 |
https://developer.puter.com/tutorials/free-unlimited-llama-api/
|
mitousa
|
developer.puter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrezg5
| false | null |
t3_1jrezg5
|
/r/LocalLLaMA/comments/1jrezg5/free_unlimited_llama_api_for_websites/
| false | false | 0 |
{'enabled': False, 'images': [{'id': '9S6B0QEoxX3cKtlgR2f0PGahbw-_Fsnev241egQRlo4', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=108&crop=smart&auto=webp&s=1c814be0dec856e055ef7177b9c2e378161a05d8', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=216&crop=smart&auto=webp&s=3b6a2ebf351ba1f43d10cd1307e208f15932f71d', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=320&crop=smart&auto=webp&s=b4e134a8b041e590a0159ba6d803630b826a744f', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=640&crop=smart&auto=webp&s=e0e2865026bddca322a0437cc09910ebcc775445', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=960&crop=smart&auto=webp&s=aff65b971977dbf140d62b4b5e69149ffc85dd7b', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=1080&crop=smart&auto=webp&s=7f08e49da71cc4ab5b900fa6550637943d74ca75', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?auto=webp&s=b2daddd9d86565da9c7fe9e96485759e0c9dd1b9', 'width': 1280}, 'variants': {}}]}
|
||
Is GPT-4.5 using diffusion? I use GPT-4.5 to write prompts for my local LLM; this happened in a second message after I prompted it to refine its original output.
| 0 | 2025-04-04T16:01:56 |
https://v.redd.it/37w0cxk6cuse1
|
WhereIsYourMind
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrfh0p
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/37w0cxk6cuse1/DASHPlaylist.mpd?a=1746374538%2CMTkyOGEwNjE3ZmY2OTMyMTljZDFhZGNkOWQwOTE3OTBiODE3MTgxMWZiMjIxM2EzYjVlYmM0OTM1ODk5MTBjMQ%3D%3D&v=1&f=sd', 'duration': 3, 'fallback_url': 'https://v.redd.it/37w0cxk6cuse1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 430, 'hls_url': 'https://v.redd.it/37w0cxk6cuse1/HLSPlaylist.m3u8?a=1746374538%2CZWY1NWI1NWU3ODI1Y2E1ZDdhYzMwZmMwMGEyMDVkMWFiZDBjMTZlMDRkNWQwM2E4MWM5NzcxOTJjMjljOTAxNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/37w0cxk6cuse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
|
t3_1jrfh0p
|
/r/LocalLLaMA/comments/1jrfh0p/is_gpt45_using_diffusion_i_use_gpt45_to_write/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=108&crop=smart&format=pjpg&auto=webp&s=0589c44b59694d3ce0f5ecdf339522552c5f734c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=216&crop=smart&format=pjpg&auto=webp&s=7710275b758dacb23dede9354891e8b4667d66fe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=320&crop=smart&format=pjpg&auto=webp&s=325b767338db948b9712c73c173d988805c665fd', 'width': 320}, {'height': 321, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=640&crop=smart&format=pjpg&auto=webp&s=49445f4f137916f50239e0fb5344d86d94b2281c', 'width': 640}, {'height': 482, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=960&crop=smart&format=pjpg&auto=webp&s=dc8ecb444389368c15a33a4f3722c6824d2f7476', 'width': 960}, {'height': 542, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?width=1080&crop=smart&format=pjpg&auto=webp&s=73f488933cd9d4998cb6ef97ad4027a638f55d8e', 'width': 1080}], 'source': {'height': 642, 'url': 'https://external-preview.redd.it/Z3VmM2NwZ3BjdXNlMUWVQ_xma7Zl7glHYkB9KyucrGhQluHnXaUbDdSRLIms.png?format=pjpg&auto=webp&s=f47e78a5eb2c0d495cd6533c8873d9a1b7c21dac', 'width': 1278}, 'variants': {}}]}
|
||
Best cpu setup/minipc for llm inference (12b/32b model)?
| 3 |
I'm looking at options to buy a mini PC. I currently have a Raspberry Pi 4B and would like to be able to run a 12B model (ideally 32B, but realistically I don't have the money for that) at decent speed (~10 tps). Is this realistic at the moment in the world of CPUs?
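As a rough sanity check: CPU decode speed is mostly memory-bandwidth bound, so tokens/s is roughly bandwidth divided by the bytes read per token. A quick back-of-envelope in Python, where the bandwidth and quantization figures are illustrative assumptions, not benchmarks:

```python
# Decode on CPU is mostly memory-bandwidth bound:
# tokens/s ~= memory bandwidth / bytes read per token (~ model size).
def est_tps(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    model_gb = params_b * bytes_per_param  # GB touched per generated token
    return bandwidth_gb_s / model_gb

# Assumed: dual-channel DDR5 at ~80 GB/s, Q4 quant at ~0.56 bytes/param.
print(f"12B: {est_tps(80, 12, 0.56):.1f} tok/s")  # ~11.9
print(f"32B: {est_tps(80, 32, 0.56):.1f} tok/s")  # ~4.5
```

So ~10 tps for a 12B at Q4 looks plausible on a fast dual-channel DDR5 mini PC, while a dense 32B would land well below that target on the same memory.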
| 2025-04-04T16:10:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jrfolq/best_cpu_setupminipc_for_llm_inference_12b32b/
|
Zyguard7777777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jrfolq
| false | null |
t3_1jrfolq
|
/r/LocalLLaMA/comments/1jrfolq/best_cpu_setupminipc_for_llm_inference_12b32b/
| false | false |
self
| 3 | null |