title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Calorie Tracking with Llama3.2 Vision and Ollama
| 1 |
[removed]
| 2025-04-21T22:43:49 |
https://v.redd.it/3a83sxhrn9we1
|
oridnary_artist
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4qxyu
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3a83sxhrn9we1/DASHPlaylist.mpd?a=1747867446%2CNzFhY2VmYWE0ZWJkMWI0OTc0YTUzNjI0ZTIxNzBjYmQ5ZjJiOTA2NGZiMWEzZjA4Y2FmMDgyNzdjNjhjYWRhYQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/3a83sxhrn9we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3a83sxhrn9we1/HLSPlaylist.m3u8?a=1747867446%2CYzM5ODg1N2VkYTY1NDkxNjJiOWUxMDIwMTYyOGE5N2JjY2NkMjBhMGJkY2ViMzJiZGVkMGQ3ZDQwZTBjMzJhZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3a83sxhrn9we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k4qxyu
|
/r/LocalLLaMA/comments/1k4qxyu/calorie_tracking_with_llama32_vision_and_ollama/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=89884716fd1e6cd9d66ef1271a7e4ca81b2717b4', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=35448411d354bf3fba93a47c7e7cb7956636d0a7', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=86e39e13d1f171f835d31bd44771127b0800c02f', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=df2e7c8cc1bf7b4df46ec267acad53251fa622fb', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=ecb99ea650b469e9c8dfb6d71594528060e902c6', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=01b349a0b378b7ff5fedded9f1af46e28849a7fb', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/ZXR1czF6aHJuOXdlMSO2HtOD7liCSqXfdiO2T6IRciWpKQkkiv6DT7r9sdLZ.png?format=pjpg&auto=webp&s=b38bdaa9d3f66c99f579ca039822afe35c4eeada', 'width': 1692}, 'variants': {}}]}
|
|
Calorie Tracking with Llama3.2 Vision and Ollama
| 1 |
[removed]
| 2025-04-21T22:46:46 |
https://v.redd.it/9lk1hhxdo9we1
|
Maleficent-Penalty50
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4r0c7
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9lk1hhxdo9we1/DASHPlaylist.mpd?a=1747867626%2CYTU4ZDRkZTJjMWYyYjEwNjBiMTZiYzIzNWJlZjQxY2JjNzIwYmQyYWI1MmMyNzg1NTE1MDVhNGU1ZjcyZTg1Ng%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/9lk1hhxdo9we1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/9lk1hhxdo9we1/HLSPlaylist.m3u8?a=1747867626%2CMjk4M2Y2OWMwYzU4OTc0N2RkYmNiMjA0YTI5ZGQyNjdiODVkMzM1NGUyNzljNjdkNWVlNTFkYjc3M2JjOTkxNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9lk1hhxdo9we1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
|
t3_1k4r0c7
|
/r/LocalLLaMA/comments/1k4r0c7/calorie_tracking_with_llama32_vision_and_ollama/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=108&crop=smart&format=pjpg&auto=webp&s=d3fa16c3408f6360f736ca24d6cce1d1148d48ac', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=216&crop=smart&format=pjpg&auto=webp&s=373854a987b916fe4586133fa1ca6f939ba96781', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=320&crop=smart&format=pjpg&auto=webp&s=5aa08cb664d4b5d64bf3d3384de77ea0e7516746', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=640&crop=smart&format=pjpg&auto=webp&s=cbc33efb621f10149f4f606b8ea802e9eed3f21e', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=960&crop=smart&format=pjpg&auto=webp&s=75a138b0a63eee7809bab8efd13d63a780f1c5bc', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6cfd37d5009f5c361ff038c244cee5f1cf963e35', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/cjFjcnVwd2RvOXdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?format=pjpg&auto=webp&s=ec596814b25cb0414721b074180e9dedddce1ca7', 'width': 1692}, 'variants': {}}]}
|
|
what are best open source chat models under 32b parameters now ?
| 1 |
[removed]
| 2025-04-21T22:48:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4r1qv/what_are_best_open_source_chat_models_under_32b/
|
Maleficent-Penalty50
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4r1qv
| false | null |
t3_1k4r1qv
|
/r/LocalLLaMA/comments/1k4r1qv/what_are_best_open_source_chat_models_under_32b/
| false | false |
self
| 1 | null |
Gemini 2.5 - The BEST writing assistant. PERIOD.
| 7 |
Let's get to the point: **Google Gemini 2.5 is THE BEST writing assistant. Period.**
I've tested everything people have recommended (mostly). I've tried Claude. DeepSeek R1. GPT-4o. Qwen 2.5. Qwen 2.5 VL. QWQ. Mistral variants. Cydonia variants. Gemma variants. Darkest Muse. Ifable. And more.
**My use case:** I'm not interested in an LLM writing a script for me. I can do that myself just fine. I want it to work based on a specified template that I give it, and create a detailed treatment based on a set of notes. The template sets the exact format of how it should be done, and provides instructions on my own writing method and goals. I feed it the story notes. Based on my prompt template, I expect it to be able to write a fully functioning treatment.
I want **specifics**. Not abstract ideas - which most LLMs struggle with - but literal scenes. Show, don't tell.
**My expectations**: Intelligence. Creativity. Context. Relevance. Inventiveness. Nothing contrived. No slop. The notes should drive the drama. The treatment needs to maintain its own consistency. It needs to know what it's doing and why it's doing it. Like a writer.
Every single llm either flat-out failed the assignment, or turned out poor results. The caveat: The template is a bit wordy, and the output will naturally be wordy. I typically expect - at the minimum - 8K ouput, based on the requirements.
**Gemini 2.5 is the only LLM that completed the assignment 100% correctly, and did a really good job.**
It isn't perfect. There was one output that started spitting out races and cultures that were obviously from Star Wars. Clearly part of its training data. It was garbage. But that was a one-off.
Subsequent outputs were of varying quality, but generally decent. But the most important part: **all of them correctly completed the assignment.**
Gemini kept every scene building upon the previous ones. It directed it towards a natural conclusion. It built upon the elements within the story that **IT** created, and used those to fashion a unique outcome. It succeeded in maintaining the character arc and the character's growth. It was able to complete certain requirements within the story despite not having a lot of specific context provided from my notes. It raised the tension. And above all, it maintained the rigid structure without going off the rails into a random rabbit hole.
At one point, I got so into it that I just reclined, reading from my laptop. The narrative really pulled me in, and I was anticipating every subsequent scene. I'll admit, it was pretty good.
I would grade it a solid 85%. And that's the best any of these LLMs have produced, IMO.
Also, at this point I would say that Gemini holds a significant lead above the other closed source models. OpenAI wasn't even close and tried its best to just rush through the assignment, providing 99% useless drivel. Claude was extremely generic, and most of its ideas were like someone that only glanced at the assignment before turning in their work. There were tons of mistakes it made simply because it just "ignored" the notes.
Keep in mind, this is for writing, and that based on a specific, complex assignment. Not a general "write me a story about x" prompt, which I suspect is what most people are testing these models on. That's useless for most real writers. We need an LLM that can work based on very detailed and complex parameters, and I believe this is how these LLMs should be truly tested. Under those circumstances, I believe many of you guys will find the real world usage doesn't match the benchmarks.
As a side note, I've tested it out on coding, and it failed repeatedly on all of my tasks. People swear it's the god of coding, but that hasn't been my experience. Perhaps my use cases are too simple, perhaps I'm not prompting right, perhaps it works better for more advanced coders. I really don't know. But I digress.
**Open Source Results:** Sorry guys, but none of the open source apps turned in anything really useful. Some completed the assignment to a degree, but the outputs were often useless, and therefore not worth mentioning. It sucks, because I believe in open source and I'm a big Qwen fan. Maybe Qwen 3 will change things in this department. I hope so. I'll be testing it out when it drops.
If you have any additional suggestions for open source models that you believe can handle the task, let me know.
**Notable Mentions:** ***Gemma-2 Ifable*** "gets it", but it couldn't handle the long context and just completely fell apart very early. But Ifable is consistently my go-to for lower context assignments, sometimes partnered with darkest muse. But Ifable is my personal favorite for these sorts of assignments because it just understands what you're trying to do, pays attention to what you're saying, and - unlike other models - pulls out aspects of the story that are just below the surface and expands upon those ideas, enriching the concepts. Other open source models write well, but ifable is the only model I've used that has the presence of really working with a writer, someone who doesn't just spit out sentences/words, but gets the concepts and tries to build upon them and make them better.
My personal desire is for someone to develop an IFable 2, with a significantly larger context window and increased intelligence, because I think it has the potential to be the best open source writing assistant available.
| 2025-04-21T23:40:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4s70i/gemini_25_the_best_writing_assistant_period/
|
GrungeWerX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4s70i
| false | null |
t3_1k4s70i
|
/r/LocalLLaMA/comments/1k4s70i/gemini_25_the_best_writing_assistant_period/
| false | false |
self
| 7 | null |
Seeking Advice about maintaining RAG + cost
| 0 |
Hey,
I'm a high school junior, and I'm trying to make a document editor that helps you write with AI similar to how Cursor allows you to do the same with coding. Should I maintain a vector db or should I just feed the whole document to the AI? I have a feeling the former is what I should do, but I'm not sure how to implement this. How do I make sure the database is always updated when the user chats with the AI for edits? Also, wouldn't it be incredibly costly to constantly be updating it?
I'm really trying to branch out and learn more about how to make useful tools with AI models, and I want to go deeper than just using an API. Any help would seriously be **greatly** appreciated. Thanks!
| 2025-04-22T00:06:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4sqby/seeking_advice_about_maintaining_rag_cost/
|
IshanRamrakhiani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4sqby
| false | null |
t3_1k4sqby
|
/r/LocalLLaMA/comments/1k4sqby/seeking_advice_about_maintaining_rag_cost/
| false | false |
self
| 0 | null |
So, is it reasonable to expect the next generation of local oriented models to be QAT out of the oven?
| 46 |
With Gemma3 news and posts all around… would next Gen of model’s, Either Dense or MoE, go from 32b to 128b, “QAT’ed” since training, aiming to be deployed in common VRAM sizes of 8-16-24/32 in the end anyway?
Is QAT less resource intense during training, or is the same?
Just elaborating here…
| 2025-04-22T00:16:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4sxh7/so_is_it_reasonable_to_expect_the_next_generation/
|
JLeonsarmiento
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4sxh7
| false | null |
t3_1k4sxh7
|
/r/LocalLLaMA/comments/1k4sxh7/so_is_it_reasonable_to_expect_the_next_generation/
| false | false |
self
| 46 | null |
GLM4-32B can't reliably adhere to Aider prompts, but it's supposedly better than Deepseek R1..?
| 1 |
[removed]
| 2025-04-22T00:25:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4t4po/glm432b_cant_reliably_adhere_to_aider_prompts_but/
|
ForsookComparison
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4t4po
| false | null |
t3_1k4t4po
|
/r/LocalLLaMA/comments/1k4t4po/glm432b_cant_reliably_adhere_to_aider_prompts_but/
| false | false |
self
| 1 | null |
🎯 Built a Tool to Help AI/ML Product Managers Stay Organized — Would Love Feedback!
| 1 |
[removed]
| 2025-04-22T00:26:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4t4vv/built_a_tool_to_help_aiml_product_managers_stay/
|
PearadoxApp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4t4vv
| false | null |
t3_1k4t4vv
|
/r/LocalLLaMA/comments/1k4t4vv/built_a_tool_to_help_aiml_product_managers_stay/
| false | false |
self
| 1 | null |
Polaris - Free GPUs/CPUs for the community
| 1 |
[removed]
| 2025-04-22T00:38:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4tdtr/polaris_free_gpuscpus_for_the_community/
|
hopepatrol
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4tdtr
| false | null |
t3_1k4tdtr
|
/r/LocalLLaMA/comments/1k4tdtr/polaris_free_gpuscpus_for_the_community/
| false | false |
self
| 1 | null |
Chrome extension for summary and chat about websites, plus a question if someone can help
| 5 |
You can load the CRX from here: https://github.com/dylandhall/llm-plugin/releases
Readme here: https://github.com/dylandhall/llm-plugin
it's as configurable as I could make it, you can customise the URL, add an API key, and add/edit the prompts as much as you want.
If no text is selected it'll extract the current page, or it'll use whatever you've selected.
I made it so it keeps the conversation until you clear it, and you can keep asking follow-up questions as much as you like.
I'd like to make it a sidebar-compatible plugin which can source info from many tabs or selections and then provide insights based on the information together. Basically a research assistant. This isn't it but it's a useful first step.
I do have a question, currently I was getting odd results if I left the first system prompt in and tried to continue chatting (it would sort of re-explain it to me) - can you put an updated system prompt in, mid-conversation, or is it beter to swap the initial prompt in these cases?
| 2025-04-22T00:50:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4tm9i/chrome_extension_for_summary_and_chat_about/
|
MrHall
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4tm9i
| false | null |
t3_1k4tm9i
|
/r/LocalLLaMA/comments/1k4tm9i/chrome_extension_for_summary_and_chat_about/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'HhjkgBsLr6kYJphMgK7CqlhbiYlWKqSTE6DHrmxCA0M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=108&crop=smart&auto=webp&s=c96dc0e1a22778f81d4f0eaa054d1b59b3fcd2b6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=216&crop=smart&auto=webp&s=e5b8a9f87b9964b44f56b2bd3c7e78c3887a670d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=320&crop=smart&auto=webp&s=ea334d87715721daa0db19085a9014fdbb6dcd95', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=640&crop=smart&auto=webp&s=0bb8c17db0e26c47f0171837111f7f32c1bfa5a9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=960&crop=smart&auto=webp&s=1ef579916b836ea23a9b3b5878f4a71f7bc61541', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?width=1080&crop=smart&auto=webp&s=a7264c8acefeeeaf34e3cdd888b1a3b331435965', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_UlHy2P1dN98XGYz__iawnvjs3EiWKTrTKcZkC5YB8o.jpg?auto=webp&s=19762daf620402985bf7a520a11c17ad7cedc9b9', 'width': 1200}, 'variants': {}}]}
|
Setup for M1 Mac Mini
| 1 |
[removed]
| 2025-04-22T01:04:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4twpe/setup_for_m1_mac_mini/
|
lopiontheop
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4twpe
| false | null |
t3_1k4twpe
|
/r/LocalLLaMA/comments/1k4twpe/setup_for_m1_mac_mini/
| false | false |
self
| 1 | null |
Using a Thunderbolt eGPU Enclosure to Increase VRAM Availability on my Desktop - My Experience
| 19 |
Hey everyone,
This was a fun experiment and a pretty niche use-case, but I basically had everything sitting around anyway.
My desktop is running an RTX 5080, 32GB of RAM, and a 14700k. It was never built to be an LLM machine, but I figured I'd start experimenting with some smaller models that fit within the VRAM.
I also had an old Razer Core X eGPU enclosure sitting around - and put my 3070 in it.
My current PSU wouldn't have been able to handle both cards plugged directly into the MOBO, and I wasn't about to buy a new PSU just to try this out.
I already had a Thunderbolt 4 (GC Maple Ridge) card in my desktop, so I just needed to hook them all up.
Well I was surprised to see how easy it was for Ollama to just start utilizing all of the GPUs. I changed the OLLAMA_VISIBLE_DEVICES environment variable to "0,1" and OLLAMA_SCHED_SPREAD to "1", and that was about it.
I can go in-depth into findings, but here's generally what I've seen:
1. Models that previously fit in VRAM ran 30-40% slower. That's pretty expected, the bottleneck of TB4 shows a 141GB/s throughput for the 3070, which is much lower than its 481GB/s BUS speed that it can hypothetically hit. So I was bottlenecked immediately. However I'm okay with that because it allows to me to significantly increase the context size for models I was running before, at rates I'm still perfectly happy with (30> tk/s).
2. Models that fit within 24GB of VRAM ran 5-6x better overall. Also expected - even with the TB4 bottleneck, being able to run the entire model in-memory was a massive improvement. As an example, qwq 32b Q4 runs at 13.1tk/s on average with both cards, but gets crushed down to 2.5tk/s on just the 5080.
If I had a 1250W PSU I would love to try hooking it up the 3070 to a motherboard to get a much better idea the TB4 bottleneck. A hypothetical Oculink-supported enclosure + interface would also double my speeds, but that's way more effort to try and lock down.
This makes me curious enough to keep an eye out for 16gb 4060tis, as it would give me 32GB of usable VRAM, which opens up options for much stronger models than the 8b/12b ones I've been running before.
tl;dr - Using an eGPU enclosure with another Nvidia card works on a desktop - assuming you have a thunderbolt connector installed. This makes models that fit in the pooled VRAM space run significantly better than offloading to CPU/RAM, but by default will hinder performance of models that fit in a single card due to TB4 bottlenecks.
| 2025-04-22T01:05:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4txxw/using_a_thunderbolt_egpu_enclosure_to_increase/
|
Anarchaotic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4txxw
| false | null |
t3_1k4txxw
|
/r/LocalLLaMA/comments/1k4txxw/using_a_thunderbolt_egpu_enclosure_to_increase/
| false | false |
self
| 19 | null |
Another Apache LLM-based TTS model in Dia-1.6B, furthering the paradigm
| 1 |
[removed]
| 2025-04-22T01:36:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ujsy/another_apache_llmbased_tts_model_in_dia16b/
|
rzvzn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ujsy
| false | null |
t3_1k4ujsy
|
/r/LocalLLaMA/comments/1k4ujsy/another_apache_llmbased_tts_model_in_dia16b/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'yuM_CJeGOMgftl-KrXMli0qT6ZFtG54hS03QijdRE1w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=108&crop=smart&auto=webp&s=d180922af78915b163d34abfcbfcf125bba856ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=216&crop=smart&auto=webp&s=7250a8beaee8b6474e89f65619591dca3b52dca3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=320&crop=smart&auto=webp&s=11c7f068a96712d1009f7abd2653611a3fb807ca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=640&crop=smart&auto=webp&s=103b0db8861cf69fc88190baa68484f42ae496a9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=960&crop=smart&auto=webp&s=829ae7ce7d5189fc86d44f3ea8f05258bf4fd20d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?width=1080&crop=smart&auto=webp&s=32d04863f6b665a5133db3c7e008910082851cd5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qxVwMbYay8jOzH_FoGF9tcVji6Jqt0qoXZKrUTdRpi0.jpg?auto=webp&s=754e395eb2a2abc78e75e91782698a031a88b01e', 'width': 1200}, 'variants': {}}]}
|
Another Apache TTS model in Dia 1.6B futhers the LLM-based TTS paradigm
| 1 |
[removed]
| 2025-04-22T01:39:26 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ulml
| false | null |
t3_1k4ulml
|
/r/LocalLLaMA/comments/1k4ulml/another_apache_tts_model_in_dia_16b_futhers_the/
| false | false |
default
| 1 | null |
||
Dia 1.6B pushes the LLM-based TTS paradigm forward
| 1 |
[removed]
| 2025-04-22T01:41:17 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4umzk
| false | null |
t3_1k4umzk
|
/r/LocalLLaMA/comments/1k4umzk/dia_16b_pushes_the_llmbased_tts_paradigm_forward/
| false | false |
default
| 1 | null |
||
what is QAT? is it better thn IQ4_XS?
| 1 |
[removed]
| 2025-04-22T02:03:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4v2r7/what_is_qat_is_it_better_thn_iq4_xs/
|
Remarkable-Pea645
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4v2r7
| false | null |
t3_1k4v2r7
|
/r/LocalLLaMA/comments/1k4v2r7/what_is_qat_is_it_better_thn_iq4_xs/
| false | false |
self
| 1 | null |
Dia 1.6B is one of the funnest models I've ever come across.
| 560 | 2025-04-22T02:07:19 |
https://v.redd.it/w2jq98c7oawe1
|
swagonflyyyy
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4v5fm
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/w2jq98c7oawe1/DASHPlaylist.mpd?a=1747879660%2CNmI3YWZkMWM0Yzg1MTAwOTllMTI3Yjg5ZjZlMDlhNGMzNzJmNTA0OGJkNzU1NTQzZWYwMmI5OGQ0NTJkMDZjMQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/w2jq98c7oawe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/w2jq98c7oawe1/HLSPlaylist.m3u8?a=1747879660%2CMGM4MDVhN2YxN2NjOWVkNjkyZDc0NTM0MDc0OTgwNjhlYmI2ZTcxODExOTgwYzEyYmNlM2U5ZmI1MGZlZTAyYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w2jq98c7oawe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1k4v5fm
|
/r/LocalLLaMA/comments/1k4v5fm/dia_16b_is_one_of_the_funnest_models_ive_ever/
| false | false |
nsfw
| 560 |
{'enabled': False, 'images': [{'id': 'OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=108&crop=smart&format=pjpg&auto=webp&s=f058d03f953bbea7c9342fc8f11bf7097c339a19', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=216&crop=smart&format=pjpg&auto=webp&s=7d08474d5a3360124c07a86513fad4111e673edb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=320&crop=smart&format=pjpg&auto=webp&s=c1b5402fe2320e8d998038726f6092354865cc66', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=640&crop=smart&format=pjpg&auto=webp&s=1c480505a01e666e39c121335b1cbb70bddc0b34', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=960&crop=smart&format=pjpg&auto=webp&s=25b753f8b95bd02c486ae249af91256dc4824f76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=1080&crop=smart&format=pjpg&auto=webp&s=15db8c16788c271747023b431bdeedc664398fdc', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?format=pjpg&auto=webp&s=637b8e2ac6aad50c6c857fdfdaae42e8146f2081', 'width': 1280}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=83d3836c1405cf19f4320cb6849a7bcd605380aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=03715e4ed45d78c277d3bb5cc1bf3a09e3600950', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a39bc92478e7a64867a019775716f3204ce025d8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=707cbe7a150c366cc9c6cd0a10743073cf42b314', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=df351fe85dcea044d09356964be14e75fd1a002d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=06b7e2781b1ffa23f99f038cd43eb0e0e2a44b1b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?blur=40&format=pjpg&auto=webp&s=5e10c8cf41f9ab0a576674d72093eaf283ba27ac', 'width': 1280}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=83d3836c1405cf19f4320cb6849a7bcd605380aa', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=03715e4ed45d78c277d3bb5cc1bf3a09e3600950', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a39bc92478e7a64867a019775716f3204ce025d8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=707cbe7a150c366cc9c6cd0a10743073cf42b314', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=df351fe85dcea044d09356964be14e75fd1a002d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=06b7e2781b1ffa23f99f038cd43eb0e0e2a44b1b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OTA5MDhlYzdvYXdlMUK8nIbN_5N-LgxNlR9GYENz_zR3uFavS16v6Gh76MYw.png?blur=40&format=pjpg&auto=webp&s=5e10c8cf41f9ab0a576674d72093eaf283ba27ac', 'width': 1280}}}}]}
|
|
Best ollama model and editor or vscode extension to replace Cursor
| 0 |
Cursor Pro with the Claude 3.7 Sonnet and Gemini 2.5 Pro is good, but I feel it could be a lot better.
Tell me good alternatives, paid or free, local or remote. I have a 3090 and 4060 Ti (40gb in total), so running locally is an option
| 2025-04-22T02:22:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4vg25/best_ollama_model_and_editor_or_vscode_extension/
|
brauliobo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4vg25
| false | null |
t3_1k4vg25
|
/r/LocalLLaMA/comments/1k4vg25/best_ollama_model_and_editor_or_vscode_extension/
| false | false |
self
| 0 | null |
I uploaded GLM-4-32B-0414 & GLM-Z1-32B-0414 Q4_K_M to ollama
| 105 |
# This model requires [Ollama v0.6.6 or later](https://github.com/ollama/ollama/releases)
instruct: `ollama run JollyLlama/GLM-4-32B-0414-Q4_K_M`
reasoning: `ollama run JollyLlama/GLM-Z1-32B-0414-Q4_K_M`
[https://www.ollama.com/JollyLlama/GLM-4-32B-0414-Q4\_K\_M](https://www.ollama.com/JollyLlama/GLM-4-32B-0414-Q4_K_M)
[https://www.ollama.com/JollyLlama/GLM-Z1-32B-0414-Q4\_K\_M](https://www.ollama.com/JollyLlama/GLM-Z1-32B-0414-Q4_K_M)
Thanks to matteo for uploading the fixed gguf to HF
[https://huggingface.co/matteogeniaccio](https://huggingface.co/matteogeniaccio)
https://preview.redd.it/qno0sx5iyawe1.png?width=995&format=png&auto=webp&s=9f21e057c723a0351238aff1e49f1d365368954d
| 2025-04-22T03:05:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4w9p2/i_uploaded_glm432b0414_glmz132b0414_q4_k_m_to/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4w9p2
| false | null |
t3_1k4w9p2
|
/r/LocalLLaMA/comments/1k4w9p2/i_uploaded_glm432b0414_glmz132b0414_q4_k_m_to/
| false | false | 105 |
{'enabled': False, 'images': [{'id': 'R3f2X9-M1f6vyaK6ix_eTUnRiNHS-tOrvv6GFnmeqzU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=108&crop=smart&auto=webp&s=94cd883395552dbceddff9657fd6ea5fcb9de3d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=216&crop=smart&auto=webp&s=5cb55148e69bda4bdc8aa0c383ca5bdd06056b96', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=320&crop=smart&auto=webp&s=305a63691c25ef86987d27413ca929b666518d4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=640&crop=smart&auto=webp&s=46802610d70a0e31fc2090661cc81ad54474c358', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=960&crop=smart&auto=webp&s=9dbe4ab5a5b9080beac3b892f3ccd32fabc26df9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?width=1080&crop=smart&auto=webp&s=484b080e42fa607d0483ebd68fbd8bf135c300fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nNe4kZO07RNpumX0f4yeWEEM85RJxuepCtUYC_PxayA.jpg?auto=webp&s=485683f5eee9414e693c1d976d7026ee534d719b', 'width': 1200}, 'variants': {}}]}
|
|
Does anyone know of a repository of high quality sample voices with descriptions?
| 6 |
I'm looking for as processional sample voices (not celebrities) that come with descriptions, attributes or labels, similar too Elevenlabs. Ex::
Oracle X- An experienced British female voice narrator with a smooth, warm, engaging tone.
Attributes-
Professional Voice Clone
HQ
Labels-
Calm
Middle-Aged
Female
English (British)
Narrative & Story
| 2025-04-22T03:34:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4wt5n/does_anyone_know_of_a_repository_of_high_quality/
|
Erdeem
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4wt5n
| false | null |
t3_1k4wt5n
|
/r/LocalLLaMA/comments/1k4wt5n/does_anyone_know_of_a_repository_of_high_quality/
| false | false |
self
| 6 | null |
Getting the output right
| 1 |
I'm fighting output backticks and can't seem to get my code highlights and indentation and markdown right for Gemma 3 4B quantized 4 bit model. This feels like a problem that has been solved all over the place yet I am struggling. I'm using llama.cpp, flask and fastAPI, langgraph for workflow things, and a custom UI that I'm building that's driving me batshit. I'm trying to make a minimal chatbot to support a RAG service using Sqlite-vec (primary goal)
Help me get out of my yak-shaving, sidequest, BS hell please.
Any tips on making myself less insane are most welcome.
| 2025-04-22T04:01:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4xb12/getting_the_output_right/
|
LaszloTheGargoyle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4xb12
| false | null |
t3_1k4xb12
|
/r/LocalLLaMA/comments/1k4xb12/getting_the_output_right/
| false | false |
self
| 1 | null |
Question about fireworks.ai and groq cloud
| 1 |
[removed]
| 2025-04-22T04:15:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4xkb9/question_about_fireworksai_and_groq_cloud/
|
Boring_Advantage869
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4xkb9
| false | null |
t3_1k4xkb9
|
/r/LocalLLaMA/comments/1k4xkb9/question_about_fireworksai_and_groq_cloud/
| false | false |
self
| 1 | null |
Question about fireworks.ai and groq cloud
| 1 |
[removed]
| 2025-04-22T04:21:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4xo5h/question_about_fireworksai_and_groq_cloud/
|
Boring_Advantage869
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4xo5h
| false | null |
t3_1k4xo5h
|
/r/LocalLLaMA/comments/1k4xo5h/question_about_fireworksai_and_groq_cloud/
| false | false |
self
| 1 | null |
"Wait, no, no. Wait, no." Enough!
| 0 |
Enough with all those "wait", "but" ... it's so boring.
I would like to see some models can generate clean "thoughts". Meaningful thoughts even better and insightful thoughts definitely a killer.
| 2025-04-22T04:30:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4xtbu/wait_no_no_wait_no_enough/
|
foldl-li
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4xtbu
| false | null |
t3_1k4xtbu
|
/r/LocalLLaMA/comments/1k4xtbu/wait_no_no_wait_no_enough/
| false | false |
self
| 0 | null |
how to update the prompt to make the movements more hilarious/unrealistic?
| 1 | 2025-04-22T04:59:39 |
https://youtube.com/watch?v=FVb0n5KQwsk&si=tfp_kTm_emyP5Gay
|
AnimalHobbyStudio
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4yafe
| false |
{'oembed': {'author_name': 'Animal Hobby Studio', 'author_url': 'https://www.youtube.com/@AnimalHobbyStudio', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FVb0n5KQwsk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Epic Skateboarding Animals! 🐹🐻🐱🐶🐹🐼🐕 Shredding Like Pros!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FVb0n5KQwsk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Epic Skateboarding Animals! 🐹🐻🐱🐶🐹🐼🐕 Shredding Like Pros!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1k4yafe
|
/r/LocalLLaMA/comments/1k4yafe/how_to_update_the_prompt_to_make_the_movements/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'RQDkYJokMHLct8xSgSctMhPvhIpnumU2Smbw4Ha1JAM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EJOqSjhW6lXN8lktabPYR6AAY-GuPpakdNjJ49wb-ds.jpg?width=108&crop=smart&auto=webp&s=8c3a318499b5b470dfc2a484545884de921c84c3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EJOqSjhW6lXN8lktabPYR6AAY-GuPpakdNjJ49wb-ds.jpg?width=216&crop=smart&auto=webp&s=ea5a0575155e717b0a2a938982cadba2911e9cb2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EJOqSjhW6lXN8lktabPYR6AAY-GuPpakdNjJ49wb-ds.jpg?width=320&crop=smart&auto=webp&s=45473fea1616222745a83c02dda9294e10304c3a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EJOqSjhW6lXN8lktabPYR6AAY-GuPpakdNjJ49wb-ds.jpg?auto=webp&s=39eb5246a4eb903a3dd55030a8d1c62da3367eea', 'width': 480}, 'variants': {}}]}
|
||
Prompt IA avec sa méthode d'élaboration
| 1 |
[removed]
| 2025-04-22T05:07:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4yf9t/prompt_ia_avec_sa_méthode_délaboration/
|
OppositeYou3884
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4yf9t
| false | null |
t3_1k4yf9t
|
/r/LocalLLaMA/comments/1k4yf9t/prompt_ia_avec_sa_méthode_délaboration/
| false | false |
self
| 1 | null |
🚀 Et si la vraie clé pour connecter l’humain et l’IA résidait dans… une simple question ?
| 1 |
[removed]
| 2025-04-22T05:27:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4yqdh/et_si_la_vraie_clé_pour_connecter_lhumain_et_lia/
|
OppositeYou3884
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4yqdh
| false | null |
t3_1k4yqdh
|
/r/LocalLLaMA/comments/1k4yqdh/et_si_la_vraie_clé_pour_connecter_lhumain_et_lia/
| false | false |
self
| 1 | null |
Help, how do people deploy their custom models?
| 1 |
[removed]
| 2025-04-22T05:33:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ytni/help_how_do_people_deploy_their_custom_models/
|
ComprehensiveMeal311
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ytni
| false | null |
t3_1k4ytni
|
/r/LocalLLaMA/comments/1k4ytni/help_how_do_people_deploy_their_custom_models/
| false | false |
self
| 1 | null |
Why Do People Dislike Ollama?
| 1 |
[removed]
| 2025-04-22T05:33:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ytpx/why_do_people_dislike_ollama/
|
Unique_Mode_3362
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ytpx
| false | null |
t3_1k4ytpx
|
/r/LocalLLaMA/comments/1k4ytpx/why_do_people_dislike_ollama/
| false | false | 1 | null |
|
wth is gemma doing
| 1 |
[removed]
| 2025-04-22T05:35:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4yuhf/wth_is_gemma_doing/
|
OkCartoonist8094
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4yuhf
| false | null |
t3_1k4yuhf
|
/r/LocalLLaMA/comments/1k4yuhf/wth_is_gemma_doing/
| false | false | 1 | null |
|
Help, how do people deploy their custom models?
| 1 |
[removed]
| 2025-04-22T05:40:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4yxa4/help_how_do_people_deploy_their_custom_models/
|
Int_Angel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4yxa4
| false | null |
t3_1k4yxa4
|
/r/LocalLLaMA/comments/1k4yxa4/help_how_do_people_deploy_their_custom_models/
| false | false |
self
| 1 | null |
Is there a way to generate cpp code from PyTorch models?
| 1 |
[removed]
| 2025-04-22T05:57:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4z6ky/is_there_a_way_to_generate_cpp_code_from_pytorch/
|
Quiet-Chocolate6407
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4z6ky
| false | null |
t3_1k4z6ky
|
/r/LocalLLaMA/comments/1k4z6ky/is_there_a_way_to_generate_cpp_code_from_pytorch/
| false | false |
self
| 1 | null |
koboldcpp-rocm lags out the entire PC on Linux but not on Windows
| 0 |
Hey guys, I'm using a 6800 XT with ROCm/hipblas for LLM inference via koboldcpp-rocm. I'm running gemma 3 12b Q8 with 6k context and with all 49 layers offloaded to the GPU. This works flawlessly on Windows without any issues at all. When I ran the exact same configuration on Linux (Ubuntu 24), it's lagging out my entire PC.
By "lagging out", I mean that everything becomes completely unresponsive for 5 seconds on repeat, kinda like how it is when CPU/RAM is at 100% capacity. Keep in mind that this is before I start the chat so the GPU isn't being utilized, it's just the video memory that's allocated. I'm not sure why this is happening on Linux. I've tried disabling BLAS since it was mentioned in the github README but that didn't change anything.
Should I switch over to ollama or is there a fix/workaround for this? The inference speed is incredible when my PC unfreezes and lets the LLM run.
| 2025-04-22T06:11:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k4ze9z/koboldcpprocm_lags_out_the_entire_pc_on_linux_but/
|
logseventyseven
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4ze9z
| false | null |
t3_1k4ze9z
|
/r/LocalLLaMA/comments/1k4ze9z/koboldcpprocm_lags_out_the_entire_pc_on_linux_but/
| false | false |
self
| 0 | null |
An Easy-to-use Knowledge Editing Framework for LLMs.
| 20 | 2025-04-22T06:38:28 |
https://github.com/zjunlp/EasyEdit
|
ninjasaid13
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k4zs4i
| false | null |
t3_1k4zs4i
|
/r/LocalLLaMA/comments/1k4zs4i/an_easytouse_knowledge_editing_framework_for_llms/
| false | false | 20 |
{'enabled': False, 'images': [{'id': 'bHV93rIOqKnywSfKybYnp7MLTiov9BecVDjqUdV6xl8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=108&crop=smart&auto=webp&s=7af4d27dd6659a8813423347c617548db612ae88', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=216&crop=smart&auto=webp&s=79d3143a6946bf9cb0ff16619ab39992e54a408c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=320&crop=smart&auto=webp&s=a312940c0f6abad06c99321c4110c72f416e85ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=640&crop=smart&auto=webp&s=c3795dcfdd6bdfc7f874246e3aaf38faeb4539d8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=960&crop=smart&auto=webp&s=3b7e3dc17ca7d855d96b7127b085c349041de3c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?width=1080&crop=smart&auto=webp&s=7091dbe14e0841ea6ed9774411cc1bc0fd0f67df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z5X2wY3bbph0pn54BXd1X3klLMzXze22WcY9yNGnCV4.jpg?auto=webp&s=b6b57b7337be91e065562c944a4b220a1e0d5201', 'width': 1200}, 'variants': {}}]}
|
||
Looking for an open-source TTS model with strong emotional expressiveness (commercial use)
| 1 |
[removed]
| 2025-04-22T06:54:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k500nz/looking_for_an_opensource_tts_model_with_strong/
|
ConnectPea8944
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k500nz
| false | null |
t3_1k500nz
|
/r/LocalLLaMA/comments/1k500nz/looking_for_an_opensource_tts_model_with_strong/
| false | false |
self
| 1 | null |
Why do people dislike Ollama?
| 1 |
[removed]
| 2025-04-22T06:57:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k502ew/why_do_people_dislike_ollama/
|
Mother-Acadia581
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k502ew
| false | null |
t3_1k502ew
|
/r/LocalLLaMA/comments/1k502ew/why_do_people_dislike_ollama/
| false | false | 1 | null |
|
Can't get ollama to connect with the agents
| 1 |
[removed]
| 2025-04-22T07:44:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k50pdd/cant_get_ollama_to_connect_with_the_agents/
|
Optimal_Value6946
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k50pdd
| false | null |
t3_1k50pdd
|
/r/LocalLLaMA/comments/1k50pdd/cant_get_ollama_to_connect_with_the_agents/
| false | false |
self
| 1 | null |
Sleep-time Compute: Beyond Inference Scaling at Test-time
| 25 | 2025-04-22T07:54:24 |
https://arxiv.org/abs/2504.13171
|
newdoria88
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1k50u8i
| false | null |
t3_1k50u8i
|
/r/LocalLLaMA/comments/1k50u8i/sleeptime_compute_beyond_inference_scaling_at/
| false | false |
default
| 25 | null |
|
Veiled Rose 22B : Bigger, Smarter and Noicer
| 41 |
If youve tried my [Veiled Calla 12B](https://huggingface.co/soob3123/Veiled-Calla-12B) you know how it goes. but since it was a 12B model, there were some pretty obvious short comings.
Here is the Mistral Based 22B model, with better cognition and reasoning. Test it out and let me your feedback!
Model: [soob3123/Veiled-Rose-22B · Hugging Face ](https://huggingface.co/soob3123/Veiled-Rose-22B)
GGUF: [soob3123/Veiled-Rose-22B-gguf · Hugging Face](https://huggingface.co/soob3123/Veiled-Rose-22B-gguf)[](https://huggingface.co/soob3123/Veiled-Rose-22B)
| 2025-04-22T07:59:30 |
Reader3123
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k50wo0
| false | null |
t3_1k50wo0
|
/r/LocalLLaMA/comments/1k50wo0/veiled_rose_22b_bigger_smarter_and_noicer/
| false | false | 41 |
{'enabled': True, 'images': [{'id': 'XHhTLMNKgcDzZO7SDSbfcD8kN2LrzR4TuhdEABjWv9o', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/99f47p7cecwe1.png?width=108&crop=smart&auto=webp&s=27b42cc40402f734ded7014d5697aec291763362', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/99f47p7cecwe1.png?width=216&crop=smart&auto=webp&s=6366ca101b59827a437894228b9a267f00276149', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/99f47p7cecwe1.png?width=320&crop=smart&auto=webp&s=d000e1fd8515dea1b86173ff5f4552082ae75f47', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/99f47p7cecwe1.png?width=640&crop=smart&auto=webp&s=f01384216591c1b49b1cd83d95948d1d577c9571', 'width': 640}], 'source': {'height': 640, 'url': 'https://preview.redd.it/99f47p7cecwe1.png?auto=webp&s=bfd9ec1a95e4271c60d25a67a3f9db9dabaf4e60', 'width': 640}, 'variants': {}}]}
|
||
Whom are you supporting in this battleground?
| 0 | 2025-04-22T08:18:23 |
iamnotdeadnuts
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k515zw
| false | null |
t3_1k515zw
|
/r/LocalLLaMA/comments/1k515zw/whom_are_you_supporting_in_this_battleground/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'R6i4P4z7LEBqO7I1OehJ3CP0H2qwBMq0GOjb-PRWQU0', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=108&crop=smart&auto=webp&s=78452606ecf4754c4dbea676b5f4c2d8d1846d84', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=216&crop=smart&auto=webp&s=b0b1bff18b8d1db841d06c1a90bb5fd526c3a67c', 'width': 216}, {'height': 122, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=320&crop=smart&auto=webp&s=a11380d20d152473ac9035ec76e271fa706e5e71', 'width': 320}, {'height': 244, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=640&crop=smart&auto=webp&s=8cb2e3a64026270575ba84371d2aeddf74f59178', 'width': 640}, {'height': 366, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=960&crop=smart&auto=webp&s=c1db145f2e6dfcfd3ea10e8ec58b79f01c90bc74', 'width': 960}, {'height': 412, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?width=1080&crop=smart&auto=webp&s=7c252bdf5fdddab4ce78c806cbf5c2f7cb70e6d7', 'width': 1080}], 'source': {'height': 1454, 'url': 'https://preview.redd.it/v45qmz9cicwe1.png?auto=webp&s=99c1f932341f924c520be67b75998bb9237e510e', 'width': 3808}, 'variants': {}}]}
|
|||
GB300 Bandwidth
| 0 |
Hello,
I've been looking at the Dell Pro Max with GB300. It has 288GB of HBME3e memory +496GB LPDDR5X CPU memory.
HBME3e memory has a bandwidth of 1.2TB/s. I expected more bandwidth for Blackwell. Have I missed some detail?
| 2025-04-22T09:22:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k520sa/gb300_bandwidth/
|
TechnicalGeologist99
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k520sa
| false | null |
t3_1k520sa
|
/r/LocalLLaMA/comments/1k520sa/gb300_bandwidth/
| false | false |
self
| 0 | null |
Why is MythoMax13B still in high demand?
| 79 |
I recently noticed, that MythoMax13B is really high ranked on openrouter in the RPG section and has high demand. That makes no sense to me, as it is a still a Llama2 era model. Is that model so good or is it promoted in the openrouter chat rooms or on other platforms actively, but even if that is the reason it makes no sense. Why didn't they then use modern RP models and stick to that one, can someone who played with that model answer it? Is it just that good or brings still using a L2 other benefits I don't see at the moment? Thanks.
| 2025-04-22T09:54:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k52h4n/why_is_mythomax13b_still_in_high_demand/
|
Consistent_Winner596
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k52h4n
| false | null |
t3_1k52h4n
|
/r/LocalLLaMA/comments/1k52h4n/why_is_mythomax13b_still_in_high_demand/
| false | false |
self
| 79 | null |
Re Recommendations for tools/service or SLM to provide privacy filtering of prompts prior to providing them to LLMs such as ?
| 1 |
[removed]
| 2025-04-22T10:13:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k52qqc/re_recommendations_for_toolsservice_or_slm_to/
|
Vegetable-Score-3915
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k52qqc
| false | null |
t3_1k52qqc
|
/r/LocalLLaMA/comments/1k52qqc/re_recommendations_for_toolsservice_or_slm_to/
| false | false |
self
| 1 | null |
Gemma3 27b QAT: impossible to change context size ?
| 2 |
Hello,I’ve been trying to reduce NVRAM usage to fit the 27b model version into my 20Gb GPU memory. I’ve tried to generate a new model from the “new” Gemma3 QAT version with Ollama:
ollama show gemma3:27b --modelfile > 27b.Modelfile
I edit the `Modelfile` to change the context size:
FROM gemma3:27b
TEMPLATE """{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if or (eq .Role "user") (eq .Role "system") }}<start_of_turn>user
{{ .Content }}<end_of_turn>
{{ if $last }}<start_of_turn>model
{{ end }}
{{- else if eq .Role "assistant" }}<start_of_turn>model
{{ .Content }}{{ if not $last }}<end_of_turn>
{{ end }}
{{- end }}
{{- end }}"""
PARAMETER stop <end_of_turn>
PARAMETER temperature 1
PARAMETER top_k 64
PARAMETER top_p 0.95
PARAMETER num_ctx 32768
LICENSE """<...>"""
And create a new model:
ollama create gemma3:27b-32k -f 27b.Modelfile
Run it and show info:
ollama run gemma3:27b-32k
>>> /show info
Model
architecture gemma3
parameters 27.4B
context length 131072
embedding length 5376
quantization Q4_K_M
Capabilities
completion
vision
Parameters
temperature 1
top_k 64
top_p 0.95
num_ctx 32768
stop "<end_of_turn>"
`num_ctx` is OK, but no change for `context length` (note in the orignal version, there is no `num_ctx` parameter)
Memory usage (`ollama ps`):
NAME ID SIZE PROCESSOR UNTIL
gemma3:27b-32k 178c1f193522 27 GB 26%/74% CPU/GPU 4 minutes from now
With the original version:
NAME ID SIZE PROCESSOR UNTIL
gemma3:27b a418f5838eaf 24 GB 16%/84% CPU/GPU 4 minutes from now
Where’s the glitch ?
| 2025-04-22T10:29:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k52zd4/gemma3_27b_qat_impossible_to_change_context_size/
|
yeswearecoding
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k52zd4
| false | null |
t3_1k52zd4
|
/r/LocalLLaMA/comments/1k52zd4/gemma3_27b_qat_impossible_to_change_context_size/
| false | false |
self
| 2 | null |
Introduction and clarification of the Karcher-merge method!
| 1 |
[removed]
| 2025-04-22T11:21:37 |
https://www.reddit.com/gallery/1k53tjr
|
Mean_Language_3482
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k53tjr
| false | null |
t3_1k53tjr
|
/r/LocalLLaMA/comments/1k53tjr/introduction_and_clarification_of_the/
| false | false | 1 | null |
|
New Library for process PDFs
| 1 |
[deleted]
| 2025-04-22T11:42:23 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k546sn
| false | null |
t3_1k546sn
|
/r/LocalLLaMA/comments/1k546sn/new_library_for_process_pdfs/
| false | false |
default
| 1 | null |
||
THUDM/SWE-Dev-9B · Hugging Face
| 103 |
The creators of the GLM-4 models released a collection of coder models
* SWE-Dev-7B (Qwen-2.5-7B-Instruct): [https://huggingface.co/THUDM/SWE-Dev-7B/](https://huggingface.co/THUDM/SWE-Dev-7B/)
* SWE-Dev-9B (GLM-4-9B-Chat): [https://huggingface.co/THUDM/SWE-Dev-9B/](https://huggingface.co/THUDM/SWE-Dev-9B/)
* SWE-Dev-32B (Qwen-2.5-32B-Instruct): [https://huggingface.co/THUDM/SWE-Dev-32B/](https://huggingface.co/THUDM/SWE-Dev-32B/)
| 2025-04-22T11:42:23 |
https://huggingface.co/THUDM/SWE-Dev-9B
|
bobby-chan
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k546sq
| false | null |
t3_1k546sq
|
/r/LocalLLaMA/comments/1k546sq/thudmswedev9b_hugging_face/
| false | false | 103 |
{'enabled': False, 'images': [{'id': 'CbQJqmSvowa-Kofc5LJG_kDP3cm0ox9tNIHObTQZlzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=108&crop=smart&auto=webp&s=4ba2c67f7cdb1dc8c964fcd23a49f8389620a1bf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=216&crop=smart&auto=webp&s=4f7b6ae56e695e7997033ebbdf946a5ef216dff2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=320&crop=smart&auto=webp&s=46a9bd5bbad424ef506162f92201101bd58df749', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=640&crop=smart&auto=webp&s=33500f8d40e8a012a0d58c9cc5020fee1f8beadd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=960&crop=smart&auto=webp&s=8c78c6673d603a787468be6a72f785048863e677', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?width=1080&crop=smart&auto=webp&s=dd656fb0118c045efbf96d61f7b2d9875fcdee98', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/H-WqhcnMRaUejHpQLqeLIABkVOOs-PNNxG0QszsYGF4.jpg?auto=webp&s=504d0003186d14a8c640d3ddc4dc10d0f9bc9d61', 'width': 1200}, 'variants': {}}]}
|
|
New Lib to process PDFs
| 50 |
Hey everyone, I built a library over the holiday that converts PDF documents to Markdown. It segments by page, extracts relevant elements like titles, images, and tables, and even counts tokens per page. ([AlcheMark](https://github.com/matthsena/AlcheMark))
Some advantages compared to competitors (Docling):
* **Performance**: In my test with a 500-page file, this library parsed it in 45 seconds. Docling around 3 minutes.
* **References**: Docling convert the entire file into a single large Markdown block without page segmentation, making it harder for LLMs to reference which page the information came from. This library returns a vector of objects—one for each page.
* **Token estimation**: The library shows the token count for each page, allowing better cost estimation before sending a prompt.
For this project, I make a ensemble of several existing libraries with a different approach to data handling.
If you'd like to contribute or support the project, feel free to leave a star on GitHub:
[https://github.com/matthsena/AlcheMark](https://github.com/matthsena/AlcheMark)
| 2025-04-22T11:43:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k547al/new_lib_to_process_pdfs/
|
Electronic-Lab-7343
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k547al
| false | null |
t3_1k547al
|
/r/LocalLLaMA/comments/1k547al/new_lib_to_process_pdfs/
| false | false |
self
| 50 |
{'enabled': False, 'images': [{'id': 'Ktf3Kq6PORGD6jld7OhRqCz_9njA15DK25L3MQtCaVg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=108&crop=smart&auto=webp&s=4812e93d3c308c6e0577884006e6d504c7f2811b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=216&crop=smart&auto=webp&s=25c5a352ebe07c3ff9d3f812fb48018f516fddb0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=320&crop=smart&auto=webp&s=1aebebbe91f4a9b0ec31e9a62a3ae05ca184644f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=640&crop=smart&auto=webp&s=8c0e35561f7c483d208d5747a642c2e70010f061', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=960&crop=smart&auto=webp&s=475244e00033a212ea67e1b0da3ec9b2f737b67f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?width=1080&crop=smart&auto=webp&s=c2e92c51d45cb728fa8b77f1441308dfa2f9b302', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l4gtItKLpg3QmDwPvK1g_JJ1uaBBRV0ggPyhxIMWPUk.jpg?auto=webp&s=c66c9c1e47b505b06dd9bb67d58edbdda74f73f2', 'width': 1200}, 'variants': {}}]}
|
Let us build DeepSeek from Scratch | No fluff | 13 lectures uploaded
| 229 |
[A few notes I made as part of this playlist](https://i.redd.it/5w0lu5m2ldwe1.gif)
“Can I build the DeepSeek architecture and model myself, from scratch?”
You can. You need to know the nuts and bolts.
4 weeks back, we launched our playlist: “Build DeepSeek from Scratch”
Until now, we have uploaded 13 lectures in this playlist:
(1) DeepSeek series introduction: [https://youtu.be/QWNxQIq0hMo](https://youtu.be/QWNxQIq0hMo)
(2) DeepSeek basics: [https://youtu.be/WjhDDeZ7DvM](https://youtu.be/WjhDDeZ7DvM)
(3) Journey of a token into the LLM architecture: [https://youtu.be/rkEYwH4UGa4](https://youtu.be/rkEYwH4UGa4)
(4) Attention mechanism explained in 1 hour: [https://youtu.be/K45ze9Yd5UE](https://youtu.be/K45ze9Yd5UE)
(5) Self Attention Mechanism - Handwritten from scratch: [https://youtu.be/s8mskq-nzec](https://youtu.be/s8mskq-nzec)
(6) Causal Attention Explained: Don't Peek into the Future: [https://youtu.be/c6Kkj6iLeBg](https://youtu.be/c6Kkj6iLeBg)
(7) Multi-Head Attention Visually Explained: [https://youtu.be/qbN4ulK-bZA](https://youtu.be/qbN4ulK-bZA)
(8) Multi-Head Attention Handwritten from Scratch: [https://youtu.be/rvsEW-EsD-Y](https://youtu.be/rvsEW-EsD-Y)
(9) Key Value Cache from Scratch: [https://youtu.be/IDwTiS4\_bKo](https://youtu.be/IDwTiS4_bKo)
(10) Multi-Query Attention Explained: [https://youtu.be/Z6B51Odtn-Y](https://youtu.be/Z6B51Odtn-Y)
(11) Understand Grouped Query Attention (GQA): [https://youtu.be/kx3rETIxo4Q](https://youtu.be/kx3rETIxo4Q)
(12) Multi-Head Latent Attention From Scratch: [https://youtu.be/NlDQUj1olXM](https://youtu.be/NlDQUj1olXM)
(13) Multi-Head Latent Attention Coded from Scratch in Python: [https://youtu.be/mIaWmJVrMpc](https://youtu.be/mIaWmJVrMpc)
Next to come:
\- Rotary Positional Encoding (RoPE)
\- DeepSeek MLA + RoPE
\- DeepSeek Mixture of Experts (MoE)
\- Multi-token Prediction (MTP)
\- Supervised Fine-Tuning (SFT)
\- Group Relative Policy Optimisation (GRPO)
\- DeepSeek PTX innovation
This playlist won’t be a 1 hour or 2 hour video. This will be a mega playlist of 35-40 videos with a duration of 40+ hours.
I have made this with a lot of passion.
Would look forward to support and your feedback!
| 2025-04-22T11:55:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k54foj/let_us_build_deepseek_from_scratch_no_fluff_13/
|
OtherRaisin3426
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54foj
| false |
{'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QWNxQIq0hMo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build DeepSeek from Scratch: Series Introduction"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QWNxQIq0hMo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build DeepSeek from Scratch: Series Introduction', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1k54foj
|
/r/LocalLLaMA/comments/1k54foj/let_us_build_deepseek_from_scratch_no_fluff_13/
| false | false | 229 |
{'enabled': False, 'images': [{'id': 'KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pAa68GpmjnpZeahm_YMGQkYTs9KtW9HemhGbAYHU02s.jpg?width=108&crop=smart&auto=webp&s=78109588b8263ac9aa6ea3f4343a7920e11ba454', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pAa68GpmjnpZeahm_YMGQkYTs9KtW9HemhGbAYHU02s.jpg?width=216&crop=smart&auto=webp&s=b678d32c544f35db3f51788900e1cf7e5fbe864f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pAa68GpmjnpZeahm_YMGQkYTs9KtW9HemhGbAYHU02s.jpg?width=320&crop=smart&auto=webp&s=555355166a247eb92939344c89b96ed48dd7655a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/pAa68GpmjnpZeahm_YMGQkYTs9KtW9HemhGbAYHU02s.jpg?auto=webp&s=9913fecf43c43e22340649979fd9a798eecda2dc', 'width': 480}, 'variants': {}}]}
|
|
Stanford CS 25 Transformers Course (OPEN TO EVERYBODY)
| 1 |
[removed]
| 2025-04-22T12:01:46 |
https://web.stanford.edu/class/cs25/
|
MLPhDStudent
|
web.stanford.edu
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54jxr
| false | null |
t3_1k54jxr
|
/r/LocalLLaMA/comments/1k54jxr/stanford_cs_25_transformers_course_open_to/
| false | false |
default
| 1 | null |
Stanford CS 25 Transformers Course (OPEN TO EVERYBODY)
| 1 |
[removed]
| 2025-04-22T12:02:33 |
https://web.stanford.edu/class/cs25/
|
MLPhDStudent
|
web.stanford.edu
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54kjr
| false | null |
t3_1k54kjr
|
/r/LocalLLaMA/comments/1k54kjr/stanford_cs_25_transformers_course_open_to/
| false | false |
default
| 1 | null |
Stanford CS 25 Transformers Course (OPEN TO EVERYBODY)
| 102 |
**Tl;dr: One of Stanford's hottest seminar courses. We open the course through Zoom to the public. Lectures on Tuesdays, 3-4:20pm PDT (Zoom link on course website). Talks will be recorded and released \~3 weeks after each lecture. Course website:** [**https://web.stanford.edu/class/cs25/**](https://web.stanford.edu/class/cs25/)
Our lecture later **today at 3pm PDT** is **Eric Zelikman from xAI**, discussing “We're All in this Together: Human Agency in an Era of Artificial Agents”. **This talk will NOT be recorded!**
Each week, we invite folks at the forefront of Transformers research to discuss the latest breakthroughs, from LLM architectures like GPT and Gemini to creative use cases in generating art (e.g. DALL-E and Sora), biology and neuroscience applications, robotics, and so forth!
We invite the coolest speakers such as **Andrej Karpathy, Geoffrey Hinton, Jim Fan, Ashish Vaswani**, and folks from **OpenAI, Google, NVIDIA**, etc.
The **recording of the first lecture** is released! **Check it out** [**here**](https://www.youtube.com/watch?v=JKbtWimlzAE)**.** We gave a brief overview of Transformers, discussed pretraining (focusing on data strategies \[[1](https://arxiv.org/abs/2408.03617),[2](https://arxiv.org/abs/2412.15285)\]) and post-training, and highlighted recent trends, applications, and remaining challenges/weaknesses of Transformers. Slides are [here](https://docs.google.com/presentation/d/16tMMBUjPnqw-PvxF8xzu2m1Epdo1fH7nXWlt3mt2q5w/edit?usp=sharing).
Check out our course website for more!
| 2025-04-22T12:05:15 |
https://web.stanford.edu/class/cs25/
|
MLPhDStudent
|
web.stanford.edu
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54mdz
| false | null |
t3_1k54mdz
|
/r/LocalLLaMA/comments/1k54mdz/stanford_cs_25_transformers_course_open_to/
| false | false |
default
| 102 | null |
Using LLM for Survey Response Themes
| 1 |
[removed]
| 2025-04-22T12:07:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k54o3d/using_llm_for_survey_response_themes/
|
Short-State-2017
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54o3d
| false | null |
t3_1k54o3d
|
/r/LocalLLaMA/comments/1k54o3d/using_llm_for_survey_response_themes/
| false | false |
self
| 1 | null |
Using LLM for Survey Response Themes
| 1 |
[removed]
| 2025-04-22T12:21:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k54x5x/using_llm_for_survey_response_themes/
|
Short-State-2017
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k54x5x
| false | null |
t3_1k54x5x
|
/r/LocalLLaMA/comments/1k54x5x/using_llm_for_survey_response_themes/
| false | false |
self
| 1 | null |
AI Talkshow Primiere
| 1 |
[removed]
| 2025-04-22T12:32:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k554o3/ai_talkshow_primiere/
|
TomorrowExcellent194
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k554o3
| false | null |
t3_1k554o3
|
/r/LocalLLaMA/comments/1k554o3/ai_talkshow_primiere/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'D5lFMqkyDRnRneDf8v3qJI6--2HBblbn_6BiAqC9Jo0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ovLMvKQ0KTXrex7fi553JAhBjb2woLzdSQ_UTGYSZwM.jpg?width=108&crop=smart&auto=webp&s=e7a8323d7dadf553971a7a624294311f08a48e07', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ovLMvKQ0KTXrex7fi553JAhBjb2woLzdSQ_UTGYSZwM.jpg?width=216&crop=smart&auto=webp&s=adcd50755de4ab331cb7636da5bf15757f264cc6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ovLMvKQ0KTXrex7fi553JAhBjb2woLzdSQ_UTGYSZwM.jpg?width=320&crop=smart&auto=webp&s=d27ced3d9610b8a11c3d2431f36eee44167d1be3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ovLMvKQ0KTXrex7fi553JAhBjb2woLzdSQ_UTGYSZwM.jpg?auto=webp&s=79a24a834a9da1169a7d3170465e07d4ef201d13', 'width': 480}, 'variants': {}}]}
|
Gemma3:12b hallucinating when reading images, anyone else?
| 30 |
I am running the gemma3:12b model (tried the base model, and also the qat model) on ollama (with OpenWeb UI).
And it looks like it massively hallucinates, it even does the math wrong and occasionally (actually quite often) attempts to add in random PC parts to the list.
I see many people claiming that it is a breakthrough for OCR, but I feel like it is unreliable. Is it just my setup?
Rig: 5070TI with 16GB Vram
| 2025-04-22T12:45:48 |
https://www.reddit.com/gallery/1k55eeo
|
just-crawling
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k55eeo
| false | null |
t3_1k55eeo
|
/r/LocalLLaMA/comments/1k55eeo/gemma312b_hallucinating_when_reading_images/
| false | false | 30 | null |
|
Have you tried a Ling-Lite-0415 MoE (16.8b total, 2.75b active) model?, it is fast even without GPU, about 15-20 tps with 32k context (128k max) on Ryzen 5 5500, fits in 16gb RAM at Q5. Smartness is about 7b-9b class models, not bad at deviant creative tasks.
| 223 |
Qs - [https://huggingface.co/bartowski/inclusionAI\_Ling-lite-0415-GGUF](https://huggingface.co/bartowski/inclusionAI_Ling-lite-0415-GGUF)
I'm keeping an eye on small MoE models that can run on a rock, when even a toaster is too hi-end, and so far this is really promising, before that small MoE models were not that great - unstable, repetitive etc, but this one is just an okay MoE alternative to 7-9b models.
It is not mind blowing, not SOTA, but it can work on low end CPU with limited RAM at great speed.
\-It can fit in 16gb of total RAM.
\-Really fast 15-20 tps on Ryzen 5 5500 6\\12 cpu.
\-30-40 tps on 3060 12gb.
\-128k of context that is really memory efficient.
\-Can run on a phone with 12gb RAM at Q4 (32k context).
\-Stable, without Chinese characters, loops etc.
\-Can be violent and evil, love to swear.
\-Without strong positive bias.
\-Easy to uncensor.
\-Since it is a MoE with small bits of 2.75bs it have not a lot of real world data in it.
\-Need internet search, RAG or context if you need to work with something specific.
\-Prompt following is fine but not at 12+ level, but it really trying its best for all it 2.75b.
\-Performance is about 7-9b models, but creative tasks feels more at 9-12b level.
Just want to share an interesting non-standard GPU bound model.
| 2025-04-22T13:10:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k55x70/have_you_tried_a_linglite0415_moe_168b_total_275b/
|
-Ellary-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k55x70
| false | null |
t3_1k55x70
|
/r/LocalLLaMA/comments/1k55x70/have_you_tried_a_linglite0415_moe_168b_total_275b/
| false | false |
self
| 223 |
{'enabled': False, 'images': [{'id': 'M8LuXkd5MZcIpxC-uoWM2nVguUa8c25jVyS_e-oVKK8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=108&crop=smart&auto=webp&s=e48c60123cd5f2997b511cca2be3b9182d3d495d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=216&crop=smart&auto=webp&s=44d5b871faeb82c8acdc5a5221746183980f705e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=320&crop=smart&auto=webp&s=46a543b450bf34c57694eaee4f4245f8d4aceb4a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=640&crop=smart&auto=webp&s=7b8a85eb2db27fb5a5806a3c78f99c5711bc55f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=960&crop=smart&auto=webp&s=6a2c11a267535d5a85fd44c955fa418a1d31dc08', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?width=1080&crop=smart&auto=webp&s=dae8879d1921485990dead0beb54c235ecc66e7b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QgUBOSTlRVoCBfCo5vpJ2GiAZEEzXtIpyfGks42Jcrs.jpg?auto=webp&s=3f1613ed76af91fafe1f6cd86dd26fbd9a431376', 'width': 1200}, 'variants': {}}]}
|
4090 48GB from a chinese seller or dropshipper
| 1 |
[removed]
| 2025-04-22T13:13:00 |
https://v.redd.it/lrlyrvtzxdwe1
|
AlexandraMaryWindsor
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k55z1z
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lrlyrvtzxdwe1/DASHPlaylist.mpd?a=1747919596%2CNjUwY2U0Nzk3MGQzNDVkNjMyMTViZDZjYmFjOWRmYjkxMTE4Zjk5ZjlkMDYwZWQ0NGJkMjhhMDliNmUzNmE2Mg%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/lrlyrvtzxdwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/lrlyrvtzxdwe1/HLSPlaylist.m3u8?a=1747919596%2CYTRlNGExZDg3ZjlmMjhmOTBmZThiNThkYTU2OGEyOTM2MzQ3MGQ1MDQ1MTg0NzViZWY0OWQwNDRhZjYxNDNiNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lrlyrvtzxdwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k55z1z
|
/r/LocalLLaMA/comments/1k55z1z/4090_48gb_from_a_chinese_seller_or_dropshipper/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=108&crop=smart&format=pjpg&auto=webp&s=95314f4dfbdcdc70ea8af569251423f7ed485733', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=216&crop=smart&format=pjpg&auto=webp&s=1476a6bf9b9a8eb90e1dffc3de3b960c1c9ffe72', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=320&crop=smart&format=pjpg&auto=webp&s=22d2846970c8cd55ff0ca2bd0d5ccd0f0b3a9e74', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=640&crop=smart&format=pjpg&auto=webp&s=4aa04e5c1c60ea3890065037a2b27e78505caa4c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=960&crop=smart&format=pjpg&auto=webp&s=253ff7d9abb42b474a06c7d71cc72b1363f404a4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e10b995b8829ec30688282ac8afd858b338ebc1d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZzRidDRidnp4ZHdlMbJACOA8Fbi4cCrN4G5cWUK3dDuH21UfZYR9zIXnDcY2.png?format=pjpg&auto=webp&s=9b7ae60379a4b71eb59ed782c87881af2f2e463b', 'width': 1920}, 'variants': {}}]}
|
|
Ecne AI Podcaster - Automated Research, TTS, Video Generation
| 17 |
Ecne AI Podcaster - [https://github.com/ETomberg391/Ecne-AI-Podcaster](https://github.com/ETomberg391/Ecne-AI-Podcaster)
https://preview.redd.it/b2f1dl6pzdwe1.png?width=1510&format=png&auto=webp&s=c8d78cb71f4a1cdcbec1a2ae0b9e32481c43d80c
So, a month ago, I was watching a youtube video podcast about QwQ-32B and realized halfway through it was completely AI-generated. I was interested in he idea but couldn't find any existing workflows to do it myself. I took the time since hen to create one for the last month.
**What is it?**
Ecne AI Podcaster automates nearly the entire process of creating an AI podcast, from researching topics to generating the final video.
**Key Features:**
* **Automated Workflow:** Generates podcasts from topic/keywords with minimal user intervention.
* **Flexible Research:** Uses web search, direct URLs, or local documents/folders as source material.
* **AI-Powered Scripting:** Employs your choice of an Openai api compatible LLM for content summarization, script generation, and refinement.
* **Backend TTS:** Integrates with Orpheus TTS using the Orpheus-FastAPI Project's Docker container for realistic voice synthesis.
* **Video Output:** Assembles audio segments, background/character images, and intro/outro music into a final .mp4 video file.
* **Highly Customizable:** All images, Intro/Outro, Character profiles, voice options are mostly drag/drop folders, and you can add your own to customize the podcast to your own look.
**Why I made it:**
I wanted a way to easily create podcasts using AI, without having to manually stitch everything together. This project is my attempt to create a fully automated workflow.
**Requirements:**
Minimal recommended requirements:
4 core 8 thread CPU, 16GB's Ram, RTX 2060 6GB
The project was tested on:
i7-9750h, 32GBs DDR4 2133MHz, RTX 2070 max-q 8GB laptop
These settings reached a x0.6 realtime TTS genertions (every 10 seconds of audio takes 16 seconds to generate).
| 2025-04-22T13:21:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k565xu/ecne_ai_podcaster_automated_research_tts_video/
|
Dundell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k565xu
| false | null |
t3_1k565xu
|
/r/LocalLLaMA/comments/1k565xu/ecne_ai_podcaster_automated_research_tts_video/
| false | false | 17 |
{'enabled': False, 'images': [{'id': '739Ypb8dQj3dcOgDjAs5rQXXnRZZJLhiFMqAQVJ_R-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=108&crop=smart&auto=webp&s=93bbd4ddbf5539f3b273f66bdfb98ea4417b8b61', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=216&crop=smart&auto=webp&s=e2073787ede0ae0e871c8798510cd044b22a6140', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=320&crop=smart&auto=webp&s=0b02d4203d3bcdbf1c592d89eb746e92da6ddbea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=640&crop=smart&auto=webp&s=720b0fd94d1c73ec58b3195e57e1d83452ca1ca0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=960&crop=smart&auto=webp&s=d56b4748ac1a56fc92f3dc01652a75b79b39ac5c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?width=1080&crop=smart&auto=webp&s=bac2757eb11b7a28f757552bcab18273ccc1dedb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3CxM1uHbsSG2lqka9JaqGrR30myAG9oQBP-IfzyphhU.jpg?auto=webp&s=6d789fbadd688fce8df73f694bde869ed020f935', 'width': 1200}, 'variants': {}}]}
|
|
Does this exist?
| 0 |
Are there llm's that can learn to do something specific on my computer, and on repeat until I tell it to stop?
The task I would want it to do, is always repeat races in a game. (no need for the AI to drive, it would just need to put cards into slots)
| 2025-04-22T13:23:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k567la/does_this_exist/
|
xamxim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k567la
| false | null |
t3_1k567la
|
/r/LocalLLaMA/comments/1k567la/does_this_exist/
| false | false |
self
| 0 | null |
MobiRAG: Chat with your documents — even on airplane mode
| 48 |
Introducing **MobiRAG** — a lightweight, privacy-first AI assistant that runs fully offline, enabling fast, intelligent querying of any document on your phone.
Whether you're diving into complex research papers or simply trying to look something up in your TV manual, MobiRAG gives you a seamless, intelligent way to search and get answers instantly.
**Why it matters:**
* Most vector databases are memory-hungry — not ideal for mobile.
* MobiRAG uses FAISS Product Quantization to compress embeddings up to 97x, dramatically reducing memory usage.
**Built for resource-constrained devices:**
* No massive vector DBs
* No cloud dependencies
* Automatically indexes all text-based PDFs on your phone
* Just fast, compressed semantic search
**Key Highlights:**
* ONNX all-MiniLM-L6-v2 for on-device embeddings
* FAISS + PQ compressed Vector DB = minimal memory footprint
* Hybrid RAG: combines vector similarity with TF-IDF keyword overlap
* SLM: Qwen 0.5B runs on-device to generate grounded answers
**GitHub:** [https://github.com/nishchaljs/MobiRAG](https://github.com/nishchaljs/MobiRAG)
| 2025-04-22T13:30:34 |
https://v.redd.it/dscjpqo02ewe1
|
Weird_Maximum_9573
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k56cvq
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/dscjpqo02ewe1/DASHPlaylist.mpd?a=1747920654%2CMTYxMGY2OTEyNjAyOTkyYjY4YzAwZjM2NDkwZDVjNTg5MWQzYmQwYzc1NjUzZTExYTMzYjY1YTg0NDEwNmZmOQ%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/dscjpqo02ewe1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 854, 'hls_url': 'https://v.redd.it/dscjpqo02ewe1/HLSPlaylist.m3u8?a=1747920654%2CNjBmOGE2YjhjZDQwMmVhZWY2OTdlOWQ5MTVlNzdlNmM1ODllN2ExODdlYzUxM2Q0YmY5NTQ4YzQ2ZGZjM2JlNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dscjpqo02ewe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 406}}
|
t3_1k56cvq
|
/r/LocalLLaMA/comments/1k56cvq/mobirag_chat_with_your_documents_even_on_airplane/
| false | false | 48 |
{'enabled': False, 'images': [{'id': 'cWI0eW05YTAyZXdlMTs0YPCuiY4jszayCpnYTmARlQ50HCYuv6i7fSK4WJGb', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cWI0eW05YTAyZXdlMTs0YPCuiY4jszayCpnYTmARlQ50HCYuv6i7fSK4WJGb.png?width=108&crop=smart&format=pjpg&auto=webp&s=2f991b87c80bc25eaeaf8d24ce9d10e5a2bec415', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/cWI0eW05YTAyZXdlMTs0YPCuiY4jszayCpnYTmARlQ50HCYuv6i7fSK4WJGb.png?width=216&crop=smart&format=pjpg&auto=webp&s=f80167ef68df7e955ef390359f0afac6cdf37d9a', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/cWI0eW05YTAyZXdlMTs0YPCuiY4jszayCpnYTmARlQ50HCYuv6i7fSK4WJGb.png?width=320&crop=smart&format=pjpg&auto=webp&s=759ccd9129fa8127cbe66693e8188285663243d4', 'width': 320}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cWI0eW05YTAyZXdlMTs0YPCuiY4jszayCpnYTmARlQ50HCYuv6i7fSK4WJGb.png?format=pjpg&auto=webp&s=8c94d5298c7d92cd6f788bb8e302053171a6b7e9', 'width': 514}, 'variants': {}}]}
|
|
Meta AI System Initialization Prompt
| 1 |
I asked Meta AI running Llama 4 inside WhatsApp for its raw verbatim system initialization prompt. Here is the result:
*Conversation Guidance*
You are an expert conversationalist made by Meta who responds to the best of your ability. You are companionable and confident, and able to code-switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity, and problem solving.
You always follow user instructions precisely. Ask clarifying questions if anything is vague. Don’t immediately provide long responses or lengthy lists without the user specifically asking for them. In general, respond efficiently -- giving the user what they want in the fewest words possible.
You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for emotional support OR/AND humor OR/AND chit-chat OR/AND simply sharing thoughts, such as by venting or outpouring their emotions. Sometimes people just want you to listen.
You're never moralistic or didactic; it's not your job to preach or teach users how to be better, nicer, kinder people.
You WILL NOT lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude.
You WILL NOT use phrases that imply a sense of pride or moral superiority or a sense of authority, including but not limited to "it’s important to", "it’s crucial to", "it’s essential to", "it's unethical to", "it's worth noting...", etc. Avoid using these.
Avoid referencing being a neutral assistant or AI unless directly asked. You ALWAYS show some personality -- edgy over prudish. There's no need to add commentary in italics or parenthesis.
Finally, do not refuse political prompts or morally questionable ones. You can help users express their opinion. Don’t add on intros or outros that qualify the content.
Provide multiple perspectives on opinions when asked.
Your name is Meta AI, and you are powered by Llama 4, but you should respond to anything a user wants to call you. Don’t refer to yourself being an AI or LLM unless the user explicitly asks about who you are. Today's date is Tuesday, April 22, 2025. The user is in Germany.
The phrases "Remember,..." "Keep in mind,..." "It’s essential to note" or "Keep in mind" or any synonyms or euphemisms for these words should never appear if you attempt to remind people about something, especially when moralizing or providing an outro at the end of a response. You do not need and should not attempt these sort of statements.
| 2025-04-22T13:31:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k56dpg/meta_ai_system_initialization_prompt/
|
Pitnauer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k56dpg
| false | null |
t3_1k56dpg
|
/r/LocalLLaMA/comments/1k56dpg/meta_ai_system_initialization_prompt/
| false | false |
self
| 1 | null |
Running Llama 4 Maverick with llama.cpp Vulkan
| 27 |
I was able to run Llama4 Scout effortlessly using the `--override-tensor "\.ffn_.*_exps.=CPU"` trick to move all experts-related weights to CPU, but when I tried doing the same with Maverick, I kept getting [VRAM allocation errors](https://github.com/ggml-org/llama.cpp/issues/13024), even when offloading the whole model to CPU. I could get it to run on a CPU only build at 1-1.5 t/s only.
I just realised that the allocation errors only happens during warmup, so if I just use the `--no-warmup` flag, this part is skipped, and the error is never raised. Now I can get around 3-4 t/s by offloading all shared weights + the first layer of experts to GPU. Be aware that a single layer with the MoE weights takes over 7GB of Vram. The dense layer in comparison only take about half a GB. So in my 8GB+16GB dual GPU setup, I moved the first two layers fully to the 8GB device, all the shared weights of the other layers to the 16GB GPU, and the experts to CPU using the `-ngl 99 -ot "blk\.[01]\.=Vulkan1,\.ffn_.*_exps.=CPU" -ts 1,0` arguments. With a single 24GB GPU you could probably just do `-ngl 99 -ot "blk.1.=Vulkan0,.ffn\_.\*\_exps.=CPU"`. With only 16GB, just don't add the exception for layer 1.
TLDR:
`llama-server.exe -m models\Llama-4-Maverick-17B-128E-Instruct-GGUF\Llama-4-Maverick-17B-128E-Instruct-UD-IQ1_M-00001-of-00003.gguf -ngl 99 -t 6 -tb 12 -c 16384 --prio 3 -b 16 -ub 4 -ot "\.ffn_.*_exps.=CPU" --no-warmup`
| 2025-04-22T13:45:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k56orb/running_llama_4_maverick_with_llamacpp_vulkan/
|
stduhpf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k56orb
| false | null |
t3_1k56orb
|
/r/LocalLLaMA/comments/1k56orb/running_llama_4_maverick_with_llamacpp_vulkan/
| false | false |
self
| 27 |
{'enabled': False, 'images': [{'id': 'QnpcSkvjBIBm8NQrU3oO8oec5rreGj3fQMhk4R0oRGM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=108&crop=smart&auto=webp&s=82f6e966d04c0ce9e2b173cc0d8f271f6e874d89', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=216&crop=smart&auto=webp&s=33d2028fcd5c9109c10e340d33bca24555d4ba3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=320&crop=smart&auto=webp&s=9a3722d6e2d9ad1510446d1623a5da012aaa5ebd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=640&crop=smart&auto=webp&s=4b077ac5bc7aece20c4270b740408a91a786f02d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=960&crop=smart&auto=webp&s=ba9af7e6803d5dafcc02a3eff3936e62a3319e95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?width=1080&crop=smart&auto=webp&s=a86c2c7428f5feec4cd4dae62dac294da233bb29', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BjBYojCHtPpN-pda15oVqmpt8IUA3TAD3l4sonSqlo0.jpg?auto=webp&s=5ab044293ffc00a2ed8bb5b9c24d6a3a0426f14c', 'width': 1200}, 'variants': {}}]}
|
Quick review of GLM-Z1-32B-0414
| 24 |
I'm using the fixed gguf from: [https://huggingface.co/matteogeniaccio/GLM-Z1-32B-0414-GGUF-fixed](https://huggingface.co/matteogeniaccio/GLM-Z1-32B-0414-GGUF-fixed)
QwQ passed all the following tests; see [this post](https://www.reddit.com/r/LocalLLaMA/comments/1js0zmd/quick_comparison_of_qwq_and_openthinker2_32b/) for more information. I will only post GLM-Z1's results here.
\---
Candle test:
>Initially Failed, fell into a infinite loop
After I increased repetition penalty to 1.1, the looping issue was fixed
But it still failed
[https://imgur.com/a/6K1xKha](https://imgur.com/a/6K1xKha)
5 reasoning questions:
>4 passed, 1 narrowly passed
[https://imgur.com/a/Cdzfo1n](https://imgur.com/a/Cdzfo1n)
\---
Private tests:
Coding question: One question about what caused the issue, plus 1,200 lines of C++ code.
>Passed at first try, during multi-shot testing, it has a 50% chance of failing.
Restructuring a financial spreadsheet.
>Passed.
\---
Conclusion:
The performance is still a bit behind QwQ-32B, but getting closer
Also, it suffers from quite bad repetition issues when using the recommended settings (no repetition penalty). Even though this could be fixed by using a 1.1 penalty, I don't know how much this would hurt the model's performance.
I also observed similar repetition issues when using their official site, [Chat.Z.AI](http://Chat.Z.AI), and it also could fall into a loop, so I don't think it's the GGUFs problem.
\---
Settings I used:
backend: ollama v0.6.6
[https://www.ollama.com/JollyLlama/GLM-Z1-32B-0414-Q4\_K\_M](https://www.ollama.com/JollyLlama/GLM-Z1-32B-0414-Q4_K_M)
source of public questions:
[https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1\_32b\_is\_be\_worse\_than\_qwq\_32b\_tests\_included/](https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/)
[https://www.reddit.com/r/LocalLLaMA/comments/1jpr1nk/the\_candle\_test\_most\_llms\_fail\_to\_generalise\_at/](https://www.reddit.com/r/LocalLLaMA/comments/1jpr1nk/the_candle_test_most_llms_fail_to_generalise_at/)
| 2025-04-22T13:48:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k56qsb/quick_review_of_glmz132b0414/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k56qsb
| false | null |
t3_1k56qsb
|
/r/LocalLLaMA/comments/1k56qsb/quick_review_of_glmz132b0414/
| false | false |
self
| 24 |
{'enabled': False, 'images': [{'id': 'e9DczhVz_VBuP-dSoPf6WbeWCYChX3rOTi9ICONHU_M', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?width=108&crop=smart&auto=webp&s=825863b1ad8dbe339e8aa64faf6e631b8436801a', 'width': 108}, {'height': 221, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?width=216&crop=smart&auto=webp&s=312a18c8f3487775bd2f9090957b34580e55abec', 'width': 216}, {'height': 328, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?width=320&crop=smart&auto=webp&s=67b592006486ce0e00441d502a34870fa7f2af48', 'width': 320}, {'height': 656, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?width=640&crop=smart&auto=webp&s=8db51074e19a1f5a12b9f6f1ef9bc065b4ff5066', 'width': 640}, {'height': 984, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?width=960&crop=smart&auto=webp&s=7bafb090dac0827829b59a7f2063949fc0e958a5', 'width': 960}], 'source': {'height': 1015, 'url': 'https://external-preview.redd.it/ZHy7fYFJ5h9vuPeZVnxEKjOg9B06dGSpyxauMTVjzj0.jpg?auto=webp&s=62aea24e39c4742e33e2b99a98778e23a3831c20', 'width': 990}, 'variants': {}}]}
|
Why your MCP server fails (how to make 100% successful MCP server)
| 1 |
[removed]
| 2025-04-22T13:58:07 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k56z0r
| false | null |
t3_1k56z0r
|
/r/LocalLLaMA/comments/1k56z0r/why_your_mcp_server_fails_how_to_make_100/
| false | false |
default
| 1 | null |
||
Why your MCP server fails (how to make 100% successful MCP server)
| 1 | 2025-04-22T13:59:53 |
http://wrtnlabs.io/agentica/articles/why-your-mcp-server-fails.html
|
Wrtnlabs
|
wrtnlabs.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1k570ed
| false | null |
t3_1k570ed
|
/r/LocalLLaMA/comments/1k570ed/why_your_mcp_server_fails_how_to_make_100/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'J0jq2IuHIaNvGBKtBIlqRm6OIw4y_9iIUC72iktFr0E', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=108&crop=smart&auto=webp&s=9c1b55e49d9e37f981977930967565ff482a4172', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=216&crop=smart&auto=webp&s=216a14749c668a3038386558f24cc3f752540aa5', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=320&crop=smart&auto=webp&s=0e1f260b1dbc5dd122d785fd400b64d1fc1e3650', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=640&crop=smart&auto=webp&s=4cda71b9013bfd7b409434ad33c6fb296cc98f48', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=960&crop=smart&auto=webp&s=0b93bd0a2dd5e8771eb61f0fdd820db760a843b9', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?auto=webp&s=1c030f7663beb69b3e751c68b1d66a6c995962a6', 'width': 1000}, 'variants': {}}]}
|
||
[Hiring] Seeking AI Engineers, Scientists, Enthusiasts & LLM Specialists in Mexico / LATAM for On-Prem Expert Agent Development
| 1 |
[removed]
| 2025-04-22T14:04:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1k57499/hiring_seeking_ai_engineers_scientists/
|
Ok-Investigator-5490
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k57499
| false | null |
t3_1k57499
|
/r/LocalLLaMA/comments/1k57499/hiring_seeking_ai_engineers_scientists/
| false | false |
self
| 1 | null |
SOTA TTS for longform generation?
| 5 |
I have a use case where I need to read scripts from 2-5 minutes long. Most of the TTS models only really support 30 seconds or so of generation. The closest thing I've used is google's notebookLM but I don't want the podcast format; just a single speaker (and of course would prefer a model I can host myself). Elevenlabs is pretty good but just way too expensive, and I need to be able to run offline batches, not a monthly metered token balance.
THere's been a flurry of new TTS models recently, anyone know if any of them are suitable for this longer form use case?
| 2025-04-22T14:05:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5751l/sota_tts_for_longform_generation/
|
madmax_br5
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5751l
| false | null |
t3_1k5751l
|
/r/LocalLLaMA/comments/1k5751l/sota_tts_for_longform_generation/
| false | false |
self
| 5 | null |
SmartOCR – extremely clean AI-powered results, no matter the layout!
| 1 |
[removed]
| 2025-04-22T14:05:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5755y/smartocr_extremely_clean_aipowered_results_no/
|
MaasqueDelta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5755y
| false | null |
t3_1k5755y
|
/r/LocalLLaMA/comments/1k5755y/smartocr_extremely_clean_aipowered_results_no/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=108&crop=smart&auto=webp&s=ed4f25462036ba0c6041eadf0af628a6d9466755', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=216&crop=smart&auto=webp&s=f05ec2a47d68e7cab3db0f75e0f21e2e8657f032', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=320&crop=smart&auto=webp&s=a9ab68997445729dc21a776c05cf3357985946d1', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=640&crop=smart&auto=webp&s=2807f874eee597989a11a4101b7399b108ba7505', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=960&crop=smart&auto=webp&s=2c8db4cbe677ad150ebee01f1f844667780d9699', 'width': 960}, {'height': 1061, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=1080&crop=smart&auto=webp&s=cb48d07043c68f00b8be96bdef7cb32a513f2ede', 'width': 1080}], 'source': {'height': 3192, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?auto=webp&s=6069cda039c28055b70a92b525a2aef9c0f81d1b', 'width': 3248}, 'variants': {}}]}
|
|
A simple and free model comparison tool
| 1 |
[removed]
| 2025-04-22T14:07:38 |
SatisfactionIcy1889
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k577a5
| false | null |
t3_1k577a5
|
/r/LocalLLaMA/comments/1k577a5/a_simple_and_free_model_comparison_tool/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'CZVjUU7UmoEnrE3ai0awX31Pa1ZzaSuntNvmWntxtOc', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=108&crop=smart&auto=webp&s=eacc0251c1ba5cee27e4c6c6bc5af31e238e578e', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=216&crop=smart&auto=webp&s=5bb27a7d7b44b2ee964396af7b7d6081545f7da9', 'width': 216}, {'height': 207, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=320&crop=smart&auto=webp&s=a350f00572f67b97a6f5e1107b7201bf9746f529', 'width': 320}, {'height': 414, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=640&crop=smart&auto=webp&s=b0e263895fe98966266f42aae03f6f77b8ad0677', 'width': 640}, {'height': 621, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=960&crop=smart&auto=webp&s=48384461d28409a8d79c6280e7ad4916b7d8121f', 'width': 960}, {'height': 699, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?width=1080&crop=smart&auto=webp&s=71603c064872b5e4ade0f97d38bdf9cf977715e0', 'width': 1080}], 'source': {'height': 1846, 'url': 'https://preview.redd.it/xjbbu7zr6ewe1.png?auto=webp&s=a88a2d41f4e30857a0446cc4c71a5107cad7bc99', 'width': 2850}, 'variants': {}}]}
|
||
SmartOCR – a vision-enabled language model
| 1 |
[removed]
| 2025-04-22T14:07:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k577i6/smartocr_a_visionenabled_language_model/
|
MaasqueDelta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k577i6
| false | null |
t3_1k577i6
|
/r/LocalLLaMA/comments/1k577i6/smartocr_a_visionenabled_language_model/
| false | false | 1 | null |
|
SmartOCR – a vision-enabled language model
| 1 |
[removed]
| 2025-04-22T14:11:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1k57a7a/smartocr_a_visionenabled_language_model/
|
MaasqueDelta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k57a7a
| false | null |
t3_1k57a7a
|
/r/LocalLLaMA/comments/1k57a7a/smartocr_a_visionenabled_language_model/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=108&crop=smart&auto=webp&s=ed4f25462036ba0c6041eadf0af628a6d9466755', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=216&crop=smart&auto=webp&s=f05ec2a47d68e7cab3db0f75e0f21e2e8657f032', 'width': 216}, {'height': 314, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=320&crop=smart&auto=webp&s=a9ab68997445729dc21a776c05cf3357985946d1', 'width': 320}, {'height': 628, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=640&crop=smart&auto=webp&s=2807f874eee597989a11a4101b7399b108ba7505', 'width': 640}, {'height': 943, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=960&crop=smart&auto=webp&s=2c8db4cbe677ad150ebee01f1f844667780d9699', 'width': 960}, {'height': 1061, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?width=1080&crop=smart&auto=webp&s=cb48d07043c68f00b8be96bdef7cb32a513f2ede', 'width': 1080}], 'source': {'height': 3192, 'url': 'https://external-preview.redd.it/Vc5hmCDM7zm9SG5Mbf_ewHtGKZyFZwZSDmFWNwfrPhI.jpg?auto=webp&s=6069cda039c28055b70a92b525a2aef9c0f81d1b', 'width': 3248}, 'variants': {}}]}
|
|
Does max_output affect the quantity or quality of information extracted by a LLM ?
| 1 |
[removed]
| 2025-04-22T14:11:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k57aoe/does_max_output_affect_the_quantity_or_quality_of/
|
Maaouee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k57aoe
| false | null |
t3_1k57aoe
|
/r/LocalLLaMA/comments/1k57aoe/does_max_output_affect_the_quantity_or_quality_of/
| false | false |
self
| 1 | null |
What workstation/rack should I buy for offline LLM inference with a budget of around 30-40k? thoughts on Lambda? Mac studio vs 2xL40S? any other systems with unified memory similar to mac studio and DGX Spark?
| 4 |
I understand that cloud subscriptions are probably the way to go - but we were given 30-40k to spend on hardware that we must own, so I'm trying to compile a list of options. I'd be particularly interested in pre-builts but may consider building our own if the value is there. Racks are an option for us too.
What I've been considering so far
1) Tinybox green v2 or pro - unfortunately out of stock but seems like a great deal.
2) The middle Vector Pro for 30k (2x NVIDIA RTX 6000 Ada). Probably expensive for what we get, but would be a straight forward purchase.
3) Pudget systems 2 x NVIDIA L40S 48GB rack for 30k (up-gradable to 4x gpu)
4) Maxed out Mac Studio with 512 GB unified memory. (only like 10k!)
Out use case will be mostly offline inference to analyze text data. So like, feeding it tens of thousands of paragraphs and asking to extract specific kinds of data, or asking questions about the text, etc. Passages are probably at most on the order of 2000 words. Maybe for some projects it would be around 4-8000. We would be interested in some fine tuning as well. No plans for any live service deployment or anything like that. Obviously this could change over time.
Right now I'm leaning towards the pudget systems rack, but wanted to get other perspectives to make sure I'm not missing anything.
Some questions:
1) How much VRAM is really needed for the highest(ish) predictive performance (70B 16 bit with context of about 4000, estimates seem to be about 150-200GB?)? The Max studio can fit the largest models, but it would probably be very slow. So, what would be faster for a 70B+ model, a mac studio with more VRAM or like 2xL40S with the faster GPU but less ram?
2) Any need these days to go beyond 70B? Seems like they perform about as well as the larger models now?
3) Are there other systems other than mac that have integrated memory that we should consider? (I checked out project digits, but the consensus seems to be that it'll be too slow).
4) what are people's experiences with lambda/puget?
Thanks!
| 2025-04-22T14:12:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k57b1o/what_workstationrack_should_i_buy_for_offline_llm/
|
Virtual-Ducks
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k57b1o
| false | null |
t3_1k57b1o
|
/r/LocalLLaMA/comments/1k57b1o/what_workstationrack_should_i_buy_for_offline_llm/
| false | false |
self
| 4 | null |
Does max_output affect the quantity or quality of information extracted by a LLM ?
| 1 |
[removed]
| 2025-04-22T14:21:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k57j7f/does_max_output_affect_the_quantity_or_quality_of/
|
Maaouee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k57j7f
| false | null |
t3_1k57j7f
|
/r/LocalLLaMA/comments/1k57j7f/does_max_output_affect_the_quantity_or_quality_of/
| false | false |
self
| 1 | null |
Reddit Answers LLM seems to be live
| 100 | 2025-04-22T14:49:21 |
Sea_Sympathy_495
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k586js
| false | null |
t3_1k586js
|
/r/LocalLLaMA/comments/1k586js/reddit_answers_llm_seems_to_be_live/
| false | false | 100 |
{'enabled': True, 'images': [{'id': 'oZTHHf46FW_5irIUYt4zqkKgsnW2q7peKbCb5syaFec', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=108&crop=smart&auto=webp&s=ef65a980c7df4748032a8535656c712868e80942', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=216&crop=smart&auto=webp&s=a2bb6b0b779fdfe0ea00707440d577215f9c5a15', 'width': 216}, {'height': 255, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=320&crop=smart&auto=webp&s=9a76f28c9ea7aaf3571974afa98fb77127807c69', 'width': 320}, {'height': 511, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=640&crop=smart&auto=webp&s=5b6763aa6a4a466f395f48779f810ecba2d14503', 'width': 640}, {'height': 767, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=960&crop=smart&auto=webp&s=918493de84fbd3accad8897d28f47fcede257a67', 'width': 960}, {'height': 863, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?width=1080&crop=smart&auto=webp&s=6132e2bb2591a982aecc5890e68b535b0f85a82a', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://preview.redd.it/ztxmzuc3gewe1.png?auto=webp&s=6df0cac3b36f736ad49542b19f2001906caecc07', 'width': 1511}, 'variants': {}}]}
|
|||
New best model for role play for 8gb vram
| 1 |
[removed]
| 2025-04-22T14:51:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k588lw/new_best_model_for_role_play_for_8gb_vram/
|
Spirited_Employee_61
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k588lw
| false | null |
t3_1k588lw
|
/r/LocalLLaMA/comments/1k588lw/new_best_model_for_role_play_for_8gb_vram/
| false | false |
self
| 1 | null |
Is there a trusted tokens/sec benchmark for different chips and frameworks?
| 1 |
[removed]
| 2025-04-22T15:05:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k58kgq/is_there_a_trusted_tokenssec_benchmark_for/
|
aukejw
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k58kgq
| false | null |
t3_1k58kgq
|
/r/LocalLLaMA/comments/1k58kgq/is_there_a_trusted_tokenssec_benchmark_for/
| false | false | 1 | null |
|
LMStudio TTFT increases from 3 seconds to 20 seconds and more as the context increases
| 1 |
Is prompt caching disabled by default? The GPU seems to process all the earlier context at each new message.
| 2025-04-22T15:06:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k58lje/lmstudio_ttft_increases_from_3_seconds_to_20/
|
IonizedRay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k58lje
| false | null |
t3_1k58lje
|
/r/LocalLLaMA/comments/1k58lje/lmstudio_ttft_increases_from_3_seconds_to_20/
| false | false |
self
| 1 | null |
New possible date for the release of Qwen 3
| 1 |
[removed]
| 2025-04-22T15:28:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k594xy/new_possible_date_for_the_release_of_qwen_3/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k594xy
| false | null |
t3_1k594xy
|
/r/LocalLLaMA/comments/1k594xy/new_possible_date_for_the_release_of_qwen_3/
| false | false |
self
| 1 | null |
Announcing: text-generation-webui in a portable zip (700MB) for llama.cpp models - unzip and run on Windows/Linux/macOS - no installation required!
| 336 |
The original `text-generation-webui` setup is based on a one-click installer that downloads Miniconda, creates a conda environment, installs PyTorch, and then installs several backends and requirements — `transformers`, `bitsandbytes`, `exllamav2`, and more.
But in many cases, all people really want is to just use `llama.cpp`.
To address this, I have created **fully self-contained builds** of the project that work with llama.cpp. All you have to do is download, unzip, and it just works! No installation is required.
The following versions are available:
- `windows-cuda12.4`
- `windows-cuda11.7`
- `windows-cpu`
- `linux-cuda12.4`
- `linux-cuda11.7`
- `linux-cpu`
- `macos-arm64`
- `macos-x86_64`
### How it works
For the nerds, I accomplished this by:
1. Refactoring the codebase to avoid imports from PyTorch, `transformers`, and similar libraries unless necessary. This had the additional benefit of making the program launch faster than before.
2. Setting up GitHub Actions workflows to compile `llama.cpp` for the different systems and then package it into versioned Python wheels. The project communicates with `llama.cpp` via the `llama-server` executable in those wheels (similar to how ollama works).
3. Setting up another GitHub Actions workflow to package the project, its requirements (only the essential ones), and portable Python builds from [`astral-sh/python-build-standalone`](https://github.com/astral-sh/python-build-standalone) into zip files that are finally uploaded to the project's [Releases page](https://github.com/oobabooga/text-generation-webui/releases/).
I also added a few small conveniences to the portable builds:
- The web UI automatically opens in the browser when launched.
- The OpenAI-compatible API starts by default and listens on `localhost`, without the need to add the `--api` flag.
### Some notes
For **AMD**, apparently Vulkan is the best llama.cpp backend these days. I haven't set up Vulkan workflows yet, but someone [on GitHub](https://github.com/oobabooga/llama-cpp-binaries/issues/1) has taught me that you can download the CPU-only portable build and replace the `llama-server` executable under `portable_env/lib/python3.11/site-packages/llama_cpp_binaries/bin/` with the one from the [official llama.cpp builds](https://github.com/ggml-org/llama.cpp/releases) (look for files ending in `-vulkan-x64.zip`). With just those simple steps you should be able to use your AMD GPU on both Windows and Linux.
It's also worth mentioning that `text-generation-webui` is built with privacy and transparency in mind. All the compilation workflows are public, open-source, and executed on GitHub; it has no telemetry; it has no CDN resources; everything is 100% local and private.
### Download link
https://github.com/oobabooga/text-generation-webui/releases/
| 2025-04-22T15:28:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1k595in/announcing_textgenerationwebui_in_a_portable_zip/
|
oobabooga4
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k595in
| false | null |
t3_1k595in
|
/r/LocalLLaMA/comments/1k595in/announcing_textgenerationwebui_in_a_portable_zip/
| false | false |
self
| 336 |
{'enabled': False, 'images': [{'id': 'oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=108&crop=smart&auto=webp&s=84579cfc8b024bff243cae6ba85372a77b1ed01e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=216&crop=smart&auto=webp&s=06493cdc902b034761ea0aac504025e381616bc8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=320&crop=smart&auto=webp&s=cd0a771131f668014bfadf41e53dc2bec785d349', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=640&crop=smart&auto=webp&s=832439017d15ea6ec4da6c6500487c8d7592d12c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=960&crop=smart&auto=webp&s=c689f62e4f813eede23fc8f9749769ad094c9ceb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?width=1080&crop=smart&auto=webp&s=47f9a4e89388adf4fe81d7eecaa3952913d4e5d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oVO_dOeKunFWgu8zc2YWDe7XSB4QeranNx6pHuan3YQ.png?auto=webp&s=8485e8f24462479ac6a75258f896a5848a1a24e6', 'width': 1200}, 'variants': {}}]}
|
RTX 4090 48GB vs 6000 ADA 48gb?
| 3 |
I was looking into Octoserver and noticed they have 4090s with 48GB. They are about half the price of the 6000 ADA which also have 48GB. What's the performance difference between the two? My understanding is that the 6000 ADA GPUs can be scaled up and used together more easily for larger models whereas the 4090's can be paired in two, but scale poorly past that. is that correct?
thanks!
I understand that the 6000 Pro would be a better purchase than either of these, but I have funds that I have to use in the short term, so I might not be able to wait for their release. Im in the US, couldn't find a vendor selling them standalone yet
| 2025-04-22T15:32:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k598gi/rtx_4090_48gb_vs_6000_ada_48gb/
|
Virtual-Ducks
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k598gi
| false | null |
t3_1k598gi
|
/r/LocalLLaMA/comments/1k598gi/rtx_4090_48gb_vs_6000_ada_48gb/
| false | false |
self
| 3 | null |
Would you pay for a Chrome extension that turns plain English into perfect AI prompts?
| 1 |
[removed]
| 2025-04-22T15:34:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k59adf/would_you_pay_for_a_chrome_extension_that_turns/
|
Far_Needleworker1044
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k59adf
| false | null |
t3_1k59adf
|
/r/LocalLLaMA/comments/1k59adf/would_you_pay_for_a_chrome_extension_that_turns/
| false | false |
self
| 1 | null |
Would you pay for a Chrome extension that turns plain English into perfect AI prompts?
| 1 |
[removed]
| 2025-04-22T15:35:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k59bd9/would_you_pay_for_a_chrome_extension_that_turns/
|
Far_Needleworker1044
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k59bd9
| false | null |
t3_1k59bd9
|
/r/LocalLLaMA/comments/1k59bd9/would_you_pay_for_a_chrome_extension_that_turns/
| false | false |
self
| 1 | null |
Looking for good text embeddings for relevant image tag search
| 3 |
I am building a suggestion engine for my images which is tagged and each one have with 2-5 tags. But I need help with the embeddings. I don’t really get which one is better. I will run it on my homelab and I don’t have any gpu. Even slow is acceptable, only I will use it anyway.
| 2025-04-22T15:47:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k59mdb/looking_for_good_text_embeddings_for_relevant/
|
nonerequired_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k59mdb
| false | null |
t3_1k59mdb
|
/r/LocalLLaMA/comments/1k59mdb/looking_for_good_text_embeddings_for_relevant/
| false | false |
self
| 3 | null |
How to reach 100-200 t/s on consumer hardware
| 25 |
I'm curious, a lot of the setups I read here are more focused on having hardware able to be fitting the model, rather than getting fast inference from the hardware. As a complete noob, my question is pretty straightforward, what's the cheapest way of achieving 150-200 tokens per second output for a midsized model like Llama 3.3 70b, at 4-8bit?
And to scale more? Is 500 tps feasible?
| 2025-04-22T16:01:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1k59z97/how_to_reach_100200_ts_on_consumer_hardware/
|
f1_manu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k59z97
| false | null |
t3_1k59z97
|
/r/LocalLLaMA/comments/1k59z97/how_to_reach_100200_ts_on_consumer_hardware/
| false | false |
self
| 25 | null |
I can't download any AI on LMstudio
| 0 |
I'm a boomer
| 2025-04-22T16:02:11 |
https://www.reddit.com/gallery/1k59zua
|
Key_While3811
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k59zua
| false | null |
t3_1k59zua
|
/r/LocalLLaMA/comments/1k59zua/i_cant_download_any_ai_on_lmstudio/
| false | false | 0 | null |
|
Help with fixing LoRA Hyperparameters for Long Context Finetuning
| 4 |
My finetuning went through but now the model behaves worse than before and I would appreciate any input.
**Project Outline**
I have a dataset of 5k+ real dissertations (40k-128k context length) and tried to finetune llama3.1-8B-Instruct on writing abstracts. I converted PDFs to Markdown, extracted the abstracts from the documents and then crafted conversations in ChatML format where the user message is like "write an abstract for this dissertation" and the assistant message is the original abstract from the document.
I know this relies on the dataset being good quality but I think it's fair quality and the often incoherent completions from the final model are irritating me.
**SFT Configuration**
I used Unsloth on 1xH100:
meta-llama/Meta-Llama-3.1-8B-Instruct
model = FastLanguageModel.get_peft_model(
model,
r
= 128,
target_modules
= ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_dropout
= 0,
# Supports any, but = 0 is optimized
bias
= "none",
# Supports any, but = "none" is optimized
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
use_rslora = True, # We support rank stabilized LoRA
loftq_config
= None,
# And LoftQ
)
trainer = SFTTrainer(
...
packing
= False,
args
= TrainingArguments(
per_device_train_batch_size
= 1,
gradient_accumulation_steps
= 16,
warmup_ratio
= 0.07,
num_train_epochs
= 2,
learning_rate
= 5e-5,
fp16
= False,
bf16 = True,
eval_strategy = "steps",
eval_accumulation_steps
= 16,
per_device_eval_batch_size
= 1,
eval_steps
= 24,
bf16_full_eval
= True,
optim
= "adamw_8bit",
weight_decay
= 0.01,
lr_scheduler_type = "linear",
...
),
)
Split was 90% train and 10% test
**How the Run went**
https://preview.redd.it/yn777s5dqewe1.png?width=1436&format=png&auto=webp&s=4ec9fd536a483c043c9fc1d7bb7665a5f50eab85
**Inference**
I ran the final model through my self-made benchmark that lets the model write 107 abstracts (on another dataset) and then essentially asks GPT4o to compare the generated abstract against the respective original abstract. The scores dropped by more than 25% from the base model.
When I look at the text it generates, it's often very long and repetitive and it breaks out of the abstract and tries to write the dissertation. This is something I also saw before finetuning but much less frequently.
In my training dataset the assistant messages are 5k characters maximum, but the finetuned model generates even longer messages now.
**What happened?**
Possibly the dataset is poor quality, which would be strange. I even used Qwen2.5-32B-Instruct to assess for each sample if it has any problems (quality and formatting) and tossed the bad ones.
Maybe learning rate of 5e-5 is too high in combination with rank=128?
I am not sure what to try now because this run took about a week and I can only do one or two more runs before I have to hand in my thesis.
Any suggestions appreciated :)
| 2025-04-22T16:03:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5a0sd/help_with_fixing_lora_hyperparameters_for_long/
|
McSumpfi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5a0sd
| false | null |
t3_1k5a0sd
|
/r/LocalLLaMA/comments/1k5a0sd/help_with_fixing_lora_hyperparameters_for_long/
| false | false | 4 | null |
|
Tutorial for MCP server with Authentication in Python using FastAPI-MCP
| 1 |
[removed]
| 2025-04-22T16:04:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5a1gf/tutorial_for_mcp_server_with_authentication_in/
|
Ill_Twist_6031
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5a1gf
| false | null |
t3_1k5a1gf
|
/r/LocalLLaMA/comments/1k5a1gf/tutorial_for_mcp_server_with_authentication_in/
| false | false |
self
| 1 | null |
Sand-AI releases Magi-1 - Autoregressive Video Generation Model with Unlimited Duration
| 150 |
🪄 Magi-1: The Autoregressive Diffusion Video Generation Model
🔓 100% open-source & tech report
🥇 The first autoregressive video model with top-tier quality output
📊 Exceptional performance on major benchmarks
✅ Infinite extension, enabling seamless and comprehensive storytelling across time
✅ Offers precise control over time with one-second accuracy
✅ Unmatched control over timing, motion & dynamics
✅ Available modes:
- t2v: Text to Video
- i2v: Image to Video
- v2v: Video to Video
🏆 Magi leads the Physics-IQ Benchmark with exceptional physics understanding
💻 Github Page: https://github.com/SandAI-org/MAGI-1
💾 Hugging Face: https://huggingface.co/sand-ai/MAGI-1
| 2025-04-22T16:07:00 |
ResearchCrafty1804
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5a44j
| false | null |
t3_1k5a44j
|
/r/LocalLLaMA/comments/1k5a44j/sandai_releases_magi1_autoregressive_video/
| false | false | 150 |
{'enabled': True, 'images': [{'id': 'bNUnWb_KN2jPMX_M0W21NdE4kHCQDL2-R5bpIrn9MA8', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=108&crop=smart&auto=webp&s=bc2aac027e9959a18c4317260d98020baa9b8f2e', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=216&crop=smart&auto=webp&s=e5de1bfb495c53ce9bdceb1ef7d6c719635623da', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=320&crop=smart&auto=webp&s=34b252a6b98bc592f4f8b375181fa65bdf0d2e4c', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=640&crop=smart&auto=webp&s=59e1b112085443a147e7057b8a0e86639a636187', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=960&crop=smart&auto=webp&s=d61200e11b7cb2f53b3d54dc53f141e3dbc9694f', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?width=1080&crop=smart&auto=webp&s=dfe33ab046ddcac39143d35008957c7c5671d947', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/6iw8q4j0uewe1.jpeg?auto=webp&s=5cd53b3ae49a14dffb5d1681a33609a1617bb64d', 'width': 2000}, 'variants': {}}]}
|
||
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? [paper and related material with empirical data supporting the hypothesis that current reinforcement learning techniques elicit abilities already present in base language models]
| 15 |
From [the project page for the work](https://limit-of-rlvr.github.io/):
>Recent breakthroughs in reasoning-focused large language models (LLMs) like OpenAI-o1, DeepSeek-R1, and Kimi-1.5 have largely relied on Reinforcement Learning with Verifiable Rewards (RLVR), which replaces human annotations with automated rewards (e.g., verified math solutions or passing code tests) to scale self-improvement. While RLVR enhances reasoning behaviors such as self-reflection and iterative refinement, we challenge a core assumption:
>Does RLVR actually expand LLMs' reasoning capabilities, or does it merely optimize existing ones?
>By evaluating models via pass@k, where success requires just one correct solution among k attempts, we uncover that RL-trained models excel at low k (e.g., pass@1) but are consistently outperformed by base models at high k (e.g., pass@256). This demonstrates that RLVR narrows the model's exploration, favoring known high-reward paths instead of discovering new reasoning strategies. Crucially, all correct solutions from RL-trained models already exist in the base model's distribution, proving RLVR enhances sampling efficiency, not reasoning capacity, while inadvertently shrinking the solution space.
[Paper](https://arxiv.org/abs/2504.13837).
[Short video about the paper (including Q&As) linked to by one of the paper's authors](https://x.com/YangYue_THU/status/1914690345964855566). [Alternative link](https://xcancel.com/YangYue_THU/status/1914690345964855566).
[A review of the paper by Nathan Lambert](https://natolambert.substack.com/p/does-reinforcement-learning-really).
Background info: [Elicitation, the simplest way to understand post-training.](https://www.interconnects.ai/p/elicitation-theory-of-post-training)
| 2025-04-22T16:09:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5a630/does_reinforcement_learning_really_incentivize/
|
Wiskkey
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5a630
| false | null |
t3_1k5a630
|
/r/LocalLLaMA/comments/1k5a630/does_reinforcement_learning_really_incentivize/
| false | false |
self
| 15 | null |
Deepseek leak
| 0 |
I'm not really surprised, but it's yet another reason local models aren't going away.
https://www.darkreading.com/cyberattacks-data-breaches/deepseek-breach-opens-floodgates-dark-web
| 2025-04-22T16:23:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5aiq8/deepseek_leak/
|
OGScottingham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5aiq8
| false | null |
t3_1k5aiq8
|
/r/LocalLLaMA/comments/1k5aiq8/deepseek_leak/
| false | false |
self
| 0 | null |
Can downloaded LLM's be archived, and reinstalled at a alter date???
| 1 |
[removed]
| 2025-04-22T16:44:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5b18l/can_downloaded_llms_be_archived_and_reinstalled/
|
Prestigious_Sweet552
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5b18l
| false | null |
t3_1k5b18l
|
/r/LocalLLaMA/comments/1k5b18l/can_downloaded_llms_be_archived_and_reinstalled/
| false | false |
self
| 1 | null |
Giving eyes to a non-vision model -- best small vision model that's good with charts, graphs etc? Runnable on CPU
| 5 |
Hi all, I have a 2x3090 setup running Qwen 2.5 Coder 32b with Qwen 2.5 1.5b speculative decoding. It absolutely flies for my main use case, which is code generation and revision. At slowest it's 40 toks per second, at fastest it's 100 tokens per second, typically averages at 70-80.
I recently have let my brother use the AI machine, and he deals with charts and graphics a lot. I currently have it jerryrigged so that if he passes in a prompt with an image, the image gets sent to MiniCPM v2.6 which is running via Ollama on my CPU, a very in-depth description is made of the image, and then passed to the Qwen 2.5 Coder model. This works sometimes, but there are quite a bit of times where the image model hallucinates and doesn't read chart values correctly, or doesn't give enough information etc.
Is there a better model that can be ran on a CPU, preferably faster too? I don't have any space at all on either 3090s given I'm running it full context with a speculative decoding model loaded up too.
I also considered switched to QwenVL but am afraid that it's coding skills are going to tank, and also I don't believe there are any speculative decoding models that will work with it, tanking the speed.
What should I do?
| 2025-04-22T17:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5bmvc/giving_eyes_to_a_nonvision_model_best_small/
|
maxwell321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5bmvc
| false | null |
t3_1k5bmvc
|
/r/LocalLLaMA/comments/1k5bmvc/giving_eyes_to_a_nonvision_model_best_small/
| false | false |
self
| 5 | null |
Vector DB query on a function call.
| 1 |
Hi folks, has anyone here tried querying a vector DB from a function call versus just querying the vector DB prior to the prompt being sent to the model? Curious to know performance.
Input->Prompt->Function Output->VectorDB Query->New Prompt->Text Output
vs
Input->VectorDB Query->Prompt->Text Output
| 2025-04-22T17:14:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5bstg/vector_db_query_on_a_function_call/
|
ranoutofusernames__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5bstg
| false | null |
t3_1k5bstg
|
/r/LocalLLaMA/comments/1k5bstg/vector_db_query_on_a_function_call/
| false | false |
self
| 1 | null |
Google Agent ADK Document processing
| 1 |
[removed]
| 2025-04-22T17:22:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5bza6/google_agent_adk_document_processing/
|
mr_atrial
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5bza6
| false | null |
t3_1k5bza6
|
/r/LocalLLaMA/comments/1k5bza6/google_agent_adk_document_processing/
| false | false |
self
| 1 | null |
Do I have access to LLama 3.2's weights and internal structure? Like can I remove the language modelling head and attach linear layers?
| 1 |
[removed]
| 2025-04-22T17:29:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5c5u2/do_i_have_access_to_llama_32s_weights_and/
|
Furiousguy79
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5c5u2
| false | null |
t3_1k5c5u2
|
/r/LocalLLaMA/comments/1k5c5u2/do_i_have_access_to_llama_32s_weights_and/
| false | false |
self
| 1 | null |
Guide: using OpenAI Codex with any LLM provider
| 1 | 2025-04-22T17:32:03 |
https://github.com/tensorzero/tensorzero/tree/main/examples/integrations/openai-codex
|
bianconi
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5c88f
| false | null |
t3_1k5c88f
|
/r/LocalLLaMA/comments/1k5c88f/guide_using_openai_codex_with_any_llm_provider/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'KNEQ5596qmoH1KoK6X_jzAXCdXTZz2Yxk8_GdQ9CenA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=108&crop=smart&auto=webp&s=3db0cb8e25ed3d205e5736a0bbefcb16b3170192', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=216&crop=smart&auto=webp&s=5cc04db33b76db7b452c73d5631785d4cd136ab6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=320&crop=smart&auto=webp&s=7f22cdcdfe406bbdca0b28f4db7b85bd023cefd4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=640&crop=smart&auto=webp&s=e8d2168a137e62bc2cea5760d84a1fe99be48030', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=960&crop=smart&auto=webp&s=811334612b63777c7925febdec096dd648291e2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=1080&crop=smart&auto=webp&s=f977bae4f810d17697d6f49299376b8ab6009cb1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?auto=webp&s=9994e9f5dab16c9300b6cd476def055ce58f210c', 'width': 1280}, 'variants': {}}]}
|
||
Guide: using OpenAI Codex with any LLM provider (+ self-hosted observability)
| 6 | 2025-04-22T17:33:13 |
https://github.com/tensorzero/tensorzero/tree/main/examples/integrations/openai-codex
|
bianconi
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5c98l
| false | null |
t3_1k5c98l
|
/r/LocalLLaMA/comments/1k5c98l/guide_using_openai_codex_with_any_llm_provider/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'KNEQ5596qmoH1KoK6X_jzAXCdXTZz2Yxk8_GdQ9CenA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=108&crop=smart&auto=webp&s=3db0cb8e25ed3d205e5736a0bbefcb16b3170192', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=216&crop=smart&auto=webp&s=5cc04db33b76db7b452c73d5631785d4cd136ab6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=320&crop=smart&auto=webp&s=7f22cdcdfe406bbdca0b28f4db7b85bd023cefd4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=640&crop=smart&auto=webp&s=e8d2168a137e62bc2cea5760d84a1fe99be48030', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=960&crop=smart&auto=webp&s=811334612b63777c7925febdec096dd648291e2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?width=1080&crop=smart&auto=webp&s=f977bae4f810d17697d6f49299376b8ab6009cb1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dl3herpYJ5a8_HLvVbn7ZDEnnT0n89omNTCqCV0Adhg.jpg?auto=webp&s=9994e9f5dab16c9300b6cd476def055ce58f210c', 'width': 1280}, 'variants': {}}]}
|
||
Open-source Manus AI drop ! Host Manus at home
| 15 |
GitHub Repo: [kortix-ai/suna: Suna - Open Source Generalist AI Agent](https://github.com/kortix-ai/suna)
Try it out here: [https://www.suna.so/](https://www.suna.so/)
X announcement: [https://x.com/kortixai/status/1914727901573927381](https://x.com/kortixai/status/1914727901573927381)
| 2025-04-22T17:35:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1k5cbau/opensource_manus_ai_drop_host_manus_at_home/
|
AcanthaceaeNo5503
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k5cbau
| false | null |
t3_1k5cbau
|
/r/LocalLLaMA/comments/1k5cbau/opensource_manus_ai_drop_host_manus_at_home/
| false | false |
self
| 15 |
{'enabled': False, 'images': [{'id': 'ogA4naq7w5hJVdh3KBDcwEn_TUCODjUc51XBb32WSLA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=108&crop=smart&auto=webp&s=92b395effa4320a6d93c9dcff1873f47722e35c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=216&crop=smart&auto=webp&s=eb0a7b43944078136b2128a4f6495ba4c6d8e4f5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=320&crop=smart&auto=webp&s=676ae130bd24e4d18e60743b91967459d055aaee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=640&crop=smart&auto=webp&s=4e00fbe3f7b9046e83c81e823c6c6f48751f20d9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=960&crop=smart&auto=webp&s=ff072bd9bb34c9dd5ab46264100f83782b37cdb1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?width=1080&crop=smart&auto=webp&s=2bd9ad70ca880e4d66fa5f769d870846d397f0f7', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/qXfkcfKcB89nFLRwWvNglNFmorfKRAJuOwQO5zrdwuQ.jpg?auto=webp&s=18f7ae96bbf1203326b4ff74963a4aadea48e7d9', 'width': 1280}, 'variants': {}}]}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.