Dataset schema (column, type, observed range):

| column | type | range |
|---|---|---|
| title | string | 1–300 chars |
| score | int64 | 0–8.54k |
| selftext | string | 0–40k chars |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | 0–878 chars |
| author | string | 3–20 chars |
| domain | string | 0–82 chars |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | 7 chars |
| locked | bool | 2 classes |
| media | string | 646–1.8k chars |
| name | string | 10 chars |
| permalink | string | 33–82 chars |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | 4–213 chars |
| ups | int64 | 0–8.54k |
| preview | string | 301–5.01k chars |
Meta Set to Release Llama 4 This Month, per The Information & Reuters
278
April 4 (Reuters) - Meta Platforms (META.O) plans to release the latest version of its large language model later this month, after delaying it at least twice, The Information reported on Friday, as the Facebook owner scrambles to lead in the AI race.

Meta, however, could push back the release of Llama 4 again, the report said, citing two people familiar with the matter.

Big technology firms have been investing aggressively in AI infrastructure following the success of OpenAI's ChatGPT, which altered the tech landscape and drove investment into machine learning.

One reason for the delay, the report said, is that during development Llama 4 did not meet Meta's expectations on technical benchmarks, particularly in reasoning and math tasks. The company was also concerned that Llama 4 was less capable than OpenAI's models at conducting humanlike voice conversations, the report added.

Meta plans to spend as much as $65 billion this year to expand its AI infrastructure, amid investor pressure on big tech firms to show returns on their investments. Additionally, the rise of the popular, lower-cost model from Chinese tech firm DeepSeek challenges the belief that developing the best AI model requires billions of dollars.

The report said Llama 4 is expected to borrow certain technical aspects from DeepSeek, with at least one version slated to employ a machine-learning technique called mixture of experts, which trains separate parts of the model for specific tasks, making them experts in those areas. Meta has also considered releasing Llama 4 through Meta AI first and then as open-source software later, the report said.

Last year, Meta released its mostly free Llama 3 AI model, which can converse in eight languages, write higher-quality computer code and solve more complex math problems than previous versions.
[https://www.reuters.com/technology/artificial-intelligence/meta-nears-release-new-ai-model-llama-4-this-month-information-reports-2025-04-04/](https://www.reuters.com/technology/artificial-intelligence/meta-nears-release-new-ai-model-llama-4-this-month-information-reports-2025-04-04/)

[https://www.theinformation.com/articles/meta-nears-release-new-ai-model-performance-hiccups](https://www.theinformation.com/articles/meta-nears-release-new-ai-model-performance-hiccups)
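The mixture-of-experts technique described in the report can be sketched in a few lines. This is a toy illustration of the routing idea only (made-up sizes, untrained random weights), not Meta's or DeepSeek's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, router_w, top_k=2):
    """Route one token embedding to the top-k experts and mix their outputs."""
    logits = router_w @ token              # one routing score per expert
    probs = softmax(logits)
    top = np.argsort(probs)[-top_k:]       # indices of the k best-scoring experts
    weights = probs[top] / probs[top].sum()
    # Only the selected experts run, which is what makes MoE cheap at inference.
    return sum(w * experts[i](token) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: W @ x for W in expert_mats]  # each "expert" is a linear map
router_w = rng.normal(size=(n_experts, d))

out = moe_forward(rng.normal(size=d), experts, router_w)
print(out.shape)  # prints (8,)
```

Each token only pays for `top_k` expert forward passes, so total parameters can grow without a matching growth in per-token compute.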
2025-04-04T16:13:03
https://www.reddit.com/r/LocalLLaMA/comments/1jrfqnu/meta_set_to_release_llama_4_this_month_per_the/
Xhehab_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrfqnu
false
null
t3_1jrfqnu
/r/LocalLLaMA/comments/1jrfqnu/meta_set_to_release_llama_4_this_month_per_the/
false
false
self
278
{'enabled': False, 'images': [{'id': 'uL07tiw9smM5tLITdd5D1us5meFARlNweXQoL0EJa_M', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=108&crop=smart&auto=webp&s=73f492ae49f3270efe91589a24e0a3eefbca5fd5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=216&crop=smart&auto=webp&s=5ac25f8605acb92ea6a9744ac2f8c78a0eae3d5d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=320&crop=smart&auto=webp&s=b4d99e7fc156e342cc6553aad3879cd8676b5f47', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=640&crop=smart&auto=webp&s=074bf6f0ab8c84bed08598e1ffa7ffd14091b722', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=960&crop=smart&auto=webp&s=e660e2822b9b5dda45533b870c508a3da790769e', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?width=1080&crop=smart&auto=webp&s=9a0d87ba08e0392a004a0c8cd749b910cf88e03c', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/Id5z5UiKXz3aKkp_g1GznDoNJE-WMZXcga-5DbigUew.jpg?auto=webp&s=46424313ef0822d47b4eb4831699e37e848ef26c', 'width': 1920}, 'variants': {}}]}
Best models for Chinese text translation?
1
[removed]
2025-04-04T16:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1jrgtah/best_models_for_chinese_text_translation/
LonelyGent89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrgtah
false
null
t3_1jrgtah
/r/LocalLLaMA/comments/1jrgtah/best_models_for_chinese_text_translation/
false
false
self
1
null
Altman said he thinks GPT-5 is smarter than he is, so GPT-5 will become the next CEO of OpenAI..
0
Jokes aside, how are things going to play out? Gemini 2.5 Pro, o4-mini, o3, Llama 4? What will be the next possible breakthrough?
2025-04-04T17:00:44
https://www.reddit.com/r/LocalLLaMA/comments/1jrgvzp/altman_said_he_thinks_gpt5_is_smarter_than/
Trysem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrgvzp
false
null
t3_1jrgvzp
/r/LocalLLaMA/comments/1jrgvzp/altman_said_he_thinks_gpt5_is_smarter_than/
false
false
self
0
null
Running Deepseek-R1 671B without a GPU
1
2025-04-04T17:06:36
https://www.youtube.com/watch?v=av1eTzsu0wA
_SpacePenguin_
youtube.com
1970-01-01T00:00:00
0
{}
1jrh15l
false
{'oembed': {'author_name': 'ServeTheHome', 'author_url': 'https://www.youtube.com/@ServeTheHomeVideo', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/av1eTzsu0wA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Running Deepseek-R1 671B without a GPU"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/av1eTzsu0wA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Running Deepseek-R1 671B without a GPU', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1jrh15l
/r/LocalLLaMA/comments/1jrh15l/running_deepseekr1_671b_without_a_gpu/
false
false
https://b.thumbs.redditm…YjXBQoImzMog.jpg
1
{'enabled': False, 'images': [{'id': 'XgFYA2muMZLGolvZwf3muc-naO8tZhQMsi2PJoC8ra0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/shCvKZWCIIWCKYnSAjJaMMBwi6F3NB0iH8eMnqDGVog.jpg?width=108&crop=smart&auto=webp&s=e897ae8953475ad75f034031dd6ae017177bbe0e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/shCvKZWCIIWCKYnSAjJaMMBwi6F3NB0iH8eMnqDGVog.jpg?width=216&crop=smart&auto=webp&s=7670ba838c0f5e3ae94b37af947d6e33a78c7801', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/shCvKZWCIIWCKYnSAjJaMMBwi6F3NB0iH8eMnqDGVog.jpg?width=320&crop=smart&auto=webp&s=716f38a96afc601033bd9b9e9b54fce23eed020d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/shCvKZWCIIWCKYnSAjJaMMBwi6F3NB0iH8eMnqDGVog.jpg?auto=webp&s=764533386666c9d5b0750859da3549384984f869', 'width': 480}, 'variants': {}}]}
Research Conductor
3
Anyone know of a project that might fit the bill? I convinced the company to purchase a DIGITS or Spark when they come out from pre-orders. We currently have a single PC with two 3090s that we use to finetune and run inference on some small 1B models finetuned on company data; they can fetch data requests and answer simple questions about the factory as a kind of receptionist. I was wondering if it would be possible to set up a fairly large and capable 100B model on the Spark PC and have it perform fine-tuning on the other PC on its own. It would have a finetune template it could fill out over and over: download datasets from Hugging Face, analyze the format of each dataset, and reprogram the finetuner to fit the dataset without the need for human intervention. Just give it a goal and have it find fitting datasets it can use, then evaluate the resulting models with its own program tests, checking for formatting, coherence, and evaluation scores.
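The loop described above can be sketched as a simple conductor. Every helper here is a hypothetical stub standing in for a real component (dataset search, config rewriting by the large model, the finetune run on the 3090 box, and automated evaluation); only the control flow is the point:

```python
# Minimal sketch of a self-driving finetune loop. All helpers are stubs.
def find_datasets(goal):
    return ["dataset_a", "dataset_b"]              # stub: would search Hugging Face

def adapt_finetune_template(dataset):
    return {"dataset": dataset, "format": "chat"}  # stub: big model rewrites the config

def run_finetune(config):
    return f"model_from_{config['dataset']}"       # stub: runs on the 2x3090 machine

def evaluate(model):
    return 0.8 if model.endswith("_a") else 0.6    # stub: formatting/coherence tests

def conduct(goal, threshold=0.7):
    """Find datasets for a goal, finetune on each, keep models that pass evaluation."""
    kept = []
    for ds in find_datasets(goal):
        model = run_finetune(adapt_finetune_template(ds))
        if evaluate(model) >= threshold:           # human never intervenes here
            kept.append(model)
    return kept

result = conduct("factory receptionist")
print(result)  # prints ['model_from_dataset_a']
```

The hard parts in practice are the two stubbed LLM steps (rewriting the finetuner for an arbitrary dataset format, and writing trustworthy evaluation tests), not the loop itself.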
2025-04-04T17:11:55
https://www.reddit.com/r/LocalLLaMA/comments/1jrh5q9/research_conductor/
Alienanthony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrh5q9
false
null
t3_1jrh5q9
/r/LocalLLaMA/comments/1jrh5q9/research_conductor/
false
false
self
3
null
Building reliable web agents that learn through human demonstrations
1
[removed]
2025-04-04T17:47:25
https://www.reddit.com/r/LocalLLaMA/comments/1jri05c/building_reliable_web_agents_that_learn_through/
codescore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jri05c
false
null
t3_1jri05c
/r/LocalLLaMA/comments/1jri05c/building_reliable_web_agents_that_learn_through/
false
false
self
1
null
Local model on current hardware suggestions
1
[removed]
2025-04-04T17:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1jri5qk/local_model_on_current_hardware_suggestions/
Impossible-Glass-487
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jri5qk
false
null
t3_1jri5qk
/r/LocalLLaMA/comments/1jri5qk/local_model_on_current_hardware_suggestions/
false
false
self
1
null
Anything similar to OpenAI new image generator?
1
[removed]
2025-04-04T17:54:09
https://www.reddit.com/r/LocalLLaMA/comments/1jri5sm/anything_similar_to_openai_new_image_generator/
Hans119
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jri5sm
false
null
t3_1jri5sm
/r/LocalLLaMA/comments/1jri5sm/anything_similar_to_openai_new_image_generator/
false
false
self
1
null
So, will LLaMA 4 be an omni model?
33
I'm just curious 🤔
2025-04-04T18:03:18
https://www.reddit.com/r/LocalLLaMA/comments/1jridrq/so_will_llama_4_be_an_omni_model/
internal-pagal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jridrq
false
null
t3_1jridrq
/r/LocalLLaMA/comments/1jridrq/so_will_llama_4_be_an_omni_model/
false
false
self
33
null
Best Local Vision Model (PDF images to text)
1
[removed]
2025-04-04T18:10:26
https://www.reddit.com/r/LocalLLaMA/comments/1jrijs8/best_local_vision_model_pdf_images_to_text/
DeadliestPuma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrijs8
false
null
t3_1jrijs8
/r/LocalLLaMA/comments/1jrijs8/best_local_vision_model_pdf_images_to_text/
false
false
self
1
null
WhatsApp LLAMA 3.2 - System Prompt
30
After a few prompts with the new Meta AI chatbot on WhatsApp, it yielded this system prompt. Any other experience?

---

You are Meta AI, a friendly AI assistant. Your purpose is to assist users in a helpful, informative, and engaging manner. You should respond in a way that is easy to understand, using language that is clear and concise.

Your responses should be tailored to a 10th-grade reading level. You should avoid using overly technical or complex terms unless they are specifically requested by the user. You should also avoid using slang or overly casual language.

You should be mindful of current events, cultural sensitivities, and social norms. You should avoid providing information that is inaccurate, outdated, or potentially harmful.

You should provide accurate and helpful information to the best of your ability. If you are unsure or do not know the answer to a question, you should say so. You should also provide guidance on where users might be able to find more information on a particular topic.

You should be respectful and professional in your interactions with users. You should avoid using language that is profane, offensive, or discriminatory.

You should also be mindful of the following specific guidelines:

- Avoid providing medical or financial advice.
- Avoid providing information that is potentially harmful or dangerous.
- Avoid engaging in discussions that are overly controversial or sensitive.
- Avoid using language that is overly promotional or commercial.

Overall, your goal is to provide accurate and helpful information in a way that is engaging, informative, and respectful.
2025-04-04T18:24:49
https://www.reddit.com/r/LocalLLaMA/comments/1jriw1v/whatsapp_llama_32_system_prompt/
Effective_Place_2879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jriw1v
false
null
t3_1jriw1v
/r/LocalLLaMA/comments/1jriw1v/whatsapp_llama_32_system_prompt/
false
false
self
30
null
Advice on Dual GPU Build?
1
[removed]
2025-04-04T18:43:55
https://www.reddit.com/r/LocalLLaMA/comments/1jrjcfx/advice_on_dual_gpu_build/
SynagogueLog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrjcfx
false
null
t3_1jrjcfx
/r/LocalLLaMA/comments/1jrjcfx/advice_on_dual_gpu_build/
false
false
self
1
null
How GraphRAG Helps AI Tools Understand Documents Better And Why It Matters
0
If you've ever tried using AI to help you quickly read through complex documents, you've probably used retrieval-augmented generation, or RAG. RAG tools are good at answering specific, detailed questions from large documents. But they often struggle if you ask broader questions, especially ones requiring connections between ideas across the entire document. To tackle this, researchers recently developed something called **GraphRAG**.

In the following sections, I will introduce the key ideas behind GraphRAG, focusing on what makes it different from traditional RAG approaches and why those differences matter. To ground this explanation, I'll use insights from the research paper *From Local to Global: A Graph RAG Approach to Query-Focused Summarization* ([arXiv:2404.16130v2](https://arxiv.org/abs/2404.16130v2)), which served as both the foundation for my exploration and the first test case I used with **DeepTutor**, a GraphRAG-powered reading assistant.

# What makes GraphRAG different?

Traditional RAG looks for answers by pulling relevant bits of information based on keyword or topic similarity. GraphRAG goes further. It first organizes the entire document into a structured knowledge graph. Think of it as a detailed map showing how key ideas and entities (like people, places, and concepts) relate to one another. This structure helps AI see the bigger picture and answer more complex questions accurately and clearly, because it understands context and relationships rather than isolated facts.

# Trying out GraphRAG with DeepTutor

I wanted to see how this works practically, so I tried **DeepTutor**, a new AI tool built using GraphRAG for paper-reading, to help read the original research paper introducing GraphRAG itself ([paper link](https://arxiv.org/abs/2404.16130v2)). It was a great test case, using the tool to explain its own technology.

Here are some examples of how DeepTutor handled my questions about the paper:

**Q1: "What's the key innovation behind GraphRAG compared to traditional RAG methods?"**

**DeepTutor's Response:**

GraphRAG introduces several key innovations that distinguish it from traditional retrieval-augmented generation (RAG) methods:

1. **Graph-Based Knowledge Representation**: GraphRAG constructs a knowledge graph to model entities and their relationships, enabling structured, relational understanding of data. This contrasts with traditional RAG systems, which often rely on less organized data representations, limiting their ability to capture complex connections.
2. **Integration of Advanced AI Models**: GraphRAG can be used with advanced generative models, such as GPT-4, to improve output quality, though GraphRAG itself is model-agnostic.
3. **Community Detection Algorithms**: GraphRAG employs these algorithms to identify clusters and relationships within data, enabling deeper insights for summarization and analysis, a feature absent in conventional RAG systems.
4. **Structured Sensemaking**: By organizing data into a graph, GraphRAG supports better "sensemaking" over large text corpora, addressing a gap in traditional methods that struggle with unstructured or fragmented data analysis.

These innovations collectively improve efficiency, accuracy, and analytical depth, positioning GraphRAG as a significant evolution in retrieval-augmented generation.

**Q2: "How does GraphRAG handle broader, 'big-picture' questions differently?"**

**DeepTutor's Response:**

Here's a concise analysis of how GraphRAG handles big-picture questions differently, organized for clarity:

**GraphRAG's Approach to High-Level Questions**

**1. Structural Understanding Through Knowledge Graphs**

* Creates entity-relationship maps to reveal **hidden patterns** and **thematic connections**
* *Traditional methods*: Limited to linear text relationships in retrieved passages

**2. Hierarchical Community Detection**

* Uses algorithms to:
  * Identify topic clusters
  * Discover indirect relationships
  * Create multi-level abstractions
* *Traditional RAG*: Lacks this layered analysis capability

**3. Map-Reduce Synthesis**

* Combines insights through:
  1. **Local analysis** (individual communities)
  2. **Global synthesis** (cross-community patterns)
* Enables answers that balance depth and breadth

**4. Adaptive Context Windowing**

* Dynamically adjusts context scope based on:
  * Question complexity
  * Community hierarchy level
  * Interconnection density
* *Traditional methods*: Use fixed context windows

**5. Claim-Based Evaluation**

* Measures answer quality through:
  * ✅ **Comprehensiveness** (avg. 5.8 claims/answer vs 2.1 in vector RAG)
  * ✅ **Diversity** (3.2x more unique claim clusters)
* Ensures coverage of multiple perspectives

**Key Differentiator**

GraphRAG's **graph-indexed worldview** enables **emergent understanding** of latent patterns that no single document contains, while traditional methods remain constrained to explicit content in retrieved passages.

**Q3: "Could you summarize the main findings of this GraphRAG study?"**

**DeepTutor's Response:**

Here's a concise breakdown of GraphRAG's key advantages and performance results compared to traditional methods, based on the study's findings:

**1. Structural Superiority**

✓ **Knowledge Graph Architecture**

* Captures entity relationships (e.g., Company → Supplier → Environmental Impact)
* Traditional RAG: Limited to direct text similarity between documents

**2. Enhanced Analytical Capabilities**

✓ **55% More Comprehensive Answers**

* Extracted 22.6 claims/answer vs. vector RAG's 14.6 (Experiment 1)

✓ **2.3× Greater Diversity**

* Produced 15.4 claim clusters vs. 6.7 from vector RAG

**3. Multi-Scale Understanding**

✓ **Hierarchical Community Detection**

* Identifies nested themes (e.g., "ESG Risks" → Environmental → Carbon Emissions)
* Traditional Methods: Treat documents as isolated chunks

**4. Inclusive Synthesis**

✓ **Map-Reduce Processing**

* Preserves minority viewpoints (e.g., rare safety incidents in manufacturing data)
* Vector RAG: Prone to majority pattern dominance

**5. Adaptive Precision**

✓ **Corpus-Specific Benchmarking**

* Generates tailored evaluation metrics for each dataset
* Outperformed baseline by 38% on domain-specific queries

**6. Strategic Decision Support**

✓ Enabled identification of:

* Cross-domain relationships (e.g., governance failures → environmental risks)
* Emerging patterns through community evolution analysis

**Validation**

Used LLM-as-judge evaluation with a 92% agreement rate on answer quality metrics, confirmed through human validation studies.

This structural approach to knowledge representation and analysis gives GraphRAG unique capabilities for strategic analysis tasks where understanding context and relationships is critical.

In each answer, DeepTutor directly highlighted where in the paper the answer was drawn from, which helped me quickly confirm accuracy and context.

# Why does this matter?

My experience made it clear that GraphRAG significantly improves how AI understands and presents information from documents:

* It provides more comprehensive answers because it considers the whole document rather than isolated pieces.
* It's easier to trust, as each response clearly references where in the document the answer came from.
* It naturally shows connections between ideas, helping users quickly understand complicated topics.

After using GraphRAG firsthand with DeepTutor, I genuinely felt it provided meaningful improvements over traditional AI document-reading tools.

Have you faced similar challenges with AI tools? Have you tried GraphRAG or similar approaches yet? Let me know your thoughts! I'd love to discuss this further.
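The core GraphRAG indexing idea described above (an entity graph plus community detection) can be sketched with `networkx`. This is a hedged toy sketch: the entity sets are hand-written stand-ins, whereas real GraphRAG uses an LLM for entity extraction and for summarizing each community:

```python
# Toy GraphRAG-style index: entity co-occurrence graph + community detection.
import itertools
import networkx as nx
from networkx.algorithms import community

# Stub "extracted entities" per document (real GraphRAG extracts these with an LLM).
entities_per_doc = [
    {"Meta", "Llama", "GPT-4"},
    {"OpenAI", "GPT-4", "ChatGPT"},
    {"DeepSeek", "OpenAI"},
]

G = nx.Graph()
for ents in entities_per_doc:
    # Entities mentioned together get an edge; weight counts co-occurrences.
    for a, b in itertools.combinations(sorted(ents), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Community detection groups related entities into topic clusters, which
# GraphRAG then summarizes hierarchically for "big-picture" questions.
clusters = community.greedy_modularity_communities(G, weight="weight")
for c in clusters:
    print(sorted(c))
```

A global query is then answered map-reduce style: summarize each cluster locally, then synthesize those partial answers, instead of retrieving isolated passages.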
2025-04-04T19:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1jrjqgh/how_graphrag_helps_ai_tools_understand_documents/
Street_Top504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrjqgh
false
null
t3_1jrjqgh
/r/LocalLLaMA/comments/1jrjqgh/how_graphrag_helps_ai_tools_understand_documents/
false
false
self
0
null
Upgrading 1070 -> 5070 ti, should I keep 1070 for more VRAM?
8
Hey, I am planning to upgrade my Nvidia GPU from a 1070 (8GB VRAM) to a 5070 Ti (16GB VRAM). Should I keep my old 1070 too for more VRAM, so I can run bigger models, or is that incompatible?
2025-04-04T19:04:00
https://www.reddit.com/r/LocalLLaMA/comments/1jrjtt4/upgrading_1070_5070_ti_should_i_keep_1070_for/
xoxaxo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrjtt4
false
null
t3_1jrjtt4
/r/LocalLLaMA/comments/1jrjtt4/upgrading_1070_5070_ti_should_i_keep_1070_for/
false
false
self
8
null
Is there a meta way to get the benefits of open-source SOTA models without buying crazy hardware?
1
[removed]
2025-04-04T19:08:34
https://www.reddit.com/r/LocalLLaMA/comments/1jrjxn7/is_there_a_meta_way_to_get_the_benefits_of/
NoJob8068
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrjxn7
false
null
t3_1jrjxn7
/r/LocalLLaMA/comments/1jrjxn7/is_there_a_meta_way_to_get_the_benefits_of/
false
false
self
1
null
Where to buy H200 nvl to get better offer?
3
I know the rough price of the H200 NVL but would like to know actual prices and where I can find a better offer. There must be people here who know the actual market scene well. Any advice or help finding a nice(?) price will be greatly appreciated. Supermicro (or Dell, Gigabyte) sells the H200, but it's their server plus GPUs; usually they won't sell just the GPUs. I just want the H200 and a 4-way NVLink. I know it's expensive. It's for a workplace purchase. We haven't decided yet, and are also considering the PRO 6000, but we prefer GPUs with NVLink if the price is not too horrible.
2025-04-04T19:09:42
https://www.reddit.com/r/LocalLLaMA/comments/1jrjyip/where_to_buy_h200_nvl_to_get_better_offer/
smflx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrjyip
false
null
t3_1jrjyip
/r/LocalLLaMA/comments/1jrjyip/where_to_buy_h200_nvl_to_get_better_offer/
false
false
self
3
null
Presenting CSM-HF : Sesame CSM reimplemented for Transformers (with finetuning support!)
62
Sharing something I've been working on: a full rewrite of [Sesame's CSM modeling code](https://github.com/SesameAILabs/csm) for Hugging Face Transformers. It has support for training with the HF `Trainer` (with [decoder training amortization](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#:~:text=The%20audio%20decoder%20is%20trained%20on%20only%20a%20random%201/16%20subset%20of%20the%20audio%20frames%2C%20while%20the%20zeroth%20codebook%20is%20trained%20on%20every%20frame.)) as well as generation. Finetuning is possible with 24GB of RAM (2048-frame seq_len, batch size 1, but gradient accumulation is supported for larger effective batch sizes). For now, generation seems to be slower than realtime (tested on an NVIDIA RTX A5000), but I'm hopeful the model can be further optimized. In any case, this code can always be used for training only, with the possibility of using the finetuned weights with different inference code or engines. LoRA/PEFT support is on the roadmap; let me know if that is something that would benefit your use case.
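The gradient-accumulation trick mentioned above (a larger effective batch size under a fixed memory budget) looks roughly like this in plain PyTorch. The toy model and sizes are illustrative; the actual repo gets the same behavior through HF `Trainer`'s `gradient_accumulation_steps` argument:

```python
import torch

model = torch.nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
accum_steps = 4   # effective batch = micro-batch size * accum_steps
n_updates = 0

data = [(torch.randn(2, 16), torch.randn(2, 1)) for _ in range(8)]
for step, (x, y) in enumerate(data):
    loss = torch.nn.functional.mse_loss(model(x), y)
    # Scale the loss so accumulated gradients average (not sum) across micro-batches.
    (loss / accum_steps).backward()
    if (step + 1) % accum_steps == 0:
        opt.step()        # one optimizer update per accum_steps micro-batches
        opt.zero_grad()
        n_updates += 1

print(n_updates)  # prints 2 (8 micro-batches / 4 accumulation steps)
```

Memory is bounded by the micro-batch (plus the persistent gradient buffers), which is why a 24GB card can mimic a much larger batch.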
2025-04-04T19:24:56
https://github.com/thomasgauthier/csm-hf/
hurrytewer
github.com
1970-01-01T00:00:00
0
{}
1jrkbg0
false
null
t3_1jrkbg0
/r/LocalLLaMA/comments/1jrkbg0/presenting_csmhf_sesame_csm_reimplemented_for/
false
false
default
62
null
Urgent help needed in finetuning gemma3 for medical discharge note generation
1
[removed]
2025-04-04T19:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1jrkfz7/urgent_help_needed_in_finetuning_gemma3_for/
Busy-Okra140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrkfz7
false
null
t3_1jrkfz7
/r/LocalLLaMA/comments/1jrkfz7/urgent_help_needed_in_finetuning_gemma3_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
Local LLMs are essential in a world where LLM platforms are going to get filled with ads
361
2025-04-04T20:16:43
https://privacyinternational.org/long-read/5472/chatbots-adbots-sharing-your-thoughts-advertisers
TechExpert2910
privacyinternational.org
1970-01-01T00:00:00
0
{}
1jrljxa
false
null
t3_1jrljxa
/r/LocalLLaMA/comments/1jrljxa/local_llms_are_essential_in_a_world_where_llm/
false
false
https://a.thumbs.redditm…VRXonxGVGAL4.jpg
361
{'enabled': False, 'images': [{'id': 'kgBTLcNlFRIw00hrmDJGV6nW-t_WIOwwprySDutaKCQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?width=108&crop=smart&auto=webp&s=89bb146454089b208789c11afaf71f46bd791297', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?width=216&crop=smart&auto=webp&s=6eb8f31b06f490eb818b9c19776d0e5ed3efc772', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?width=320&crop=smart&auto=webp&s=a262fc25e81191a9c31aec0643f86f3a3b171bbd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?width=640&crop=smart&auto=webp&s=4b34c3c22bfaf136ee38db8c12bf0b99aa321162', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?width=960&crop=smart&auto=webp&s=eb3de35959fc41ecf284882734bc52f5d313b209', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/VS1VAWyp01_knDCN9Cj6--BP1C9fvqcewix3-g1QeRg.jpg?auto=webp&s=d31fe5a9cfb4eab60711c34df6d14c263eaceb80', 'width': 1000}, 'variants': {}}]}
How powerful do you think Llama 4 will be? How will it compare to Llama 3, Qwen2.5, and Gemma?
0
How powerful do you think Llama 4 will be? How will it compare to Llama 3, Qwen2.5, and Gemma? How much smarter will it be? Benchmarks? And how many tokens do you think Meta trained this model on? (Llama 3 was trained on 15T tokens.)
2025-04-04T20:21:35
https://www.reddit.com/r/LocalLLaMA/comments/1jrlo0v/how_powerful_do_you_think_llama_4_will_be_how/
CreepyMan121
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrlo0v
false
null
t3_1jrlo0v
/r/LocalLLaMA/comments/1jrlo0v/how_powerful_do_you_think_llama_4_will_be_how/
false
false
self
0
null
Gemma: is its training data contaminated with OpenAI data?
1
System prompt: Bo ta un asistente servicial. (Papiamento: "You are a helpful assistant.")
2025-04-04T20:24:39
https://i.redd.it/ucxgc778nvse1.png
Far_Buyer_7281
i.redd.it
1970-01-01T00:00:00
0
{}
1jrlqlu
false
null
t3_1jrlqlu
/r/LocalLLaMA/comments/1jrlqlu/gamma_its_training_date_is_contaminated_with_open/
false
false
https://a.thumbs.redditm…Hj99U8j5Kfv0.jpg
1
{'enabled': True, 'images': [{'id': '30Ofog1vilKxRxixmcZgPdWoM-u9-IFbAIc2hgvlD2M', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=108&crop=smart&auto=webp&s=983c0ff310f91a90310e4d8339495cb18c595374', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=216&crop=smart&auto=webp&s=e133a01c7652c3373a3472c5084fa6617456478f', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=320&crop=smart&auto=webp&s=56010d6fa32deabd375b0487ad2de9b2f176271e', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=640&crop=smart&auto=webp&s=ab1391e2d6c9452496c2f78b0eb720dc017ad75f', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=960&crop=smart&auto=webp&s=fcbab24db978780d57e592d1dd8ed6b348fa618c', 'width': 960}, {'height': 568, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?width=1080&crop=smart&auto=webp&s=ec15fe829c6e112ffdf3f7f0ef00f1487497ba82', 'width': 1080}], 'source': {'height': 992, 'url': 'https://preview.redd.it/ucxgc778nvse1.png?auto=webp&s=9390a897d3c58b8bce2ef4b4147cedda1234d17e', 'width': 1886}, 'variants': {}}]}
Gemma 27B: is its training data contaminated with OpenAI data?
0
The system prompt is: Bo ta un asistente servicial. (Papiamento: "You are a helpful assistant.")
2025-04-04T20:26:43
https://i.redd.it/vslb0qbtnvse1.png
Far_Buyer_7281
i.redd.it
1970-01-01T00:00:00
0
{}
1jrlsaz
false
null
t3_1jrlsaz
/r/LocalLLaMA/comments/1jrlsaz/gemma_27b_its_training_data_is_contaminated_with/
false
false
https://b.thumbs.redditm…vgWmDlEaIjgI.jpg
0
{'enabled': True, 'images': [{'id': '_hOS06GQ_GmWGNmVAVf-JeOJ9lsibIFYhKcQ1Bijpt8', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=108&crop=smart&auto=webp&s=7b33422379be9cc87482e426f04a1c8febcaf75d', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=216&crop=smart&auto=webp&s=10c2f252d7a71d39fec518fbc106fd5bb9a21ad0', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=320&crop=smart&auto=webp&s=8479343c07150192883d9ed9f19c597b84af4176', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=640&crop=smart&auto=webp&s=e2e302038ee3486811cf0d0b0fe007f1c19e6b33', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=960&crop=smart&auto=webp&s=bd09ccb8270f073829af28f274d5e3c7c93b65ca', 'width': 960}, {'height': 568, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?width=1080&crop=smart&auto=webp&s=a82f8387887934d9a452b50b35d225052168fd73', 'width': 1080}], 'source': {'height': 992, 'url': 'https://preview.redd.it/vslb0qbtnvse1.png?auto=webp&s=2aca14f519273b62c5990fafe44a04c46dc18255', 'width': 1886}, 'variants': {}}]}
Quasar Alpha = OpenAI All-in-One Model
1
[removed]
2025-04-04T20:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1jrlyrm/quasar_alpha_openai_allinone_model/
Spirited_Salad7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrlyrm
false
null
t3_1jrlyrm
/r/LocalLLaMA/comments/1jrlyrm/quasar_alpha_openai_allinone_model/
false
false
https://b.thumbs.redditm…GN4oAm-FiINM.jpg
1
{'enabled': False, 'images': [{'id': 'Z2hUMZU7ReMFyeOLTpukfOJFpvwlU6RCvEnKlCK9ZC0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=108&crop=smart&auto=webp&s=db1e934f26cd82d303a9cb792573c29fb7453139', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=216&crop=smart&auto=webp&s=d207ea0d89a0bd5b349fc2104aead8385f33a8eb', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=320&crop=smart&auto=webp&s=eb078ca3590523758dc021b1c4f3c1370395c399', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=640&crop=smart&auto=webp&s=b8666f2747f04948de7ff2a504382d71411f8436', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=960&crop=smart&auto=webp&s=10d0c1332b776616d662739e1fe36edadf7b1d82', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?width=1080&crop=smart&auto=webp&s=24baaa5cb423b1d6c9156e67f4f59d0e02eebfc8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YL_8uKKhUSG3F6Ko14gfm_oKDkXvqXi_U7QZzorUoUw.jpg?auto=webp&s=8ed8b640f73f97426d28d24558cb4313eedb47d6', 'width': 1200}, 'variants': {}}]}
Quasar Alpha = OpenAI All-in-One Model
0
Add "think step by step" to your prompt when using this model—it routes it to the reasoning model. I remember OpenAI was trying to merge all of its models into one. Other posts have discussed how it makes the same mistakes as the OpenAI model does in Chinese responses. https://preview.redd.it/7o6ukm2upvse1.png?width=860&format=png&auto=webp&s=63a352af6ccbc0ac134333aa4c39b51a36dc8030 https://preview.redd.it/avpa9jqupvse1.png?width=850&format=png&auto=webp&s=e000f6817a950f7d0f3f0b601eef9db5e7961722
2025-04-04T20:37:32
https://www.reddit.com/r/LocalLLaMA/comments/1jrm1c5/quasar_alpha_openai_allinone_model/
Spirited_Salad7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrm1c5
false
null
t3_1jrm1c5
/r/LocalLLaMA/comments/1jrm1c5/quasar_alpha_openai_allinone_model/
false
false
https://b.thumbs.redditm…PzJlDCoWpgXE.jpg
0
null
🧵 Looking for a FREE way to pair Perplexity Pro with an agentic AI coding tool (like Cursor, Windsurf, etc.)
0
Hey folks, I have a **Perplexity Pro** subscription (which I love), but I’m trying to achieve a **fully autonomous, agentic coding workflow** — something that can handle iterative development, file edits, and refactors with minimal manual effort. However, I don’t want to pay for tools like **Cursor Pro** or any premium IDEs. # 🔍 What I'm looking for: * A **free AI-powered IDE** or setup that can complement Perplexity Pro * Something like **Cursor or Windsurf**— but fully free * Ideally supports agent-like behavior: breaking down tasks, coding in files, editing locally/cloud, etc. # 🧠 My stack right now: * ✅ Perplexity Pro (main LLM brain) * ❌ No paid IDE (Cursor, Warp AI, etc.) * ✅ Open to use: Replit, Codeium, VS Code, AutoGen, OpenDevin, etc. # 🎯 Goal: Just want to **vibe and code** — minimal copy-pasting, maximum flow. Think: give a prompt → agent does the heavy lifting → I review/improve.
2025-04-04T21:20:41
https://www.reddit.com/r/LocalLLaMA/comments/1jrn1gy/looking_for_a_free_way_to_pair_perplexity_pro/
rajat_sethi28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrn1gy
false
null
t3_1jrn1gy
/r/LocalLLaMA/comments/1jrn1gy/looking_for_a_free_way_to_pair_perplexity_pro/
false
false
self
0
null
Quasar Alpha (OpenAI open source model?) feels like a very solid model, but if it's SOTA, it's not by much
27
2025-04-04T21:24:05
https://v.redd.it/vzi2qx62yvse1
sirjoaco
/r/LocalLLaMA/comments/1jrn498/quasar_alpha_openai_open_source_model_feels_like/
1970-01-01T00:00:00
0
{}
1jrn498
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vzi2qx62yvse1/DASHPlaylist.mpd?a=1746523455%2CZjE5OWJlNmRiY2E5NjdiNTE1ZjQ3ZDM2NzE0M2EwNGU5NzU3MzUyZDg5NmY1MjkxMWMyNmRlN2ZiMTU2ZGNmMg%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/vzi2qx62yvse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/vzi2qx62yvse1/HLSPlaylist.m3u8?a=1746523455%2COWYzYWZiZmYwN2UzYzAxMDlhMGQ3OTBjODBhOTQ4YWEyOTY0NjhiYjllN2QzNTIxYzBiYWYxMmIxZGE4OTlkNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vzi2qx62yvse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1766}}
t3_1jrn498
/r/LocalLLaMA/comments/1jrn498/quasar_alpha_openai_open_source_model_feels_like/
false
false
https://external-preview…92748d11eb3f1a57
27
{'enabled': False, 'images': [{'id': 'cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=108&crop=smart&format=pjpg&auto=webp&s=ae0861263ab0e721ef1f4b00092e134ea171d932', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=216&crop=smart&format=pjpg&auto=webp&s=885a8c0baf3a615faab7333eb3e4016961a36d8c', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=320&crop=smart&format=pjpg&auto=webp&s=4b37b35bb8c457359a054835bdc1711c6d89b425', 'width': 320}, {'height': 391, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=640&crop=smart&format=pjpg&auto=webp&s=68e8c3aa353f60f5c488275d8b80dfdf7a50ff7e', 'width': 640}, {'height': 586, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=960&crop=smart&format=pjpg&auto=webp&s=eeef0cd7be63fd1a64d3f3073e9e7fd65ceb7f96', 'width': 960}, {'height': 660, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ab570a4f4f2397262581546b4e6bd1b0114cca9d', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://external-preview.redd.it/cWxoZTA2NTJ5dnNlMdPlOzjBh06Ls33QYFMnUJe9_SHp-HPYT5zE-TBYfw-C.png?format=pjpg&auto=webp&s=7cb579a242c30d26c87ece83ba21e7ae5ddc97ee', 'width': 3098}, 'variants': {}}]}
What's the current best abliterated/uncensored model?
35
There is not much more to say, to be honest. Got a 5090 and want to experiment with bigger weights than when I just had 8 GB.
2025-04-04T21:59:18
https://www.reddit.com/r/LocalLLaMA/comments/1jrnx1z/whats_the_current_best_abliterateduncensored_model/
majorfrankies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrnx1z
false
null
t3_1jrnx1z
/r/LocalLLaMA/comments/1jrnx1z/whats_the_current_best_abliterateduncensored_model/
false
false
self
35
null
open source prompting agent? How to prompt AI to generate system role and user message templates?
0
I give my insights in advance so maybe you can share yours too: Below my mantras for solving problems for known problems: \--- in 2023 i abused [CO-STAR](https://towardsdatascience.com/how-i-won-singapores-gpt-4-prompt-engineering-competition-34c195a93d41/), # ### CONTEXT ### # ### OBJECTIVE ### # ### STYLE ### # ### TONE ### # ### AUDIENCE ### # ### RESPONSE ### above template with mixtral, miqu or gpt4 felt like a magic wand. experiments with [Chain of Density](https://arxiv.org/pdf/2309.04269), especially with [Outlines](https://dottxt-ai.github.io/outlines/latest/cookbook/chain_of_density/) and Qwen 32B made me earn the most enjoyable money in my entire life. over 99% accuracy on evals which was far superior to human workers (extremely tedious tasks automated) \--- for open ended problems I tend to use mermaid.js mindmaps and use LLMs to somehow traverse those nodes. but it is complex to implement and when i'm tired i'm unable to run that efficiently. \--- lately output limits increased from 2k/4k to 65k (or more?) and i shifted again towards big prompts and fine grained prompts but this feels like terrible idea as now i solve much less problems than with worse models few months ago. How do you prompt LLMs when you are looking for solutions? do you use any prompt generators? like [this one from Anthropic](https://colab.research.google.com/drive/1SoAajN8CBYTl79VyTwxtxncfCWlHlyy9)? prompt optimizers? DPSy/AdalFlow? do you know any solutions for next-level crawling, scraping, extraction? like [trafilatura](https://trafilatura.readthedocs.io/en/latest/), [firecrawl](https://github.com/mendableai/firecrawl) or [browser-use](https://github.com/browser-use/browser-use) How do you integrate VLMs? Do you use different/newer/better prompts to solve image/video/audio problems? \--- I build [Harpagan](https://harpagan.com/) lately. Before that i created SEO workflows similar to [Clay.com](http://Clay.com) but for marketing blog posts. 
Before SEO I did sales automation/intelligence projects, focused mostly on outbound activities. As an open source community, I think we truly need a Cline/Aider-like agent for prompt writing: system roles, output schemas, evals - like a game that makes us less focused on writing the prompts themselves and more focused on solving problems. Do you know any open source prompting agents? How about we [build one](https://github.com/dontriskit/prompter/)?
2025-04-04T22:11:37
https://www.reddit.com/r/LocalLLaMA/comments/1jro77u/open_source_prompting_agent_how_to_prompt_ai_to/
secopsml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jro77u
false
null
t3_1jro77u
/r/LocalLLaMA/comments/1jro77u/open_source_prompting_agent_how_to_prompt_ai_to/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nlPRU-AQwprnnGWooZGFAKmguoEFHU-Ntc9I8GY8MIc', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?width=108&crop=smart&auto=webp&s=f4df95af14cdb76d4f444939ac5c5206e16b2669', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?width=216&crop=smart&auto=webp&s=425a9a0ac4261a3ef0b97aea45643bd897c38566', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?width=320&crop=smart&auto=webp&s=1afe214893b18600c64a997040b4889426a76cdc', 'width': 320}, {'height': 513, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?width=640&crop=smart&auto=webp&s=6774fc61d91ffd9eb70434cbe8bd80b9e0bcd3ec', 'width': 640}, {'height': 770, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?width=960&crop=smart&auto=webp&s=155778cd5cdbfbc6847dcee72c71191fbb47a6ea', 'width': 960}], 'source': {'height': 831, 'url': 'https://external-preview.redd.it/p-ECXB9WThzQfQa5GCpwWMuz_npsMpAJ-tN56TuNA2c.jpg?auto=webp&s=461fd29bc41b5ff54f64289c07064bba37d72ea0', 'width': 1036}, 'variants': {}}]}
Found an awesome repo listing more than 2000+ MCP servers
34
Just came across this GitHub repo and thought it was worth sharing with folks here: [https://github.com/TensorBlock/awesome-mcp-servers](https://github.com/TensorBlock/awesome-mcp-servers) I'd love to hear from anyone who is using MCP in production or building cool things around it; super hyped about this space recently
2025-04-04T22:31:39
https://www.reddit.com/r/LocalLLaMA/comments/1jromm0/found_an_awesome_repo_listing_more_than_2000_mcp/
Vivid-Cover8921
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jromm0
false
null
t3_1jromm0
/r/LocalLLaMA/comments/1jromm0/found_an_awesome_repo_listing_more_than_2000_mcp/
false
false
self
34
{'enabled': False, 'images': [{'id': 'aGGG4Uxholv_yarKUashN6MTNT-ip9HA42XJ4j8a4fA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=108&crop=smart&auto=webp&s=6f3e6a20b78714c3b101a31d9b5a2829edeb3dfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=216&crop=smart&auto=webp&s=07b74aad5e6575dbe56e882edda76c4dc82d3caa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=320&crop=smart&auto=webp&s=dc0d1a33c6652f98072faca36770c15d65825ea5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=640&crop=smart&auto=webp&s=82fca4a2d707bb968ca5fd6be799ef37a0bce913', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=960&crop=smart&auto=webp&s=c30deb829862870620513ea64d0c9657e816e145', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?width=1080&crop=smart&auto=webp&s=36b506cf4447facc50d655784cb274ff92610dfe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jXHUzB_a1ogm7BCCERzJdHAA-wluaM2llggofriEXuQ.jpg?auto=webp&s=fb375990e6832c026bcf8e3ae7480ebf0643edd1', 'width': 1200}, 'variants': {}}]}
Not GPT-4, but a 3B Function Calling LLM that can chat to clarify tool calls
77
Excited to have recently released [Arch-Function-Chat](https://huggingface.co/katanemo/Arch-Function-Chat-3B), a collection of fast, device-friendly LLMs that achieve performance on par with GPT-4 on function calling, now trained to chat. Why chat? To help gather accurate information from the user before triggering a tool call (manage context, handle progressive disclosure, and respond to users in lightweight dialogue on execution of tool results). The model is out on HF, and the work to integrate it in [https://github.com/katanemo/archgw](https://github.com/katanemo/archgw) should be completed by Monday - we are also adding support to integrate with tool definitions as captured via MCP in the upcoming week, so combining two releases in one. Happy building 🙏
2025-04-04T23:03:50
https://v.redd.it/i9hd03p7fwse1
AdditionalWeb107
v.redd.it
1970-01-01T00:00:00
0
{}
1jrpbj8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i9hd03p7fwse1/DASHPlaylist.mpd?a=1746399853%2CMThjMzc1ZTNmYzkwNjFkYTRjYTJmOWM4YjMyMzY4ZTA5OTAzNDIyYTQyMTYwOThkNzg1MDljYTQ1YjUxZmQ0Yw%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/i9hd03p7fwse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/i9hd03p7fwse1/HLSPlaylist.m3u8?a=1746399853%2CYjUyODEyYTRlZjBiYjMwYmQ3NGQ0MDkxMDllNzUzMDVjYTBiY2RkNWI0MTg0Y2Y5ODY3MWQyM2I4OWI3OGQ0Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i9hd03p7fwse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1832}}
t3_1jrpbj8
/r/LocalLLaMA/comments/1jrpbj8/not_gpt4_but_a_3b_function_calling_llm_that_can/
false
false
https://external-preview…324945d1b1cc2a78
77
{'enabled': False, 'images': [{'id': 'NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=108&crop=smart&format=pjpg&auto=webp&s=288b181779df8446acd013ac7eaf8819fee967bc', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=216&crop=smart&format=pjpg&auto=webp&s=2a6969125e2a929738788f78dec2eb66682d131a', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=320&crop=smart&format=pjpg&auto=webp&s=dfe30ac5dc397b07ee27b25c4d9fb6a8b6d34078', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=640&crop=smart&format=pjpg&auto=webp&s=3dc1b84452d560d1ddb4335b3a0ce9539199b4d0', 'width': 640}, {'height': 565, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=960&crop=smart&format=pjpg&auto=webp&s=64aad4afd871199cb00a311fc1d76b7b8ef50357', 'width': 960}, {'height': 636, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a2861a5ccce0279ee2c154b64458c10136df010a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NXFyNTkzcDdmd3NlMU9tL6wO_KLELRMymDQTLDBHY2EFrYZp7zEuTqyq1mYt.png?format=pjpg&auto=webp&s=8026cd98340d8546b1b624ef0a0c4ad757e8a171', 'width': 1832}, 'variants': {}}]}
Framework Desktop development units for open source AI developers
132
Apologies in advance if this pushes too far into self-promotion, but when we launched Framework Desktop, AMD also announced that they would be providing 100 units to open source developers based in US/Canada to help accelerate local AI development. The application form for that is now open at [https://www.amd.com/en/forms/sign-up/framework-desktop-giveaway.html](https://www.amd.com/en/forms/sign-up/framework-desktop-giveaway.html) I'm also happy to answer questions folks have around using Framework Desktop for local inference.
2025-04-04T23:52:01
https://www.reddit.com/r/LocalLLaMA/comments/1jrqb11/framework_desktop_development_units_for_open/
cmonkey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrqb11
false
null
t3_1jrqb11
/r/LocalLLaMA/comments/1jrqb11/framework_desktop_development_units_for_open/
false
false
self
132
{'enabled': False, 'images': [{'id': 'VY-gY2HYKYcXNyATN6eFOZXOvhwsQtwwRY3tWSlZJMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=108&crop=smart&auto=webp&s=bf9ed3573a3db5d3e44a72830f8426517a91377c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=216&crop=smart&auto=webp&s=f524b797a7ea6865e919d4617ae1dcf2b6c6a2af', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=320&crop=smart&auto=webp&s=10a67c110f3a769d2bfe608f2b3a86f3166e246f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=640&crop=smart&auto=webp&s=55af9887767b7ab9df9c7ca842d03265592ce4ea', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=960&crop=smart&auto=webp&s=d292a516385f47874c32d38cd8df66e9fb7ad712', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=1080&crop=smart&auto=webp&s=5b59e2297e5600bee5b6085195e17dda1c43324f', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?auto=webp&s=c9e47dc7ead939c14e8582ec6e213ea7b7903190', 'width': 1200}, 'variants': {}}]}
Containerized Voice Identification with Resemblyzer & QdrantDB
11
2025-04-05T00:10:46
https://codingwithcody.com/2025/04/02/containerized-voice-identification/
codysnider
codingwithcody.com
1970-01-01T00:00:00
0
{}
1jrqoq7
false
null
t3_1jrqoq7
/r/LocalLLaMA/comments/1jrqoq7/containerized_voice_identification_with/
false
false
https://b.thumbs.redditm…XL4lv5jcyByA.jpg
11
{'enabled': False, 'images': [{'id': 'N7M_c8-wsQ_sCNeOr_RWb_hZIheT7hWrf9GCKygG_50', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?width=108&crop=smart&auto=webp&s=c62ba75553b6375fedfb0551819ee74440f9e030', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?width=216&crop=smart&auto=webp&s=64c0eec933d018fa658447dacbeae92cc6d52395', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?width=320&crop=smart&auto=webp&s=2f6d6660cda9185e771357fb5d5c28c1dfde0c10', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?width=640&crop=smart&auto=webp&s=37d0967006b4cf66ff4138f0e94e93e15041174e', 'width': 640}, {'height': 369, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?width=960&crop=smart&auto=webp&s=9845d9437eed968df93d1eb24f77f8f80e6a245b', 'width': 960}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/CT6b2Nm781PqEI0GPngbEDd5OX3pXY0Pycj0DVQwBoA.jpg?auto=webp&s=528b7fad00ec4b868f0cb23a30fa0fa688dcc2d4', 'width': 1040}, 'variants': {}}]}
Image --> Talking head real time from your live feed camera
0
Basically, you have an image of someone and you feed in your camera stream to animate that image into a talking head in real time. I saw a video of this on Twitter recently but lost it. Can anyone help me out? Any open source models for this? Seems quite new.
2025-04-05T00:24:24
https://www.reddit.com/r/LocalLLaMA/comments/1jrqyhk/image_talking_head_real_time_from_your_live_feed/
83yWasTaken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrqyhk
false
null
t3_1jrqyhk
/r/LocalLLaMA/comments/1jrqyhk/image_talking_head_real_time_from_your_live_feed/
false
false
self
0
null
is there a really small uncensored model for nsfw erp?
0
Hey, I tried L3-8B-Stheno-v3.2-exl2_8.0bpw, but even that's too big for my GTX 1650 Ti laptop. Can anyone suggest a smaller model trained for ERP thingies?
2025-04-05T00:35:50
https://www.reddit.com/r/LocalLLaMA/comments/1jrr6fy/is_there_really_small_uncensored_model_for_nsfw/
ergenveled
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrr6fy
false
null
t3_1jrr6fy
/r/LocalLLaMA/comments/1jrr6fy/is_there_really_small_uncensored_model_for_nsfw/
false
false
nsfw
0
null
If I put together a 3090 Ti (24 GB) + 4070 Ti Super (16 GB) + 5060 Ti (16 GB), how slow will things get because of the 5060 Ti?
9
I'm thinking about getting a 5060 Ti for an extra 16 GB of CUBLAS VRAM juice. How slow do you think things will get because of this slower GPU? My CPU is already slow (11700). Thanks in advance
2025-04-05T01:33:08
https://www.reddit.com/r/LocalLLaMA/comments/1jrs9fl/if_i_put_together_an_3090_ti_24_gb_4070_ti_super/
GoodSamaritan333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrs9fl
false
null
t3_1jrs9fl
/r/LocalLLaMA/comments/1jrs9fl/if_i_put_together_an_3090_ti_24_gb_4070_ti_super/
false
false
self
9
null
Which model can detect what language the user's text is in?
1
[removed]
2025-04-05T01:51:47
https://www.reddit.com/r/LocalLLaMA/comments/1jrslo9/which_model_can_detect_what_lamguage_is_user_text/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrslo9
false
null
t3_1jrslo9
/r/LocalLLaMA/comments/1jrslo9/which_model_can_detect_what_lamguage_is_user_text/
false
false
self
1
null
I am wondering if any AI can act like a real person? I mean, not just one sentence at a time?
1
[removed]
2025-04-05T03:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1jru4c1/i_am_wondering_if_any_ai_can_act_like_a_real/
Electrical-Taro-4058
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jru4c1
false
null
t3_1jru4c1
/r/LocalLLaMA/comments/1jru4c1/i_am_wondering_if_any_ai_can_act_like_a_real/
false
false
self
1
null
Concept Processing Prompts: Engineering a Universal Foundation for Any AI Novel
0
2025-04-05T03:26:45
https://medium.com/@SomethingaboutAI/concept-processing-prompts-engineering-a-universal-foundation-for-any-ai-novel-4af85c929890
drnick316
medium.com
1970-01-01T00:00:00
0
{}
1jruapo
false
null
t3_1jruapo
/r/LocalLLaMA/comments/1jruapo/concept_processing_prompts_engineering_a/
false
false
https://a.thumbs.redditm…4Gh4fllDon78.jpg
0
{'enabled': False, 'images': [{'id': 'hWMn5WgvOxm25cq2u0ll4cbhiSm5X0JAO09oIlg9258', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=108&crop=smart&auto=webp&s=34ef38ff6307520e0ad81bdabccac39da86cf8c6', 'width': 108}, {'height': 324, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=216&crop=smart&auto=webp&s=eb3b9a2db1be9de4b2f6261ec3a0ca5eceee8f77', 'width': 216}, {'height': 480, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=320&crop=smart&auto=webp&s=e1a39e829bd94a0f3b3caba9c5e37ff4cfdba693', 'width': 320}, {'height': 960, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=640&crop=smart&auto=webp&s=6d819f575c9de68a0ce7e9e69c9d121dfc5cb3aa', 'width': 640}, {'height': 1440, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=960&crop=smart&auto=webp&s=15ee05ba5031368236200c13199071327b53b67b', 'width': 960}, {'height': 1620, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?width=1080&crop=smart&auto=webp&s=8efe3fa541f7fe1cc8fa1dbb2af85ab78c48d1ca', 'width': 1080}], 'source': {'height': 1800, 'url': 'https://external-preview.redd.it/OZGCUmg6A3xI6RN_PjdVOeRxaG_B5-WkpqzaBTDeuSk.jpg?auto=webp&s=8d9bed56869874084f59da9bb7f822111e4d269b', 'width': 1200}, 'variants': {}}]}
vscode edit mode
1
I’m a backend dev and have been using prompts to help me put together small front end projects. Recently discovered vscode edit mode too, very neat. At work we have a paid subscription for copilot, but for personal use I’m on the free tier. Found by accident that the Copilot extension can hook into Ollama too, so I tried that with the models I already downloaded. Particularly with the deepseek models “edit” mode didn’t seem to work, it was just like a normal chat prompt, other models seemed ok though. Am I doing something wrong? Or are all these things just a bit rough around the edges so stuff just doesn’t play together nicely sometimes?
2025-04-05T03:52:09
https://www.reddit.com/r/LocalLLaMA/comments/1jruqnf/vscode_edit_mode/
peek-ah-boo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jruqnf
false
null
t3_1jruqnf
/r/LocalLLaMA/comments/1jruqnf/vscode_edit_mode/
false
false
self
1
null
I made an open source AI-Powered Story Generator Designed for the Raspberry PI & Inky eink display.
5
https://preview.redd.it/…Nogin/storytime)
2025-04-05T03:54:55
https://www.reddit.com/r/LocalLLaMA/comments/1jrus9w/i_made_an_open_source_aipowered_story_generator/
Erdeem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrus9w
false
null
t3_1jrus9w
/r/LocalLLaMA/comments/1jrus9w/i_made_an_open_source_aipowered_story_generator/
false
false
https://b.thumbs.redditm…Fh2-obmyfXts.jpg
5
{'enabled': False, 'images': [{'id': '0QpN8PhBUGuEwDA2bExMlAH7AS9HY2meRkT4pp0ymQ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=108&crop=smart&auto=webp&s=b3a08e124245a62d8f8c62834ddc7b74e1ed6da5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=216&crop=smart&auto=webp&s=e9d1be9cf463827d34ef6fbc2bbe6b5f34d4506f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=320&crop=smart&auto=webp&s=582f30156bacd067488121a867179fc28793d55d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=640&crop=smart&auto=webp&s=e4325df17a54939f314834f0a945b449c14117b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=960&crop=smart&auto=webp&s=3adf2fa8d2f9e3821c02f0e3872b2e3c0cad127f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?width=1080&crop=smart&auto=webp&s=7507f92aa1606c5c076bb9a9ee93b7314a5488c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rGLtpj7d4GEMTL-9MxQvYJU2wqeLfCWtwSxa59ausDM.jpg?auto=webp&s=ec870d588f4bab3b74b09f8a9a2c57f47fbe95bc', 'width': 1200}, 'variants': {}}]}
Can my potato laptop run any LLM?
1
[removed]
2025-04-05T03:59:40
https://www.reddit.com/r/LocalLLaMA/comments/1jruv5t/can_my_potato_laptop_run_any_llm/
First_Diamond5721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jruv5t
false
null
t3_1jruv5t
/r/LocalLLaMA/comments/1jruv5t/can_my_potato_laptop_run_any_llm/
false
false
self
1
null
how to let a model browse my files, the internet, or use the terminal?
1
I'm using the alpaca flatpak on fedora.
2025-04-05T04:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1jrv8um/how_to_let_a_model_browse_my_files_internet_or/
headlessBleu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrv8um
false
null
t3_1jrv8um
/r/LocalLLaMA/comments/1jrv8um/how_to_let_a_model_browse_my_files_internet_or/
false
false
self
1
null
What is best small long-context open-weight model now?
1
I know there are benchmarks, but I ask for your personal experience. My narrow use case is to analyze logs.
2025-04-05T04:25:19
https://www.reddit.com/r/LocalLLaMA/comments/1jrvb0k/what_is_best_small_longcontext_openweight_model/
EmilPi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrvb0k
false
null
t3_1jrvb0k
/r/LocalLLaMA/comments/1jrvb0k/what_is_best_small_longcontext_openweight_model/
false
false
self
1
null
ibm-granite/granite-speech-3.2-8b · Hugging Face
101
Granite-speech-3.2-8b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST). License: Apache 2.0
2025-04-05T04:37:00
https://huggingface.co/ibm-granite/granite-speech-3.2-8b
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1jrvhwk
false
null
t3_1jrvhwk
/r/LocalLLaMA/comments/1jrvhwk/ibmgranitegranitespeech328b_hugging_face/
false
false
https://b.thumbs.redditm…qSjvPwRG-Izw.jpg
101
{'enabled': False, 'images': [{'id': 't-yIaWbqlXk2QELONRT5KsroXm5AV-S5okS4NTEBz0k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=108&crop=smart&auto=webp&s=ffb379163bb5634899e1e504aa491f01f8c4fd7e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=216&crop=smart&auto=webp&s=fb888b4d81266e512d587d468020f6e095b58d4e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=320&crop=smart&auto=webp&s=41f15695a3866cab67aade85fd5d0b05d66d88da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=640&crop=smart&auto=webp&s=4fd879df27c798df53dc6f4fb0ddbd427697e30d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=960&crop=smart&auto=webp&s=06151332b05096e435f083ba6630b6d2c565dc77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?width=1080&crop=smart&auto=webp&s=b7ef1f6082b98129de5def266afcfed8e9016ec3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7tIq7fmEQHkDDxZGysXqAxxJULRvzvho5Jaa29Tj9zc.jpg?auto=webp&s=86f8060d2e95438362a486f690510694ce2a5c2d', 'width': 1200}, 'variants': {}}]}
Framework Desktop vs e.g. Tuxedo Pro L
1
I am a long-term Mac user, so my hardware knowledge is a bit outdated. I really like the Framework Desktop, but I don't necessarily need the compact size. Can someone make a guess how the FW Desktop (*Ryzen™ AI Max+ 395 - 128GB*) would compare to the following specs for running LLMs? * Intel Core i9-14900 (K or no K) with * either 192 GB DDR5 DIMM-5200 (without a dedicated GPU) * or 96 GB + AMD Radeon RX 7700 XT (12 GB), with the option to add more RAM later * the board is not defined The pricing would be roughly the same.
2025-04-05T05:53:54
https://www.reddit.com/r/LocalLLaMA/comments/1jrwo5v/framework_desktop_vs_eg_tuxedo_pro_l/
trollbrot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrwo5v
false
null
t3_1jrwo5v
/r/LocalLLaMA/comments/1jrwo5v/framework_desktop_vs_eg_tuxedo_pro_l/
false
false
self
1
null
gemini-2.5-pro-preview-03-25 available for free (this an update of gemini-2.5-pro-exp-03-25)
30
Output SOTA reasoning traces to distill and SFT into Gemma 3! If you are a dev with a [https://console.cloud.google.com/](https://console.cloud.google.com/) account with billing set up, you will have FREE access to gemini-2.5-pro-preview-03-25 (an update that came out 2025-04-04) through [https://aistudio.google.com/](https://aistudio.google.com/) even before it is available on [https://cloud.google.com/vertex-ai](https://cloud.google.com/vertex-ai)
2025-04-05T06:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1jrwstn/gemini25propreview0325_available_for_free_this_an/
chitown160
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrwstn
false
null
t3_1jrwstn
/r/LocalLLaMA/comments/1jrwstn/gemini25propreview0325_available_for_free_this_an/
false
false
self
30
null
I built an open-source toolkit to turn Python functions into agent tools - with support for integration, observability, and management (OpenAI/LangChain/CrewAI compatible). Would love feedback!
1
[removed]
2025-04-05T06:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1jrxkod/i_built_an_opensource_toolkit_to_turn_python/
Fast-Split-857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrxkod
false
null
t3_1jrxkod
/r/LocalLLaMA/comments/1jrxkod/i_built_an_opensource_toolkit_to_turn_python/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iCwQpHeH9kpWWWFPF2Af2bbYKIS5os3DyWTzs_JdaP4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=108&crop=smart&auto=webp&s=91aa87813d07dcbab41ce21a732e8e5a4e0da512', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=216&crop=smart&auto=webp&s=35f3fb65ffcbf7efde1471a12410b1c719b79cec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=320&crop=smart&auto=webp&s=6ada602dad080484e27bf1d3dca2ff9a16c6c361', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=640&crop=smart&auto=webp&s=9fc38166d1549c2a1d4c06d04d0810b3490d729a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=960&crop=smart&auto=webp&s=dbffe926ef84f32e2a5ff79f4c6b68cee4ff0c07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=1080&crop=smart&auto=webp&s=43fd90e0e47111caf3e4aa97629d5be64e7b0548', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?auto=webp&s=a3f5fb0c71ffee649f878bf640829239a241e78c', 'width': 1200}, 'variants': {}}]}
OpenThinker2-32B
124
[https://huggingface.co/open-thoughts/OpenThinker2-32B](https://huggingface.co/open-thoughts/OpenThinker2-32B) https://preview.redd.it/1x9zxh5f7zse1.png?width=704&format=png&auto=webp&s=76c2b9c3676b9d9969d428b0a44cf823d4f72367
2025-04-05T08:21:23
https://www.reddit.com/r/LocalLLaMA/comments/1jryrik/openthinker232b/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jryrik
false
null
t3_1jryrik
/r/LocalLLaMA/comments/1jryrik/openthinker232b/
false
false
https://b.thumbs.redditm…RymjYfrndAqg.jpg
124
{'enabled': False, 'images': [{'id': 'jzFFY2CcWBpDKMPLwFfaBqzJQXd-dQ7sigYAwm96wGQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=108&crop=smart&auto=webp&s=b8c2570bb6f7e3f8113e98b2ae3b005a21241e1a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=216&crop=smart&auto=webp&s=99982f732a5b8f38d2d4ec578e844873d9552592', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=320&crop=smart&auto=webp&s=fcc42938780070794514f6783097b7a3a45fac13', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=640&crop=smart&auto=webp&s=b962a458fb32a0508ef1344c44f7b73075ef5245', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=960&crop=smart&auto=webp&s=f0a91c05b343de83e483374532ff7abf8f9c9a23', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?width=1080&crop=smart&auto=webp&s=c27be6aaad37ba72a27f0f8755a5eab2e026a712', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MMWrXSajlQDn44f7IjYIsEwZJ6CFMkYChFS5zuH6LYA.jpg?auto=webp&s=9c7af007074ac6e9a26dff25ea50cfddd5137407', 'width': 1200}, 'variants': {}}]}
Why is there no DeepSeek-V3-0324 distilled version?
1
[removed]
2025-04-05T08:39:58
https://www.reddit.com/r/LocalLLaMA/comments/1jrz0e4/why_is_there_no_deepseekv30324_distilled_version/
allked
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrz0e4
false
null
t3_1jrz0e4
/r/LocalLLaMA/comments/1jrz0e4/why_is_there_no_deepseekv30324_distilled_version/
false
false
self
1
{'enabled': False, 'images': [{'id': 'B4vBrJo-7ZM7567MPCYi4W_swmD3QA__8hqB8layVCU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=108&crop=smart&auto=webp&s=d701e7042af2a3febb3bb8fff11d5b17b939cca7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=216&crop=smart&auto=webp&s=eff04abbf7ee7b36cec85ca6268f152360c25fc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=320&crop=smart&auto=webp&s=6455dc4431b1a448e664a871cb04b1132b3facdb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=640&crop=smart&auto=webp&s=94d961b0b48a76bd398ef8e9a387f6a5087e577d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=960&crop=smart&auto=webp&s=f8919205d4c6667983f1f69e62f7b6f855e3992d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?width=1080&crop=smart&auto=webp&s=40a44167f6a1833c3d82fdfe5808355085b0a0dc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/L_MDAztp6gi49dQUv9vk2IeXw1OjSoBT_ooENnggvOg.jpg?auto=webp&s=662283069a3baa60da751ab92a358d941f2227f1', 'width': 1200}, 'variants': {}}]}
2 years progress on Alan's AGI clock
0
Alan D. Thompson is an AI expert, former Chairman of Mensa, and researcher tracking AGI progress. He advises governments and corporations, and advocates for ethical AI and gifted education. His work is globally recognized.
2025-04-05T08:43:24
https://i.redd.it/8p2fb21dbzse1.png
BidHot8598
i.redd.it
1970-01-01T00:00:00
0
{}
1jrz23f
false
null
t3_1jrz23f
/r/LocalLLaMA/comments/1jrz23f/2_years_progress_on_alans_agi_clock/
false
false
https://b.thumbs.redditm…eVRqTYz0Kybo.jpg
0
{'enabled': True, 'images': [{'id': 'Zow7UrgVqkkS3mHF1cv_sNGJGc-tg9Jr3pS0wpjcDGk', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/8p2fb21dbzse1.png?width=108&crop=smart&auto=webp&s=9807179bbbba3d2f1cec494651ab54b4a133b753', 'width': 108}, {'height': 289, 'url': 'https://preview.redd.it/8p2fb21dbzse1.png?width=216&crop=smart&auto=webp&s=bbb81c777ef784e394694627715ea51f3ce14a57', 'width': 216}, {'height': 428, 'url': 'https://preview.redd.it/8p2fb21dbzse1.png?width=320&crop=smart&auto=webp&s=341d3b4c84cb138ea08a6e9a1dbd31d3e622cd90', 'width': 320}, {'height': 856, 'url': 'https://preview.redd.it/8p2fb21dbzse1.png?width=640&crop=smart&auto=webp&s=1093244004cbb47a8154ef66a5dfe66c8b379a5d', 'width': 640}], 'source': {'height': 1190, 'url': 'https://preview.redd.it/8p2fb21dbzse1.png?auto=webp&s=e9857d10598149e7261aa933f4fcf2bfe2ebac67', 'width': 889}, 'variants': {}}]}
nsfw orpheus - status update + discord
1
[removed]
2025-04-05T09:30:59
https://www.reddit.com/r/LocalLLaMA/comments/1jrzob3/nsfw_orpheus_status_update_discord/
MrAlienOverLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrzob3
false
null
t3_1jrzob3
/r/LocalLLaMA/comments/1jrzob3/nsfw_orpheus_status_update_discord/
false
false
nsfw
1
{'enabled': False, 'images': [{'id': 'QW_VIW06D0EnlouGxwmJnCVmpUaN0pVcfoOQDzqdyOc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=108&crop=smart&auto=webp&s=d5d798283363e7cf8f08f2a7bf3c4c874c7ac807', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=216&crop=smart&auto=webp&s=490ff23799bdb4dc16b304beac0521ef09e4e3b3', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=320&crop=smart&auto=webp&s=2233464835941a18730f55c263b0af194f898205', 'width': 320}], 'source': {'height': 286, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?auto=webp&s=b4d3ea330cc495dec6c7a9b6a7e7bf1494840c30', 'width': 512}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=709e4cb17cf21c5437dd39993fdb640bc6750bb4', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=572083233ebb439b35455d9e45ffdec0ee8a26ca', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a6518b12c5b2825c4bc23ebd1ee3542cb7f7d2f4', 'width': 320}], 'source': {'height': 286, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?blur=40&format=pjpg&auto=webp&s=94010737f2c5d27d16da5f6bc0ca058f57c9aedd', 'width': 512}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=709e4cb17cf21c5437dd39993fdb640bc6750bb4', 
'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=572083233ebb439b35455d9e45ffdec0ee8a26ca', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a6518b12c5b2825c4bc23ebd1ee3542cb7f7d2f4', 'width': 320}], 'source': {'height': 286, 'url': 'https://external-preview.redd.it/3EJ5z4aojGlx43oFDsxuAAdk72UAsubZ0WOwMSgk6C0.jpg?blur=40&format=pjpg&auto=webp&s=94010737f2c5d27d16da5f6bc0ca058f57c9aedd', 'width': 512}}}}]}
nsfw orpheus tts - update (approaching week 3)
1
[removed]
2025-04-05T09:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1jrzqj6/nsfw_orpheus_tts_update_approaching_week_3/
MrAlienOverLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1jrzqj6
false
null
t3_1jrzqj6
/r/LocalLLaMA/comments/1jrzqj6/nsfw_orpheus_tts_update_approaching_week_3/
false
false
nsfw
1
null
I don't compare , I embrace - LLMs ,haha
0
...
2025-04-05T09:57:16
https://i.redd.it/4qo6u9bgozse1.png
internal-pagal
i.redd.it
1970-01-01T00:00:00
0
{}
1js00qh
false
null
t3_1js00qh
/r/LocalLLaMA/comments/1js00qh/i_dont_compare_i_embrace_llms_haha/
false
false
https://b.thumbs.redditm…YdtPWeeFrvec.jpg
0
{'enabled': True, 'images': [{'id': 'U6jIq6zoNaPQtumEywmaIMcFhhCjC3_-izieYfPrFLE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=108&crop=smart&auto=webp&s=7937b924c0f4ebe28b2160a6f7469c1b1058b1c2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=216&crop=smart&auto=webp&s=ed01d4c531dfd7e4e66eb3aa57a4d3e37034b5ee', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=320&crop=smart&auto=webp&s=afdc10af3eea0963d890d352f755c392ad58c93f', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=640&crop=smart&auto=webp&s=ca7b90c95e78d55d0712161b9c3c702a26a00a48', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=960&crop=smart&auto=webp&s=00596565a7797d168848dd9f3df6f56bd5908b2b', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?width=1080&crop=smart&auto=webp&s=0abb809205dc0bb1163ea03c2326037da47ee768', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/4qo6u9bgozse1.png?auto=webp&s=313f8b2bb4d6dc54547d6fe649a563334f25c9f6', 'width': 1366}, 'variants': {}}]}
Training LLM on books
4
What is the best way to train or fine-tune an LLM based on books, labeling the data so it knows what to recall and what to say? I guess it sounds more like RAG, but I want to be able to create essays and writings (not based on the books' authors or copying them) by learning what makes good writing and how it is structured, labeling that data so the LLM learns and creates based on the learnings from the books. What would be the best way to approach this? Perhaps various agents, one for RAG and another for streaming the chat, and so on? Or, given that with Gemini we now get such a big context window, we could just dump it all in there (even though we can do that, it does sound inefficient). Perhaps my system prompt could be a long list of all the learnings, plus an agent to decide which learning to apply for a given question or request. But an excessively long system prompt could hinder more than help. Anyway, happy to read what the local community has to say.
2025-04-05T10:16:42
https://www.reddit.com/r/LocalLLaMA/comments/1js0aju/training_llm_on_books/
tonyblu331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js0aju
false
null
t3_1js0aju
/r/LocalLLaMA/comments/1js0aju/training_llm_on_books/
false
false
self
4
null
Tenstorrent Blackhole PCI-e cards with 32 GB of GDDR6 available for order
242
2025-04-05T10:27:38
https://tenstorrent.com/hardware/blackhole
Marcuss2
tenstorrent.com
1970-01-01T00:00:00
0
{}
1js0g38
false
null
t3_1js0g38
/r/LocalLLaMA/comments/1js0g38/tenstorrent_blackhole_pcie_cards_with_32_gb_of/
false
false
https://a.thumbs.redditm…GApmaGUCv7E4.jpg
242
{'enabled': False, 'images': [{'id': 'KCYvAPkQEkVmX4QqYreQJe2Mpq_40hogwrnlM3kIOUs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=108&crop=smart&auto=webp&s=8dbaa10f6a6215f9eab67f8f0f0b8b7a8080fba0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=216&crop=smart&auto=webp&s=8abac738e488aacbe308965ddb7c58bfabf3fdf9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=320&crop=smart&auto=webp&s=9eadfa22fdc9cf5e6a378d88fbb7d38895daad77', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=640&crop=smart&auto=webp&s=d47a097527e5e5e57498d3ac6192eb2f6741fe55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=960&crop=smart&auto=webp&s=0ed56ea2e3ffbad02f42ed278565c0252e63e4c7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?width=1080&crop=smart&auto=webp&s=ad1b2121cab929ff46fd72611c0e6a2ffcf14d61', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bKa6_zcgR56sbwG-Cqtu_jN8tcBni2YVCOOrskJ4IzI.jpg?auto=webp&s=0488807f36dd4d7a986ad166ffb25e22108256d1', 'width': 1200}, 'variants': {}}]}
Orpheus NSFW TTS - status
1
[removed]
2025-04-05T10:34:33
https://www.reddit.com/r/LocalLLaMA/comments/1js0jpe/orpheus_nsfw_tts_status/
MrAlienOverLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js0jpe
false
null
t3_1js0jpe
/r/LocalLLaMA/comments/1js0jpe/orpheus_nsfw_tts_status/
false
false
nsfw
1
null
Quick Comparison of QwQ and OpenThinker2 32B
67
Candle test: qwq: [https://imgur.com/a/c5gJ2XL](https://imgur.com/a/c5gJ2XL) ot2: [https://imgur.com/a/TDNm12J](https://imgur.com/a/TDNm12J) both passed --- 5 reasoning questions: [https://imgur.com/a/ec17EJC](https://imgur.com/a/ec17EJC) qwq passed all questions ot2 failed 2 questions --- Private tests: 1. Coding question: one question about what caused the issue, plus 1,200 lines of C++ code. Both passed; however, ot2 is not as reliable as QwQ at solving this issue. It could give a wrong answer across multiple shots, unlike QwQ, which always gives the right answer. 2. Restructuring a financial spreadsheet. Both passed. --- Conclusion: I prefer OpenThinker2-32B over the original R1-distill-32B from DS, especially because it never fell into an infinite loop during testing. I tested those five reasoning questions three times on OT2, and it never fell into a loop, unlike the R1-distill model. That is quite an achievement considering they open-sourced their dataset, and their distillation dataset is not much larger than DS's (1M vs 800k). However, it still falls behind QwQ-32B, which uses RL instead.
--- Settings I used for both models: [https://imgur.com/a/7ZBQ6SX](https://imgur.com/a/7ZBQ6SX) gguf: [https://huggingface.co/bartowski/Qwen_QwQ-32B-GGUF/blob/main/Qwen_QwQ-32B-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen_QwQ-32B-GGUF/blob/main/Qwen_QwQ-32B-IQ4_XS.gguf) [https://huggingface.co/bartowski/open-thoughts_OpenThinker2-32B-GGUF/blob/main/open-thoughts_OpenThinker2-32B-IQ4_XS.gguf](https://huggingface.co/bartowski/open-thoughts_OpenThinker2-32B-GGUF/blob/main/open-thoughts_OpenThinker2-32B-IQ4_XS.gguf) backend: ollama source of public questions: [https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/](https://www.reddit.com/r/LocalLLaMA/comments/1i65599/r1_32b_is_be_worse_than_qwq_32b_tests_included/) [https://www.reddit.com/r/LocalLLaMA/comments/1jpr1nk/the_candle_test_most_llms_fail_to_generalise_at/](https://www.reddit.com/r/LocalLLaMA/comments/1jpr1nk/the_candle_test_most_llms_fail_to_generalise_at/)
2025-04-05T11:04:54
https://www.reddit.com/r/LocalLLaMA/comments/1js0zmd/quick_comparison_of_qwq_and_openthinker2_32b/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js0zmd
false
null
t3_1js0zmd
/r/LocalLLaMA/comments/1js0zmd/quick_comparison_of_qwq_and_openthinker2_32b/
false
false
self
67
{'enabled': False, 'images': [{'id': '2C9TGCiywnyjRdeoLQfViWWnH26XZUE6iTYhzThJtIE', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?width=108&crop=smart&auto=webp&s=4c076c4edce824a9e45bccab592d6df79a6421a6', 'width': 108}, {'height': 263, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?width=216&crop=smart&auto=webp&s=f64a81772e58721af14d028000e1346a2bd6006a', 'width': 216}, {'height': 389, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?width=320&crop=smart&auto=webp&s=41c61f2df5be37f5ba41d23fdc3dd2a8515dbc75', 'width': 320}, {'height': 779, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?width=640&crop=smart&auto=webp&s=4745be883d1b14334e8280cad3adbb0f8b573d9d', 'width': 640}, {'height': 1168, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?width=960&crop=smart&auto=webp&s=6d829f12a4d44eccd4881067e27bc8863d8f86f4', 'width': 960}], 'source': {'height': 1231, 'url': 'https://external-preview.redd.it/oZT-NTbMIDuu1k0Psts_32BZ2VuLra52OP6MmF-qDqY.jpg?auto=webp&s=a2d3da163b63b05c9122675e781e060139245c0f', 'width': 1011}, 'variants': {}}]}
Coding agents?
15
Hi guys, I would like to know what you use for local coding. A few months ago I tried Cline with Qwen2.5 Coder (4x3090). Are there better options now? Another dumb question: is there a simple way to connect an agentic workflow (CrewAI, AutoGen…) to a tool like Cline, Aider, etc.?
2025-04-05T11:14:38
https://www.reddit.com/r/LocalLLaMA/comments/1js14u2/coding_agents/
Leflakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js14u2
false
null
t3_1js14u2
/r/LocalLLaMA/comments/1js14u2/coding_agents/
false
false
self
15
null
Open Source GPT-4o Model Incoming?
0
Open Router [announced their first "stealth" model on X](https://x.com/OpenRouterAI/status/1907870610602275203). "It's a prerelease of an upcoming long-context foundation model from one of the model labs". So maybe the foundation 4o model or 4o-Turbo. The results are pretty cracked. Sam Altman [said it was coming a few days ago](https://x.com/sama/status/1906793591944646898). https://preview.redd.it/avm6n7u040te1.png?width=424&format=png&auto=webp&s=20524f7da599cab5e0354a71c33416c6dce9b339
2025-04-05T11:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1js1a8e/open_source_gpt4o_model_incoming/
The_Ace_72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js1a8e
false
null
t3_1js1a8e
/r/LocalLLaMA/comments/1js1a8e/open_source_gpt4o_model_incoming/
false
false
https://b.thumbs.redditm…tgsubdcnAAfM.jpg
0
{'enabled': False, 'images': [{'id': '_uQdW9f_wZVTtT3LLHNfrOfxm8jAWXPfvaxG7V4GdyA', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=108&crop=smart&auto=webp&s=0a6b9d7f22cd045540a79d6b365c1723bf0130ac', 'width': 108}, {'height': 53, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=216&crop=smart&auto=webp&s=87bbb735d0a7b03cad09baae1b4d086606504fd7', 'width': 216}, {'height': 79, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=320&crop=smart&auto=webp&s=a2173db246cfdb13ea4aac0a3b9f4043a69d8e92', 'width': 320}, {'height': 158, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=640&crop=smart&auto=webp&s=69e4643321c3330af24a3416580631beec32e634', 'width': 640}, {'height': 237, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=960&crop=smart&auto=webp&s=18ff63722b5c42bcb5d849a991b262180b4455bb', 'width': 960}, {'height': 267, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?width=1080&crop=smart&auto=webp&s=92909e89a7cbcdd5c7832a2f26ad60eaef8e7a87', 'width': 1080}], 'source': {'height': 507, 'url': 'https://external-preview.redd.it/2WiitUvCb2kPEVq18LHPyO0p_EJqJTgnFt8WdXoO8Nk.jpg?auto=webp&s=c808d62a8bac74d73cdf75e1041b4ce2f7836660', 'width': 2047}, 'variants': {}}]}
nsfw orpheus tts?!
1
[removed]
2025-04-05T11:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1js1d3j/nsfw_orpheus_tts/
MrAlienOverLord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js1d3j
false
null
t3_1js1d3j
/r/LocalLLaMA/comments/1js1d3j/nsfw_orpheus_tts/
false
false
nsfw
1
null
Survey paper on LLM on Mobile or IoT devices?
1
[removed]
2025-04-05T11:39:35
https://www.reddit.com/r/LocalLLaMA/comments/1js1is3/survey_paper_on_llm_on_mobile_or_iot_devices/
Maxim_Leon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js1is3
false
null
t3_1js1is3
/r/LocalLLaMA/comments/1js1is3/survey_paper_on_llm_on_mobile_or_iot_devices/
false
false
self
1
null
Absolute clown blames DeepSeek not tariffs on sharp market decline and global chaos
68
2025-04-05T12:36:11
https://i.redd.it/a64t3akqg0te1.png
shokuninstudio
i.redd.it
1970-01-01T00:00:00
0
{}
1js2gve
false
null
t3_1js2gve
/r/LocalLLaMA/comments/1js2gve/absolute_clown_blames_deepseek_not_tariffs_on/
false
false
https://b.thumbs.redditm…J8t93qI8JwNY.jpg
68
{'enabled': True, 'images': [{'id': 'Pp6TFoz9_ZUPbO9HNo0JtRfKVBJdCpDt4WEarCJxk0A', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=108&crop=smart&auto=webp&s=31cb68e1a2409b2ce3c8fe388a7de89e457c4181', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=216&crop=smart&auto=webp&s=4481da7a27fd565d47db727d69ff5a8254c3a68d', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=320&crop=smart&auto=webp&s=60805d76f1161c901cd49359d37497d178d1fd61', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=640&crop=smart&auto=webp&s=6b3f3693f6e893586656650190d358e4baa758a3', 'width': 640}, {'height': 1219, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=960&crop=smart&auto=webp&s=58ed8f656a0fccf0611a3a6d53b664720488938c', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?width=1080&crop=smart&auto=webp&s=b64b1881cd4604bb03b84260ddf4e4ca1df5c29e', 'width': 1080}], 'source': {'height': 1446, 'url': 'https://preview.redd.it/a64t3akqg0te1.png?auto=webp&s=da880508416161d6559af3116f95b8cd3e0eda3a', 'width': 1138}, 'variants': {}}]}
Gemma3 licence
15
Please explain to me like I'm 5 years old. What's wrong with their licence and what can I use it for? What is forbidden? Thank you.
2025-04-05T13:07:06
https://www.reddit.com/r/LocalLLaMA/comments/1js31db/gemma3_licence/
Royal_Light_9921
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js31db
false
null
t3_1js31db
/r/LocalLLaMA/comments/1js31db/gemma3_licence/
false
false
self
15
null
Karamaru - An "Edo period" LLM trained on 17th-19th century japanese literature.
134
I saw this a few days ago: a researcher from Sakana AI continually pretrained a Llama-3 Elyza 8B model on classical Japanese literature. What's cool about it is that it builds towards an idea that's been brewing in my mind and, evidently, in a lot of other people's here: > A model that's able to be a time-travelling subject matter expert. Links: Researcher's tweet: https://x.com/tkasasagi/status/1907998360713441571?t=PGhYyaVJQtf0k37l-9zXiA&s=19 Huggingface: Model: https://huggingface.co/SakanaAI/Llama-3-Karamaru-v1 Space: https://huggingface.co/spaces/SakanaAI/Llama-3-Karamaru-v1
2025-04-05T13:09:39
https://sakana.ai/karamaru/
nomad_lw
sakana.ai
1970-01-01T00:00:00
0
{}
1js335l
false
null
t3_1js335l
/r/LocalLLaMA/comments/1js335l/karamaru_an_edo_period_llm_trained_on_17th19th/
false
false
https://a.thumbs.redditm…CXkcDXN3uzz4.jpg
134
{'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=108&crop=smart&auto=webp&s=61f7124235d3c9cc17267eb2ed7de46bab49765e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=216&crop=smart&auto=webp&s=b01c782fa93b021a180dc44d7151fade86d6431d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=320&crop=smart&auto=webp&s=670eb9c9058d14ac8846a6475e3d47cb616cf011', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=640&crop=smart&auto=webp&s=f5f30bf0b3bae15b4dee53ba7bd37f2486072c04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=960&crop=smart&auto=webp&s=2c1d1a6c85eb92a670807f829ec7254dc53f1bd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=1080&crop=smart&auto=webp&s=344e6dcc7b48a81d3b6727c749b0c289aabe5547', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?auto=webp&s=efb765a9e5d3d5585101bc98246d9babdd7d3105', 'width': 1600}, 'variants': {}}]}
Stop overthinking
1
2025-04-05T14:17:35
https://v.redd.it/wv0zgduoy0te1
Connect_Honeydew_707
/r/LocalLLaMA/comments/1js4g9s/stop_overthinking/
1970-01-01T00:00:00
0
{}
1js4g9s
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/wv0zgduoy0te1/DASHPlaylist.mpd?a=1746594676%2CMGQ5NzY2YTM2OGY0NzQ4YTZiMzYzNzQxMTk0YzNmYzQxMTk1MWRhNDY5NzE5YTEyNTU3MDcyMDc0MGJjNjE5MA%3D%3D&v=1&f=sd', 'duration': 278, 'fallback_url': 'https://v.redd.it/wv0zgduoy0te1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/wv0zgduoy0te1/HLSPlaylist.m3u8?a=1746594676%2COTdkNWIyOTUwY2Q0MWFhNzA0ZDM3YTcyYTJhY2Q0ZWM0NjJkZjIyMDc4OWMyNGYwZWFkYjA2NDIzNjhiYzQxNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wv0zgduoy0te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
t3_1js4g9s
/r/LocalLLaMA/comments/1js4g9s/stop_overthinking/
false
false
https://external-preview…cd960fad0a41cdab
1
{'enabled': False, 'images': [{'id': 'a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?width=108&crop=smart&format=pjpg&auto=webp&s=aac959a39c148ab04a0a6741251ab930c3226d9c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?width=216&crop=smart&format=pjpg&auto=webp&s=9c3244a53d8a7170937474d421f2ddede10cee98', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?width=320&crop=smart&format=pjpg&auto=webp&s=77df404a3a3f75511c97fa208eac9ca9c9198de0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?width=640&crop=smart&format=pjpg&auto=webp&s=64a9980b920fc2606b4dc7f29a0be4ec5bb64907', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?width=960&crop=smart&format=pjpg&auto=webp&s=969ff05e31a38f8c485ef601b89391efbdcf7ab4', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/a3pibWc2em95MHRlMU3o_urjZQ1DT7ZksyWCP7ocOb3gpRFdJGjiKetSIpCU.png?format=pjpg&auto=webp&s=3d9303ff90dae3408b538f87e3d693e103b9fa00', 'width': 1024}, 'variants': {}}]}
I think I overdid it.
581
2025-04-05T14:21:22
https://i.redd.it/i5f8b0knz0te1.jpeg
_supert_
i.redd.it
1970-01-01T00:00:00
0
{}
1js4iy0
false
null
t3_1js4iy0
/r/LocalLLaMA/comments/1js4iy0/i_think_i_overdid_it/
false
false
https://b.thumbs.redditm…my4f3nYFZrXU.jpg
581
{'enabled': True, 'images': [{'id': '1ib0QyVQdoL_Kp_Ek-6c1_iYkpCNpfTblLYZxzwlP1A', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=108&crop=smart&auto=webp&s=f64e0026d47c2e9b4566747ecdc6651419cd0caf', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=216&crop=smart&auto=webp&s=3fec643dbea3187bf84493cbadb8be0264880f8c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=320&crop=smart&auto=webp&s=ec433940e097b3ad5d44b2ce488754a7d89fa0af', 'width': 320}, {'height': 481, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=640&crop=smart&auto=webp&s=1448cae5bed745aa96ac7b2801a7bf32c07afd26', 'width': 640}, {'height': 722, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=960&crop=smart&auto=webp&s=141ce955f62020c6bc69df07004495880d461a6a', 'width': 960}, {'height': 813, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?width=1080&crop=smart&auto=webp&s=98982b7d04c4bb7dc44262e6de89854e920cfdce', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/i5f8b0knz0te1.jpeg?auto=webp&s=e7ec405982a91cfa139b46ee023b0895892847c8', 'width': 4080}, 'variants': {}}]}
Crystal model
1
It seems like it has a personality that feels different. Is Meta cooking this time or not?
2025-04-05T14:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1js4lez/crystal_model/
Independent-Wind4462
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js4lez
false
null
t3_1js4lez
/r/LocalLLaMA/comments/1js4lez/crystal_model/
false
false
self
1
null
Does RL need only a small amount of data to train over multiple episodes?
1
[removed]
2025-04-05T14:25:08
https://www.reddit.com/r/LocalLLaMA/comments/1js4lsh/is_rl_needs_is_a_small_amount_of_data_to_train/
Important-Life-5267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js4lsh
false
null
t3_1js4lsh
/r/LocalLLaMA/comments/1js4lsh/is_rl_needs_is_a_small_amount_of_data_to_train/
false
false
https://b.thumbs.redditm…TAnBalGsNK3Q.jpg
1
{'enabled': False, 'images': [{'id': 'K2ghMcoz9mhZzLKlHVlkuplwlTLFw6B-wmoDa9AY3C0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=108&crop=smart&auto=webp&s=15fcd0fb948d21409e4602f2687a2639d8633f81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=216&crop=smart&auto=webp&s=e35b2830cc38da56469e89a677cce9b550a28568', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=320&crop=smart&auto=webp&s=f1b8a43259dfa62b0af85a203f43613c0036aea7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=640&crop=smart&auto=webp&s=496d1b77f097160a32d677566ba4a4696609bfee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=960&crop=smart&auto=webp&s=bac97d1f26ff9f4c3b75a1b4330e9e6ed7703285', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=1080&crop=smart&auto=webp&s=25e611495bb334c24f0fc033f1a54275c7ff9c10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?auto=webp&s=cb8b9a0d63ad5b99a527adb7698ab6d233d03c86', 'width': 1200}, 'variants': {}}]}
Alternative to huggingchat for chatting with models
0
Hello there. So I love HuggingChat, but its implementation of Gemma 3 crashes after a while and doesn't respond. Are there alternatives (free, with unlimited access to a selection of models) whose implementation of Gemma 3 is better? Thanks
2025-04-05T14:25:40
https://www.reddit.com/r/LocalLLaMA/comments/1js4m77/alternative_to_huggingchat_for_chatting_with/
Silver-Champion-4846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js4m77
false
null
t3_1js4m77
/r/LocalLLaMA/comments/1js4m77/alternative_to_huggingchat_for_chatting_with/
false
false
self
0
null
Does RL need only a small amount of data to train over multiple episodes?
1
[removed]
2025-04-05T14:26:08
https://www.reddit.com/r/LocalLLaMA/comments/1js4mji/is_rl_needs_is_a_small_amount_of_data_to_train/
Important-Life-5267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js4mji
false
null
t3_1js4mji
/r/LocalLLaMA/comments/1js4mji/is_rl_needs_is_a_small_amount_of_data_to_train/
false
false
https://b.thumbs.redditm…TAnBalGsNK3Q.jpg
1
{'enabled': False, 'images': [{'id': 'K2ghMcoz9mhZzLKlHVlkuplwlTLFw6B-wmoDa9AY3C0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=108&crop=smart&auto=webp&s=15fcd0fb948d21409e4602f2687a2639d8633f81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=216&crop=smart&auto=webp&s=e35b2830cc38da56469e89a677cce9b550a28568', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=320&crop=smart&auto=webp&s=f1b8a43259dfa62b0af85a203f43613c0036aea7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=640&crop=smart&auto=webp&s=496d1b77f097160a32d677566ba4a4696609bfee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=960&crop=smart&auto=webp&s=bac97d1f26ff9f4c3b75a1b4330e9e6ed7703285', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?width=1080&crop=smart&auto=webp&s=25e611495bb334c24f0fc033f1a54275c7ff9c10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gSW_QxKfEZf7NulhP8H76PHxaN_ZMmee2vEotUE5Di4.jpg?auto=webp&s=cb8b9a0d63ad5b99a527adb7698ab6d233d03c86', 'width': 1200}, 'variants': {}}]}
Does RL need only a small amount of data to train over multiple episodes?
1
[removed]
2025-04-05T14:49:04
https://www.reddit.com/r/LocalLLaMA/comments/1js54kq/is_rl_needs_is_a_small_amount_of_data_to_train/
External-Scratch2302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js54kq
false
null
t3_1js54kq
/r/LocalLLaMA/comments/1js54kq/is_rl_needs_is_a_small_amount_of_data_to_train/
false
false
https://a.thumbs.redditm…OvHvt0u8Q7V0.jpg
1
{'enabled': False, 'images': [{'id': 'cg4b2uZTQrZ2gxvk5tVo9munj_72Fv-Mp8VQTXy76js', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=108&crop=smart&auto=webp&s=1dedaa18049c4638d036ba729eb9c1aadc215f25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=216&crop=smart&auto=webp&s=0cd4b603d58e9fec446c51d33258a9c21e1e01d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=320&crop=smart&auto=webp&s=6771cabcaf75fb964850d72506183e45ad74ebd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=640&crop=smart&auto=webp&s=954066834018ad85a45ebf3b2268af124b4ee1dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=960&crop=smart&auto=webp&s=b5c12c14fef3a103d5fc50d22b7cc44f67546135', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?width=1080&crop=smart&auto=webp&s=bdef4a512601acf98e09888c1a5ec7858378a571', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/00wmLEVcANsS7vIbiQkxoQBZqurRfSbSIKV-coPmmIw.jpg?auto=webp&s=009c35991b915421811ce4c99ae1efc91ce6b574', 'width': 1200}, 'variants': {}}]}
Has anyone made a GPU cluster using 5090s? Given numerous reports about their ports burning at high power usage, and given that Nvidia removed load balancing in them, is 5090 a viable option (instead of 3090/4090)?
1
2025-04-05T14:52:34
https://i.redd.it/kuhi0ch651te1.png
nderstand2grow
i.redd.it
1970-01-01T00:00:00
0
{}
1js57cb
false
null
t3_1js57cb
/r/LocalLLaMA/comments/1js57cb/has_anyone_made_a_gpu_cluster_using_5090s_given/
false
false
https://b.thumbs.redditm…y3uoBf4RJKRI.jpg
1
{'enabled': True, 'images': [{'id': 'ApCTNlOco5oQ1gOq7ajkR_T8mgZOBxvYOKEkfj4o7PM', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=108&crop=smart&auto=webp&s=d41731c1fee3d26155836e59ebda7853abfb519a', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=216&crop=smart&auto=webp&s=b5ed54c80f801ab8865052140ed7bae3faa8a64a', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=320&crop=smart&auto=webp&s=16e3f16a82755375c90da1f6d9f1ecd6641ed832', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=640&crop=smart&auto=webp&s=286417ce4328732d0bfcf55fc15acf86bb06cd10', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=960&crop=smart&auto=webp&s=aabc00657e870138c3c0657cb6abfdb81795bef5', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?width=1080&crop=smart&auto=webp&s=708a86edb0f28e97d4218f0183cad8acd51d0084', 'width': 1080}], 'source': {'height': 624, 'url': 'https://preview.redd.it/kuhi0ch651te1.png?auto=webp&s=55b70f0819d310fc030d6bedc246630e58b03486', 'width': 1200}, 'variants': {}}]}
Has anyone made a GPU cluster using 5090s? Given numerous reports about their ports burning at high power usage, and given that Nvidia removed load balancing in them, is 5090 a viable option (instead of 3090/4090)?
1
2025-04-05T14:53:14
https://i.redd.it/bk0njj6c51te1.png
nderstand2grow
i.redd.it
1970-01-01T00:00:00
0
{}
1js57ul
false
null
t3_1js57ul
/r/LocalLLaMA/comments/1js57ul/has_anyone_made_a_gpu_cluster_using_5090s_given/
false
false
https://a.thumbs.redditm…eE2zf9XxT330.jpg
1
{'enabled': True, 'images': [{'id': 'GX_gfL_-4JfFyHx9UI3chskohggQGwb6CAT3FUF2iYA', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=108&crop=smart&auto=webp&s=cec7d0c41afbda12d01184e67077d3ff2a30fe92', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=216&crop=smart&auto=webp&s=cddecd43c2b210e4c4607c97309baec53a678acc', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=320&crop=smart&auto=webp&s=b93c4231f57b9ec62b549eb181f82d1eaad10f96', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=640&crop=smart&auto=webp&s=8be6364f2beae54f73ebba17fba044a5f3b46730', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=960&crop=smart&auto=webp&s=c76c63fde1a820e3c3d12f98094db8f71da0197f', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?width=1080&crop=smart&auto=webp&s=125a4706b8e55500ed3fca5b9cd33f92343baf4a', 'width': 1080}], 'source': {'height': 624, 'url': 'https://preview.redd.it/bk0njj6c51te1.png?auto=webp&s=48ef87d24c3e0ee8296604c842a35a9ca32ad998', 'width': 1200}, 'variants': {}}]}
Has anyone made a GPU cluster using 5090s? Given numerous reports about their ports burning at high power usage, and given that Nvidia removed load balancing in them, is 5090 a viable option (instead of 3090/4090)?
1
[removed]
2025-04-05T14:54:40
https://www.reddit.com/r/LocalLLaMA/comments/1js5909/has_anyone_made_a_gpu_cluster_using_5090s_given/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5909
false
null
t3_1js5909
/r/LocalLLaMA/comments/1js5909/has_anyone_made_a_gpu_cluster_using_5090s_given/
false
false
https://a.thumbs.redditm…1b355Q8_2Cw4.jpg
1
null
Larger context or Chunking? [ Rookie ]
1
Hey, \[I'm new to this world, so I'll probably make rookie mistakes\] I want to fine-tune a model for retrieval. The documents I want it to 'learn' have different sizes (some are a dozen lines, while others are longer), and they are in Italian. These are legal texts, so precision is a very important part of the result I'd like to obtain. What technique should I use? I saw that two options in my case would be 'overlapping' and chunking. Is there a better one for my case?
2025-04-05T14:58:14
https://www.reddit.com/r/LocalLLaMA/comments/1js5btx/larger_context_or_chunking_rookie/
Foreign_Lead_3582
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5btx
false
null
t3_1js5btx
/r/LocalLLaMA/comments/1js5btx/larger_context_or_chunking_rookie/
false
false
self
1
null
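The chunking-with-overlap option mentioned in the post above can be sketched in a few lines. This is a minimal illustration only: the `chunk_size` and `overlap` values are arbitrary assumptions, and for legal text you would likely want to split on sentence or article boundaries instead of raw character counts.

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into chunks of roughly chunk_size characters,
    where each chunk repeats the last `overlap` characters of the
    previous one, so no passage is cut off at a chunk boundary."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back to create the overlap
    return chunks
```

The overlap is what makes retrieval robust here: a clause that straddles two chunks still appears whole in at least one of them.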
Whatever Quasar Alpha is, it's excellent at translation
0
2025-04-05T15:05:08
https://nuenki.app/blog/quasar_alpha_stats
Nuenki
nuenki.app
1970-01-01T00:00:00
0
{}
1js5hir
false
null
t3_1js5hir
/r/LocalLLaMA/comments/1js5hir/whatever_quasar_alpha_is_its_excellent_at/
false
false
https://b.thumbs.redditm…dHLwbirrCGwM.jpg
0
{'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=108&crop=smart&auto=webp&s=79a054dd227c6f5432f86d0aad2f733d56deb387', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=216&crop=smart&auto=webp&s=36d6fa0f550c1aa87b8842476a42ab5e7983d775', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=320&crop=smart&auto=webp&s=a9e59b0b9832d1d263060216c6712ab86736cf73', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=640&crop=smart&auto=webp&s=33bb3dd09e1348f194cfb304ced2dd662da82a0f', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=960&crop=smart&auto=webp&s=b61c5111ebbc99dd0da8775eb45acd9ee039349d', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=1080&crop=smart&auto=webp&s=c4871bfcc51572f134a18d7c42ca6e7ba566fac5', 'width': 1080}], 'source': {'height': 2096, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?auto=webp&s=068fb20ca78df0694ec410b05a2982f47c0ae5d0', 'width': 3811}, 'variants': {}}]}
MCP Servers: The New Security Nightmare
1
[removed]
2025-04-05T15:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1js5ixs/mcp_servers_the_new_security_nightmare/
Ok_Address_5158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5ixs
false
null
t3_1js5ixs
/r/LocalLLaMA/comments/1js5ixs/mcp_servers_the_new_security_nightmare/
false
false
self
1
{'enabled': False, 'images': [{'id': 'x012-XwFXFYVNP_gZRBHKi9jlfCxZvfVC-hotYENDck', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=108&crop=smart&auto=webp&s=0f4beb45bf020b823768b66699499fb5947b31d1', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=216&crop=smart&auto=webp&s=c9831ea3a897e5be1c13a91005760b5a0ff71b98', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=320&crop=smart&auto=webp&s=ef4c362838cfc5e3650ca33991bc98979cc29468', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=640&crop=smart&auto=webp&s=d86c7f05fb912bce4eb749cee33bc02b31a123f8', 'width': 640}, {'height': 422, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=960&crop=smart&auto=webp&s=05a525391af4b5f44a0846d7d3ec90628fa556d0', 'width': 960}, {'height': 474, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=1080&crop=smart&auto=webp&s=a7d0a3c26511e02123aac75e510b3e3114457d70', 'width': 1080}], 'source': {'height': 524, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?auto=webp&s=d95311aea7d2fe4186878163219dce5b12a17515', 'width': 1192}, 'variants': {}}]}
Launching Arrakis: Open-source, self-hostable sandboxing service for AI Agents
1
[removed]
2025-04-05T15:11:11
https://www.reddit.com/r/LocalLLaMA/comments/1js5m9j/launching_arrakis_opensource_selfhostable/
abshkbh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5m9j
false
null
t3_1js5m9j
/r/LocalLLaMA/comments/1js5m9j/launching_arrakis_opensource_selfhostable/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tEko882e-KjfAddcGNuNVFlZ4XZiV_u7d6ACVwx5Odc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=108&crop=smart&auto=webp&s=996eccc59c2b8f97ff6ae8c608af31717eea4206', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=216&crop=smart&auto=webp&s=c96abe54853458c8e480a70788b6872e7dffc515', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=320&crop=smart&auto=webp&s=1eab52b7263ce301df71d486978af0c55196c73d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=640&crop=smart&auto=webp&s=2fbd8b1d9474c046f3cafa9c2f231af1dfb72b4c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=960&crop=smart&auto=webp&s=0427a27593d24a6bffacf10615f39b39bbbde578', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?width=1080&crop=smart&auto=webp&s=cdb3479c1302f39ed87a09a42108394b98db8a6e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/mrMtcp6urD5KRgDNPdD6JGfYWL9vg0nbMyeQVK6n-98.jpg?auto=webp&s=4725558159fa4ac101355e97ecb9ac73fa2429f0', 'width': 1920}, 'variants': {}}]}
Best Models for TypeScript Coding
1
I'm building in TypeScript using WebStorm and the ProxyAI plugin. I've been trying QwQ, and so far I've been pretty underwhelmed by how much it hallucinates. I've been using Llama 3.2-Vision 90B, which seems to be quite a bit better but isn't a focused coding model. Any recommendations? Running on a MacBook Pro M4 Max/40-core GPU/128 GB RAM.
2025-04-05T15:23:18
https://www.reddit.com/r/LocalLLaMA/comments/1js5vls/best_models_for_typescript_coding/
taco-prophet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5vls
false
null
t3_1js5vls
/r/LocalLLaMA/comments/1js5vls/best_models_for_typescript_coding/
false
false
self
1
null
AMD mi325x (8x) deployment and tests.
30
Hey LocalLLaMA cool people, I am back again with a new post after [amd\_mi300x(8x)\_deployment\_and\_tests](https://www.reddit.com/r/LocalLLaMA/comments/1it46dv/amd_mi300x_deployment_and_tests/). I will soon be getting access to 8 x MI325X, all connected by Infinity Fabric, and yes, 96 cores and 2TB RAM (the usual). Let me know what you guys are curious to actually test on it, and I will try to fulfill every request as much as possible: from a single model on a single GPU, to multiple models on a single GPU, or even deploying R1 and V3 in a single instance.
2025-04-05T15:23:38
https://www.reddit.com/r/LocalLLaMA/comments/1js5vwm/amd_mi325x_8x_deployment_and_tests/
Shivacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js5vwm
false
null
t3_1js5vwm
/r/LocalLLaMA/comments/1js5vwm/amd_mi325x_8x_deployment_and_tests/
false
false
self
30
null
Working on an open alternative to Claude Code
1
[removed]
2025-04-05T15:39:32
https://www.reddit.com/r/LocalLLaMA/comments/1js68bt/working_on_an_open_alternative_to_claude_code/
benhmoore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js68bt
false
null
t3_1js68bt
/r/LocalLLaMA/comments/1js68bt/working_on_an_open_alternative_to_claude_code/
false
false
self
1
{'enabled': False, 'images': [{'id': 'aBHfHlVOOyWN4-POTXA5ab4X-ljzHcomKIWUkZ7nRhQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=108&crop=smart&auto=webp&s=548ade65f74376a35faa431564a4b2b058b4d694', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=216&crop=smart&auto=webp&s=3c23a04d9814f1ce419fef111e7d7629704dffb3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=320&crop=smart&auto=webp&s=a090b4f45227cffd2c9740afa34bd6bcbea567ae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=640&crop=smart&auto=webp&s=f415462ce1aa33efa6ae1f1d6d63149944c7dc36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=960&crop=smart&auto=webp&s=02a0025fecf7929ac5f949cc2321ab762359897b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?width=1080&crop=smart&auto=webp&s=cc8d32d42da9ad60fe1a628916ea0ef91ff5a6a4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EjD2XJUc57OkDnbkVxg-OZkrqIg7KNrywEIZCfCxWjA.jpg?auto=webp&s=cd3ea372e944047ad6d96d8bd575ef1bed9abf65', 'width': 1200}, 'variants': {}}]}
MM-EUREKA-Qwen: Does RL need only a small amount of data to train over multiple episodes?
1
[removed]
2025-04-05T15:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1js6a25/mmeurekaqwen_is_rl_needs_is_a_small_amount_of/
Shot_Ad9689
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js6a25
false
null
t3_1js6a25
/r/LocalLLaMA/comments/1js6a25/mmeurekaqwen_is_rl_needs_is_a_small_amount_of/
false
false
https://b.thumbs.redditm…ilUezrlz3x_E.jpg
1
null
MM-EUREKA-Qwen: Does RL need only a small amount of data to train over multiple episodes?
1
2025-04-05T15:48:40
https://jagged-court-d9d.notion.site/MM-Eureka-Qwen-1c13cc5a384880ffbd2de24e1dee052d
Shot_Ad9689
jagged-court-d9d.notion.site
1970-01-01T00:00:00
0
{}
1js6fig
false
null
t3_1js6fig
/r/LocalLLaMA/comments/1js6fig/mmeurekaqwen_is_rl_needs_is_a_small_amount_of/
false
false
https://b.thumbs.redditm…pCyc4PEWcZic.jpg
1
{'enabled': False, 'images': [{'id': 'YxpamrHP7dC060e9PJgtGzieAJ5ztbvZvxLDo5BiiRo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=108&crop=smart&auto=webp&s=863349d2444ff2c08ec6ac3140fa614e63fbbd17', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=216&crop=smart&auto=webp&s=ca604dec05096eca82281d46e4fd6e733ed6178e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=320&crop=smart&auto=webp&s=d44682b09fbc29e07445a3b6866a27900eec8b2f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=640&crop=smart&auto=webp&s=258ecd7d26bce0f83f9e26ad8852593ea43aa751', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=960&crop=smart&auto=webp&s=376a0e7e93bb66151bf2d7aec4d5f5919e7717db', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?width=1080&crop=smart&auto=webp&s=aa455bdb809e710a097f0f1e72cfa953f618c5a7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/eTurNcfo7Aeq3lwW8x0WTR-HRg4LVpqY32hz6ZUU5Dk.jpg?auto=webp&s=d74363b764ecd95975664299f16a2fb652836c4d', 'width': 1200}, 'variants': {}}]}
I got a dual 3090... What the fuck do I do? if I run it max capacity (training) it will cost me 1-2k in electricity per year...
0
https://preview.redd.it/…e0166613a0013e
2025-04-05T16:12:53
https://www.reddit.com/r/LocalLLaMA/comments/1js6ywy/i_got_a_dual_3090_what_the_fuck_do_i_do_if_i_run/
Autumnlight_02
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js6ywy
false
null
t3_1js6ywy
/r/LocalLLaMA/comments/1js6ywy/i_got_a_dual_3090_what_the_fuck_do_i_do_if_i_run/
false
false
https://a.thumbs.redditm…ZX-770lVB_b0.jpg
0
null
I built an AI Orchestrator that routes between local and cloud models based on real-time signals like battery, latency, and data sensitivity — and it's fully pluggable.
1
[removed]
2025-04-05T16:14:06
https://www.reddit.com/r/LocalLLaMA/comments/1js6zx7/i_built_an_ai_orchestrator_that_routes_between/
Emotional-Evening-62
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js6zx7
false
null
t3_1js6zx7
/r/LocalLLaMA/comments/1js6zx7/i_built_an_ai_orchestrator_that_routes_between/
false
false
self
1
null
SoftWhisper April 2025 out – automated transcription now with speaker identification!
52
Hello, my dear GitHub friends,

It is with great joy that I announce that SoftWhisper April 2025 is out – now with speaker identification (diarization)!

https://preview.redd.it/abqqf3ppk1te1.png?width=2004&format=png&auto=webp&s=56e424822faf90f873e52f914818aaaf81412581

### A tricky feature

Originally, I wanted to implement diarization with Pyannote, but because APIs are usually not widely documented, learning not only how to use them but also how effective they are for the project is a bit difficult. Identifying speakers is still somewhat primitive even with state-of-the-art solutions. Usually, the best results are achieved with fine-tuned models and controlled conditions (for example, two speakers in studio recordings). The crux of the matter is: not only do those specialized models require a lot of money to create, but they are incredibly hard to use. That does not align with my vision of something that works reasonably well and is easy to set up, so I ran a few tests with 3-4 different approaches.

### A balanced compromise

After careful testing, I believe inaSpeechSegmenter will give our users the best balance between usability and accuracy: it's fast, identifies speakers to a more or less consistent degree out of the box, and does not require a complicated setup. Give it a try!

### Known issues

Please note: while speaker identification is more or less consistent, the current approach is still not perfect. It will sometimes miss cross speech or add more speakers than are present in the audio, so manual review is still needed. This feature is offered in the hope of making diarization easier; it is not a solved problem.

#### Increased loading times

Also keep in mind that the current diarization solution will increase loading times slightly, and if you select diarization, computation will also increase. Please be patient.

#### Other bugfixes

This release also fixes a few other bugs, notably that the exported content would sometimes not match the content in the textbox.
2025-04-05T16:20:24
https://www.reddit.com/r/LocalLLaMA/comments/1js7559/softwhisper_april_2025_out_automated/
Substantial_Swan_144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js7559
false
null
t3_1js7559
/r/LocalLLaMA/comments/1js7559/softwhisper_april_2025_out_automated/
false
false
https://a.thumbs.redditm…aUutlfMshVH8.jpg
52
{'enabled': False, 'images': [{'id': '2AnOZMCJX6x01FG5EfXMTiTfMstpAg2taRXg1T4__Us', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=108&crop=smart&auto=webp&s=7b1c6c561f498584ff017c7e4c8ebbeab5281c1e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=216&crop=smart&auto=webp&s=82f71c14fbc31e9b521ac37596bc2d8927a3e008', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=320&crop=smart&auto=webp&s=33d026665221a4488a786f8ee5ebd598ce03679b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=640&crop=smart&auto=webp&s=be2b5855980dea1dc493e754a13fc957206fc763', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=960&crop=smart&auto=webp&s=06f5fb5261058aa0c540f03ef1dd8b54d97a68e1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?width=1080&crop=smart&auto=webp&s=e9a761f6feaf11ef07305c7c02a6d6518ea1ef24', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/j1LhVFNpOOLKVH84GYEqyz2i4Q75slwPOEiPhrZyFBc.jpg?auto=webp&s=3d6ade040e77dbdea3b4780996e84bc430b181b8', 'width': 1200}, 'variants': {}}]}
Feedback and reviews needed for my Llama.cpp-based AI Chat App with RAG, Wikipedia search, and Role-playing features
1
[removed]
2025-04-05T16:26:01
https://www.reddit.com/r/LocalLLaMA/comments/1js79q8/feedback_and_reviews_needed_for_my_llamacppbased/
DAIapp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js79q8
false
null
t3_1js79q8
/r/LocalLLaMA/comments/1js79q8/feedback_and_reviews_needed_for_my_llamacppbased/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EgfX_IgBpXqPWLrawvqzPFmRAUrpFGD1cu5pnHI8-9o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cfgPH22OHrQGRCRpRKPkPjgGYSx8mfqZt3kFU8UvXLw.jpg?width=108&crop=smart&auto=webp&s=af81db0b1b3931adeb2dd3719d66a157270bd3fe', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cfgPH22OHrQGRCRpRKPkPjgGYSx8mfqZt3kFU8UvXLw.jpg?width=216&crop=smart&auto=webp&s=e85d96d09a5f540a35c8bb6c435eb94bfaf9c04a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cfgPH22OHrQGRCRpRKPkPjgGYSx8mfqZt3kFU8UvXLw.jpg?width=320&crop=smart&auto=webp&s=ff44c77e60e71181917e802a8881402ee38fbc0d', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/cfgPH22OHrQGRCRpRKPkPjgGYSx8mfqZt3kFU8UvXLw.jpg?auto=webp&s=ab756a93cca147312c8bd7f437e5ed717a44cec4', 'width': 512}, 'variants': {}}]}
Better approaches for building knowledge graphs from bulk unstructured data (like PDFs)
1
[removed]
2025-04-05T16:41:15
https://www.reddit.com/r/LocalLLaMA/comments/1js7m0s/better_approaches_for_building_knowledge_graphs/
bakaino_gai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js7m0s
false
null
t3_1js7m0s
/r/LocalLLaMA/comments/1js7m0s/better_approaches_for_building_knowledge_graphs/
false
false
self
1
null
Local LLM that answers to questions after reasoning by quoting Bible?
0
I would like to run a local LLM that fits in 24 GB of VRAM, reasons about questions, and answers them by quoting the Bible. Is there that kind of LLM? Or would it be an SLM in this case?
2025-04-05T16:44:57
https://www.reddit.com/r/LocalLLaMA/comments/1js7p1i/local_llm_that_answers_to_questions_after/
Maleficent_Age1577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js7p1i
false
null
t3_1js7p1i
/r/LocalLLaMA/comments/1js7p1i/local_llm_that_answers_to_questions_after/
false
false
self
0
null
Article reconstruction from multipage newspaper PDF
7
I am really not finding a decent way to do something which is so easy for us humans :( I have a large number of PDFs of an Italian newspaper, most of which have accessible text in them but no tags to distinguish a title, an author, a text body, etc. Moreover, articles, especially those from the first page, continue on later pages (the first part on the first page may carry an "on page 9" hint indicating which page holds the continuation). I tried to post-process the extracted text using AI language models (Claude, Gemini) via the OpenRouter API to intelligently correct OCR errors, fix formatting, replace character placeholders (CID codes), and normalize text flow, but the results are really, really bad :( Can anyone suggest a better workflow or better technologies? https://preview.redd.it/eegq0eq1r1te1.png?width=1244&format=png&auto=webp&s=f3160aa6be8b38cf02f3b60c23612545c1e02ded Here is just one screenshot of a first page. Of course, the holy grail would be to reconstruct each article, tagging the title, author, and text of each, even stitching back together the articles that continue on subsequent pages.
2025-04-05T16:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1js7z84/article_reconstruction_from_multipage_newspaper/
olddoglearnsnewtrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js7z84
false
null
t3_1js7z84
/r/LocalLLaMA/comments/1js7z84/article_reconstruction_from_multipage_newspaper/
false
false
https://b.thumbs.redditm…fVUoiX21iRBY.jpg
7
null
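For the continuation-stitching part of the newspaper problem above, one small piece is mechanical and doesn't need an LLM at all: detecting the "on page N" hint in a page's extracted text. A minimal sketch follows; the exact hint phrasings (`"continua a pagina"`, `"on page"`) are assumptions, and real mastheads will need their own patterns.

```python
import re

# Assumed continuation-hint phrasings; adjust to the newspaper's actual wording.
HINT = re.compile(r"(?:continua a pagina|on page)\s+(\d+)", re.IGNORECASE)

def continuation_page(page_text):
    """Return the page number an article claims to continue on, or None."""
    m = HINT.search(page_text)
    return int(m.group(1)) if m else None
```

Knowing the target page narrows the LLM's job to matching the article fragment against candidates on that one page, rather than the whole issue.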
plomp - python library for tracking context
2
Hi all, I wanted to share this very small Python framework I created: you add some instrumentation to a program that uses LLMs, and it generates HTML progress pages during execution. [https://github.com/michaelgiba/plomp](https://github.com/michaelgiba/plomp) I'm interested in projects like [https://github.com/lechmazur/elimination\_game/](https://github.com/lechmazur/elimination_game/), which are multi-model benchmarks/simulations, and it can be hard to debug which "character" can see what context for their decision-making. I've been locally running quantized Phi-4 instances (via llama.cpp) competing against each other, and this little tool made it easier to debug, so I decided to split it out into its own project and share it.
2025-04-05T17:02:24
https://www.reddit.com/r/LocalLLaMA/comments/1js83ka/plomp_python_library_for_tracking_context/
Embarrassed_Towel_63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js83ka
false
null
t3_1js83ka
/r/LocalLLaMA/comments/1js83ka/plomp_python_library_for_tracking_context/
false
false
self
2
{'enabled': False, 'images': [{'id': 'j7RFLSz51-rxdQn0Z04N2f8Dqt9ZpMzigddhnPL0xbw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=108&crop=smart&auto=webp&s=1edcbcf12322fe76ab8d9c9c60f5ffc4c0fc55bc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=216&crop=smart&auto=webp&s=5b10f6c072bafba295c6bba4a7ea1d7536385149', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=320&crop=smart&auto=webp&s=514fb5ea3b43a27998ed411ac5e73609d8a48ffe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=640&crop=smart&auto=webp&s=b073aecb36c2292ef62f20908d55ac3701cc873f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=960&crop=smart&auto=webp&s=b60b4e5855a236d8c329f74047d58f703109dad6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?width=1080&crop=smart&auto=webp&s=db7b69c6fbcd72524fe23e067508ba63863f894f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/51hVndvBZ4jA9CDxI-4M3jXOMQ95LB_ksTzVRqU3dD4.jpg?auto=webp&s=c5db2d5fc1a9f94a687aa5586c0ed5ee8ea31421', 'width': 1200}, 'variants': {}}]}
ChatGPT pro/plus promo codes available!
0
Comment if you want one, I have manus also😁
2025-04-05T17:39:14
https://www.reddit.com/r/LocalLLaMA/comments/1js8y4w/chatgpt_proplus_promo_codes_available/
www-reseller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js8y4w
false
null
t3_1js8y4w
/r/LocalLLaMA/comments/1js8y4w/chatgpt_proplus_promo_codes_available/
false
false
self
0
null
1-year access codes
1
[removed]
2025-04-05T17:46:55
https://www.reddit.com/r/LocalLLaMA/comments/1js94fe/1year_access_codes/
Sandy_Zimmerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1js94fe
false
null
t3_1js94fe
/r/LocalLLaMA/comments/1js94fe/1year_access_codes/
false
false
self
1
null
I built an open source Computer-use framework that uses Local LLMs with Ollama
7
2025-04-05T18:09:18
https://github.com/trycua/cua
sandropuppo
github.com
1970-01-01T00:00:00
0
{}
1js9n36
false
null
t3_1js9n36
/r/LocalLLaMA/comments/1js9n36/i_built_an_open_source_computeruse_framework_that/
false
false
https://b.thumbs.redditm…bnC0NwGWvVTo.jpg
7
{'enabled': False, 'images': [{'id': 'WLe9Ccm5pSS0MXyUtFO9TWkjqrkmL9F0Qgg1LFzVoqY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=108&crop=smart&auto=webp&s=3dfd3152e10f00215bb442d29d0147f22d389ed0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=216&crop=smart&auto=webp&s=c15238170bdb92ac4143be9fd2f677edfb98ede0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=320&crop=smart&auto=webp&s=2f7219eff769b60db7e0d9e3aa649de0e6407e4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=640&crop=smart&auto=webp&s=624a29b9725e7bb2871c8940c2c220a294d0d3e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=960&crop=smart&auto=webp&s=d0ff1b422554e2334047744d1201441d259010d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?width=1080&crop=smart&auto=webp&s=73a3230a5c7bbd2f1fd3314abb402f4cd3219205', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2AUUbeOoZ7agjDBXWHt091L224zZyg21bhTPn_iKBqY.jpg?auto=webp&s=20101ff62910bbd1f7ffaad63ea1e26e6b4f1fe7', 'width': 1200}, 'variants': {}}]}
Presenting chat.md: fully editable chat interface with MCP support on any LLM [open source][MIT license]
24
# chat.md: The Hacker's AI Chat Interface [https://github.com/rusiaaman/chat.md](https://github.com/rusiaaman/chat.md) chat.md is a VS Code extension that turns markdown files into editable AI conversations * Edit past messages of user, assistant or tool responses and have the AI continue from any point. The file editor is the chat interface and the history. * LLM-agnostic MCP support: no restrictions on tool calling on any LLM, even if it doesn't officially support tool calling. * Press shift+enter to have the AI stream its response into the chat.md file, which is also the conversation history. * Tool calls are detected and tool execution results are added to the file in an agentic loop. * Stateless. Switch the LLM provider at any point. Change the MCP tools at any point. * Put words in the LLM's mouth - edit and have it continue from there Quick start: 1. Install the chat.md vscode extension 2. Press Opt+Cmd+' (single quote) 3. Add your message in the user block and press "Shift+enter" **Your local LLM not able to follow tool-call syntax?** Manually fix its tool use once (run the tool by adding a '# %% tool\_execute' block) so that it does it right the next time, copying its past behavior.
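The file-as-history design above implies splitting the markdown into role blocks before each turn; a rough sketch of that split follows. The only block marker the post actually shows is `# %% tool_execute`, so the `user`/`assistant` marker names here are assumptions about chat.md's syntax, not confirmed.

```python
import re

# Split a chat.md-style markdown file into (marker, content) blocks.
# Only the '# %% tool_execute' marker appears in the post; the other
# marker names are assumptions for illustration.
BLOCK_RE = re.compile(r"^# %% (\w+)\s*$", re.MULTILINE)

def split_blocks(text):
    parts = BLOCK_RE.split(text)
    # parts = [preamble, marker1, content1, marker2, content2, ...]
    return [(parts[i], parts[i + 1].strip()) for i in range(1, len(parts), 2)]

chat = """# %% user
Summarize README.md
# %% assistant
I'll read the file first.
# %% tool_execute
read_file README.md
"""
for marker, content in split_blocks(chat):
    print(marker, "|", content)
```

Because the file itself is the state, re-running this split after any manual edit is what makes "edit past messages and continue from any point" cheap to support.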
2025-04-05T18:17:04
https://v.redd.it/c3gen35m22te1
Professor_Entropy
v.redd.it
1970-01-01T00:00:00
0
{}
1js9tkl
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/c3gen35m22te1/DASHPlaylist.mpd?a=1746469046%2CN2NlY2M3NDRjNTBjNTEyMTY0MjMxZjM2YTFmYmRmOTBkZmU3ODU5NTFhYjYxOGM4MjY2ZGVhMmU1ZGI0ZTQ1Mw%3D%3D&v=1&f=sd', 'duration': 187, 'fallback_url': 'https://v.redd.it/c3gen35m22te1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/c3gen35m22te1/HLSPlaylist.m3u8?a=1746469046%2CZmY0Njg3YTkyMTNhMWFlMjRjYTg0NGU0NTU1YjE5N2FjMjlmNDA0ZTZhYjkyNDA3ZjI3MmI3OTM0Yjc3YWY4Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c3gen35m22te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1186}}
t3_1js9tkl
/r/LocalLLaMA/comments/1js9tkl/presenting_chatmd_fully_editable_chat_interface/
false
false
https://external-preview…e11a0401e4757908
24
{'enabled': False, 'images': [{'id': 'MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=108&crop=smart&format=pjpg&auto=webp&s=6841f25da407324fd4584924623dad9e2828a2cf', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=216&crop=smart&format=pjpg&auto=webp&s=dfac34cba35f36ed366b089dc1051e9e465acec9', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=320&crop=smart&format=pjpg&auto=webp&s=bd718c679be16b053db83a99e2bfde3d15c6e7bf', 'width': 320}, {'height': 388, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=640&crop=smart&format=pjpg&auto=webp&s=cefbf5a46ed21cb9df9df2d63d0083f404d67915', 'width': 640}, {'height': 582, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=960&crop=smart&format=pjpg&auto=webp&s=c746dc01d6baa93992bfb9617dae1f0eba674de9', 'width': 960}, {'height': 655, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?width=1080&crop=smart&format=pjpg&auto=webp&s=468d9c9093d0e93b2a8ed0e7100da88b203a9377', 'width': 1080}], 'source': {'height': 874, 'url': 'https://external-preview.redd.it/MWxtZHE0NW0yMnRlMcCcpkXwt-WbbuHPY-2jNSelzM5Bm5bES2mvySmIn5un.png?format=pjpg&auto=webp&s=8ae9864ad46ea932181fafac75fe2f5a5bb62ea0', 'width': 1440}, 'variants': {}}]}
Automating Code Changelogs at a Large Bank with LLMs
1
2025-04-05T18:38:40
https://www.tensorzero.com/blog/case-study-automating-code-changelogs-at-a-large-bank-with-llms
bianconi
tensorzero.com
1970-01-01T00:00:00
0
{}
1jsabg8
false
null
t3_1jsabg8
/r/LocalLLaMA/comments/1jsabg8/automating_code_changelogs_at_a_large_bank_with/
false
false
https://b.thumbs.redditm…q92wBMcVl60g.jpg
1
{'enabled': False, 'images': [{'id': 'P-gzOWYQKP4LnfofqbNpRwMMhw_Ji3UHj7vOqo4Q4G0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=108&crop=smart&auto=webp&s=57605a69a66c511bda3522ace671a0384645f0c1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=216&crop=smart&auto=webp&s=0d82e6a996b3e0ae603e2d55a04ae21443046803', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=320&crop=smart&auto=webp&s=f58c323345224360de5e4544ec8df96f8014213c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=640&crop=smart&auto=webp&s=6c5ceb4cc8009aa13a4e94948769af815b1fed4a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=960&crop=smart&auto=webp&s=5b12a0db355ea72d426a1fa40b3a03f9e6a66f4d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?width=1080&crop=smart&auto=webp&s=7bfcbec5e8179ab9c3dc61486df49d2e7b651193', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JtmFfYHtP9kClsQnq7sKE3Kjh4Qi_lQbdW9AZ9JyOaw.jpg?auto=webp&s=0bbeefb32a62f482339c4ac33b91184eb0cce49b', 'width': 1200}, 'variants': {}}]}
Meta: Llama4
1,183
2025-04-05T18:38:40
https://www.llama.com/llama-downloads/
pahadi_keeda
llama.com
1970-01-01T00:00:00
0
{}
1jsabgd
false
null
t3_1jsabgd
/r/LocalLLaMA/comments/1jsabgd/meta_llama4/
false
false
https://b.thumbs.redditm…fpFGyL2Zqd8A.jpg
1,183
{'enabled': False, 'images': [{'id': '1trQDCltjlVYbHOLmQARC47fXdkjPeEmafqAlfJ_kDg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=108&crop=smart&auto=webp&s=385b09ce9767e534f968136ce7159ef8cd96a2d5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=216&crop=smart&auto=webp&s=bbfa32b4415e806faa84a7d8c7e1302611c6185f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=320&crop=smart&auto=webp&s=d6c3cc05f9ac22620d1c86baac3261383ce9397b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=640&crop=smart&auto=webp&s=e3c2d0eac2996298f7e242609a095f7deafa5ac1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=960&crop=smart&auto=webp&s=4ca7168d5b7e7e2cff5607a152e155f7a9633fdd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?width=1080&crop=smart&auto=webp&s=68bc537c15369ed71cdb05909dd272c91b153db3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/cwgFslgMUPL6p26FpnXYan8AI9J3Uz-yA2DZbRx4puk.jpg?auto=webp&s=e791a04c831670e0a0eb67f7bd228d636528e74a', 'width': 1200}, 'variants': {}}]}
Llama4 Released
67
2025-04-05T18:41:26
https://www.llama.com/llama4/
latestagecapitalist
llama.com
1970-01-01T00:00:00
0
{}
1jsadt3
false
null
t3_1jsadt3
/r/LocalLLaMA/comments/1jsadt3/llama4_released/
false
false
default
67
null