Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2025-06-30 03:16:29, nullable |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k, nullable |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k, nullable |
Any open source project exploring MoE aware resource allocation?
| 5 |
Is anyone aware of, or working on, any open source projects doing MoE-aware resource allocation?
It looks like ktransformers, ik_llama, and llama.cpp now all allow you to selectively offload certain layers onto CPU/GPU resources.
It feels like the next steps are to perform MoE profiling to identify the most activated experts for preferential offloading onto higher performing computing resources. For a workload that's relatively predictable (e.g. someone only uses their LLM for Python coding, etc) I imagine there could be a large win here even if the whole model can't be loaded into GPU memory.
If there were profiling tools built into these tools we could make much better decisions about which layers could be statically allocated into GPU memory.
It's possible that these experts could even migrate into and out of GPU memory based on ongoing usage.
Anyone working on this?
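For illustration, here's a minimal sketch of what expert-usage profiling plus placement planning could look like; the routing-log format, expert IDs, and GPU budget are made up for the example, not taken from ktransformers, ik_llama, or llama.cpp:

```python
from collections import Counter

def profile_expert_usage(routing_log):
    """Count how often each (layer, expert) pair is activated.

    routing_log: iterable of (layer_idx, expert_idx) tuples captured from a
    representative workload via hypothetical router instrumentation.
    """
    return Counter(routing_log)

def plan_gpu_placement(counts, gpu_expert_budget):
    """Statically pin the most frequently activated experts to GPU memory."""
    ranked = [pair for pair, _ in counts.most_common()]
    gpu_resident = set(ranked[:gpu_expert_budget])
    cpu_resident = set(ranked[gpu_expert_budget:])
    return gpu_resident, cpu_resident

# Illustrative routing trace from a Python-coding-heavy workload.
trace = [(0, 3), (0, 3), (0, 7), (1, 12), (1, 12), (1, 12), (2, 5)]
gpu, cpu = plan_gpu_placement(profile_expert_usage(trace), gpu_expert_budget=2)
print("GPU-resident experts:", gpu)
print("CPU-resident experts:", cpu)
```

The migration idea would then amount to recomputing this placement over a sliding window of recent routing decisions instead of a one-off profile.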
| 2025-04-24T15:58:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6vtxh/any_open_source_project_exploring_moe_aware/
|
CockBrother
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6vtxh
| false | null |
t3_1k6vtxh
|
/r/LocalLLaMA/comments/1k6vtxh/any_open_source_project_exploring_moe_aware/
| false | false |
self
| 5 | null |
How useful is training your own vision model?
| 0 |
If I want to use an encoder-decoder architecture to train a small 1.5B custom vision model, then fine-tune it for simple tasks like "tell me the color of the shirt each person is wearing", and then train it on a million or so diverse examples, would it reach convergence? I know some ViTs embed the images and then use a decoder-only architecture, but wouldn't that introduce instability, given that the image side might lose detail quickly without a steady residual backbone on the encoder side?
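For reference, here's a minimal PyTorch sketch of the encoder-decoder wiring being described (a patch encoder whose outputs the text decoder cross-attends to); every dimension and class name is illustrative, and nothing here speaks to whether a million examples would reach convergence:

```python
import torch
import torch.nn as nn

class TinyVisionEncoderDecoder(nn.Module):
    """Illustrative encoder-decoder VLM: patch encoder -> cross-attending text decoder."""
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        # Encoder side: project flattened 16x16x3 patches and contextualize them.
        self.patch_proj = nn.Linear(768, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Decoder side: text tokens cross-attend to the encoded patches (the "memory").
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches, tokens):
        memory = self.encoder(self.patch_proj(patches))      # image kept in its own residual stream
        hidden = self.decoder(self.tok_emb(tokens), memory)  # causal mask omitted for brevity
        return self.lm_head(hidden)

model = TinyVisionEncoderDecoder()
patches = torch.randn(2, 196, 768)          # 2 images, 196 patches each
tokens = torch.randint(0, 32000, (2, 16))   # 2 token sequences
logits = model(patches, tokens)             # shape (2, 16, 32000)
```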
| 2025-04-24T16:01:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6vvwz/how_useful_is_training_your_own_vision_model/
|
Pretty-City-1025
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6vvwz
| false | null |
t3_1k6vvwz
|
/r/LocalLLaMA/comments/1k6vvwz/how_useful_is_training_your_own_vision_model/
| false | false |
self
| 0 | null |
Ownership of LLM Data Sold to OpenAI or Partially Generated with OpenAI's LLM Model.
| 0 |
For companies like HarveyAI that use OpenAI's LLM alongside their own local LLM, OpenAI does not automatically gain ownership of the data - only the ability to process it.
Is this generally true for data vendors / vendors utilizing OpenAI's LLM with their local LLM? I assume it generally depends on the sale / license agreement, but I'm curious what vendors of different sizes have found - not just the Harveys out there. **If there are terms and conditions out there, I would love to be directed to them.**
Thank you so much for your time!
| 2025-04-24T16:12:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6w6iq/ownership_of_llm_data_sold_to_openai_or_partially/
|
d3geny
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6w6iq
| false | null |
t3_1k6w6iq
|
/r/LocalLLaMA/comments/1k6w6iq/ownership_of_llm_data_sold_to_openai_or_partially/
| false | false |
self
| 0 | null |
What is the hardest math your AI can do?
| 37 |
I'm trying to build an AI for doing math problems using only my local setup. I'm curious to know what results other people have gotten. I've looked online, and it seems the most recent news for a corporate setup was Google solving some geometry problems.
| 2025-04-24T16:46:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6x0qm/what_is_the_hardest_math_your_ai_can_do/
|
OrthogonalToHumanity
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6x0qm
| false | null |
t3_1k6x0qm
|
/r/LocalLLaMA/comments/1k6x0qm/what_is_the_hardest_math_your_ai_can_do/
| false | false |
self
| 37 | null |
Deepcogito Cogito v1 preview 14B Quantized Benchmark
| 67 |
Hi,
I'm GPU poor (3060TI with 8GB VRAM) and started using the 14B Deepcogito model based on Qwen 2.5 after seeing their post.
The best quantization I can use at a decent speed is Q5_K_S, with a generation speed varying from 5-10 tok/s depending on the context.
From daily usage it seems great: great at instruction following, good text understanding, very good in multi language, not SOTA at coding but it is not my primary use case.
So I wanted to assess how the quant affected performance, and I ran a 20% subset of MMLU-PRO (about 9 hours of testing) to get an idea:
**MMLU-PRO (no reasoning)**
|overall|biology|business|chemistry|computer science|economics|engineering|health|history|law|math|philosophy|physics|psychology|other|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|69.32|81.12|71.97|68.14|74.39|82.14|56.48|71.17|67.11|54.09|78.89|69.70|62.16|79.87|63.04|
An overall score of 69.32 is in line with the 70.91 claimed in the Deepcogito blog post.
Then I wanted to check the difference between reasoning and no reasoning, and I chose GPQA Diamond for this.
**GPQA no reasoning**
Accuracy: 0.41919191919191917
Refusal fraction: 0.0
**GPQA reasoning**
Accuracy: 0.54
Refusal fraction: 0.020202020202
The refusals were due to the thinking process entering a loop, generating the same sentence over and over again.
These are incredible results considering that, according to [https://epoch.ai/data/ai-benchmarking-dashboard](https://epoch.ai/data/ai-benchmarking-dashboard) and [https://qwenlm.github.io/blog/qwen2.5-llm/](https://qwenlm.github.io/blog/qwen2.5-llm/):
DeepSeek-R1-Distill-Qwen-14B ==> 0.447
Qwen 2.5 14B ==> 0.328
Both at full precision.
These are numbers on par with a couple of higher-class LLMs, and the reasoning mode is quite usable, usually not generating a lot of tokens for thinking.
I definitely recommend this model over Gemma 3 or Mistral Small for us GPU poors, and I would really love to see how the 32B version performs.
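For anyone reproducing numbers like these, here's a small sketch of how accuracy and refusal fraction can be computed from graded answers; the record format is an assumption, not the output of any particular eval harness:

```python
def score_run(results):
    """results: list of dicts like {"predicted": "B", "gold": "B", "refused": False}.

    A "refusal" marks items where no final answer could be extracted, e.g. the
    thinking process looped forever. Refusals count against accuracy here;
    other harnesses may exclude them from the denominator instead.
    """
    total = len(results)
    refusals = sum(1 for r in results if r["refused"])
    correct = sum(1 for r in results if not r["refused"] and r["predicted"] == r["gold"])
    return {"accuracy": correct / total, "refusal_fraction": refusals / total}

# Illustrative: 2 correct out of 4, 1 refusal.
runs = [
    {"predicted": "A", "gold": "A", "refused": False},
    {"predicted": "C", "gold": "B", "refused": False},
    {"predicted": "D", "gold": "D", "refused": False},
    {"predicted": None, "gold": "A", "refused": True},
]
print(score_run(runs))  # {'accuracy': 0.5, 'refusal_fraction': 0.25}
```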
| 2025-04-24T17:00:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6xczy/deepcogito_cogito_v1_preview_14b_quantized/
|
fakezeta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6xczy
| false | null |
t3_1k6xczy
|
/r/LocalLLaMA/comments/1k6xczy/deepcogito_cogito_v1_preview_14b_quantized/
| false | false |
self
| 67 |
{'enabled': False, 'images': [{'id': 'NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=108&crop=smart&auto=webp&s=9cc367fa356aaccfa06ff8fd7d48c8b58e587197', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=216&crop=smart&auto=webp&s=c6bc99d9cd158494f5b9cdc7a08154332ea9579a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=320&crop=smart&auto=webp&s=75e5b17a56533f195863fff98043c19b6a04fdd3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=640&crop=smart&auto=webp&s=2ffa5a201b5d2f22f28f75a4a24db4974bda1cb1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=960&crop=smart&auto=webp&s=a0a2f1f86cf5bc3c8c2c433702a3d5e2d5c2f49a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?width=1080&crop=smart&auto=webp&s=fa2b7c6a37bb24aa693e01322b9a76321bed5df1', 'width': 1080}], 'source': {'height': 1197, 'url': 'https://external-preview.redd.it/NvkYiOHwNGsTKTkhuD0m2-I2LmLnLiaDS58Rg5iauxQ.png?auto=webp&s=b1265aba3c2b6e6cb9dde8aaff340ed7b73c414e', 'width': 2127}, 'variants': {}}]}
|
RTX 5090 LLM Benchmarks - outperforming the A100 by 2.6x
| 105 |
>Our testing revealed that despite having less VRAM than both the A100 (80GB) and RTX 6000 Ada (48GB), the RTX 5090 with its 32GB of memory consistently delivered superior performance across all token lengths and batch sizes.
>To put the pricing in perspective, the 5090 costs $0.89/hr in Secure Cloud, compared to the $0.77/hr for the RTX 6000 Ada, and $1.64/hr for the A100. But aside from the standpoint of VRAM (the 5090 has the least, at 32GB) it handily outperforms both of them. If you are serving a model on an A100 though you could simply rent a 2x 5090 pod for about the same price and likely get double the token throughput - so for LLMs, at least, it appears there is a new sheriff in town.
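A quick back-of-the-envelope check of that claim (hourly rates from the quoted post; the throughput figures are placeholders to show the comparison, not measurements):

```python
a100_price = 1.64               # $/hr, from the quoted post
dual_5090_price = 2 * 0.89      # $/hr for a 2x RTX 5090 pod
print(f"2x RTX 5090 at ${dual_5090_price:.2f}/hr vs A100 at ${a100_price:.2f}/hr "
      f"({dual_5090_price / a100_price - 1:+.0%} cost)")

# What matters for serving is cost per token; throughput here is hypothetical.
assumed_tok_per_s = {"A100": 1000, "2x RTX 5090": 2000}
for name, price in [("A100", a100_price), ("2x RTX 5090", dual_5090_price)]:
    usd_per_m_tok = price / (assumed_tok_per_s[name] * 3600) * 1e6
    print(f"{name}: ${usd_per_m_tok:.2f} per million tokens (assumed throughput)")
```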
| 2025-04-24T17:06:34 |
https://blog.runpod.io/rtx-5090-llm-benchmarks-for-ai-is-it-the-best-gpu-for-ml/
|
takuonline
|
blog.runpod.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6xiy1
| false | null |
t3_1k6xiy1
|
/r/LocalLLaMA/comments/1k6xiy1/rtx_5090_llm_benchmarks_outperforming_the_a100_by/
| false | false | 105 |
{'enabled': False, 'images': [{'id': 'VVngVqAtyyyDmt-8UaD91Doo7tXZfT5FCRss4jM6S08', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=108&crop=smart&auto=webp&s=01347fa05cffda2c49432ca2f0d6ee9feca02af0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=216&crop=smart&auto=webp&s=5bad5e282665f62f7bfa6f0c1cf8498462ecd68c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=320&crop=smart&auto=webp&s=4ba92e8ef0dddce3b77079c6480348e9a4f89a29', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=640&crop=smart&auto=webp&s=c0b1fc08d775d6fe0c3886c63639965284e34fe2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=960&crop=smart&auto=webp&s=14785b6bebcb0f2ee768bc5ff1c03b6ccaf1f864', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?width=1080&crop=smart&auto=webp&s=9767cdd0599040fbec3ed098c17dfe2d0dce89b0', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/eiCf8y8ncWiZV_kRkOqMb9U44-ptjGTnaw-ROU8qPKM.jpg?auto=webp&s=94dfa273dcf06652660527792251d12bc17c2945', 'width': 1200}, 'variants': {}}]}
|
|
Summaries of the creative writing quality of Llama 4 Maverick, DeepSeek R1, DeepSeek V3-0324, Qwen QwQ, Gemma 3, and Microsoft Phi-4, based on 18,000 grades and comments for each
| 43 |
From [LLM Creative Story-Writing Benchmark](https://github.com/lechmazur/writing/)
---
# Llama 4 Maverick
## 1. Overall Evaluation of Llama 4 Maverick’s Performance
Across six writing tasks, **Llama 4 Maverick** demonstrates notable technical competence and surface-level creativity, but is consistently undermined by deeply rooted narrative and stylistic shortcomings. Its primary strength lies in the generation of visually imaginative settings, consistent tonal control, and the ability to weave together prompt-required elements in a superficially coherent manner. The model’s output exhibits punctual use of metaphor, frequent poetic flourishes, and occasional sparks of inventive imagery or motif.
However, major weaknesses are pervasive and damaging to literary quality:
- **Lack of Depth and Specificity**: Characters remain archetypal and undeveloped, their motivations and transformations told rather than convincingly dramatized. Emotional journeys are declared through summary, not built through scenes, and little psychological consistency or growth is observed.
- **Plot Inertia and Mechanical Structure**: Story events are stitched together by logic of prompt rather than by organic causality. Obstacles and conflicts are minimal or generic, with resolutions often feeling rushed, forced, or unearned. Narrative arcs follow predictable templates, rarely subverting expectations or delivering genuine surprise.
- **Surface-Level Worldbuilding**: While settings are visually rich, they are typically props for the premise rather than engines driving character or plot. Multisensory immersion is rare, as is any sense that the world’s internal logic matters or is shaped by the story’s events.
- **Stylistic Overwriting and Abstraction**: Maverick persistently confuses abstraction and ornament with depth, resorting to purple prose, heavy-handed metaphors, and platitudinous conclusions that substitute for earned emotional payoff. The prose is technically “writerly” but often rings hollow or generic.
- **Artificial Integration of Required Elements**: Especially under tight word constraints, the model treats prompts as checklists, inserting tokens in ways that serve requirement rather than narrative necessity, hampering organic storytelling.
- **Deficiency in Conflict and Stakes**: Internal and external stakes are routine, vague, or absent. Rarely do characters face difficult choices or credible adversity; narrative change is asserted rather than constructed.
**Summary Judgment:** Llama 4 Maverick produces fiction that is competent on the surface but hollow at its core. Its inability to dramatize, to risk specificity, and to unite character, plot, and setting into mutually reinforcing engines makes its stories read as exercises or atmospheric sketches rather than lived, memorable fiction. The work is rarely alive to surprise, ambiguity, or narrative rigor. For all the creative window-dressing, the essential machinery of dynamic storytelling remains missing.
---
# DeepSeek R1
## 1. Overall Evaluation: Strengths & Weaknesses
DeepSeek R1 displays impressive literary competence, marked by vivid sensory detail, structural discipline, inventive world-building, and the ability to maintain cohesive, compressed narratives under tight constraints. The model excels at integrating mandated story elements, presenting clear arcs (even in microfiction), and weaving metaphor and symbolism into its prose. Voice consistency and originality—particularly in metaphor and conceptual blend—set this model apart from more formulaic LLMs.
However, these technical strengths often become excesses. The model leans on dense, ornate language—metaphor and symbolism risk crossing from evocative to overwrought, diluting clarity and narrative propulsion. While the settings and imagery are frequently lush and inventive, genuine psychological depth, character messiness, and narrative surprise are lacking. Too often, characters are archetypes or vessels for theme, their transformation either rushed, asserted, or falling back on familiar genre beats. Emotional and philosophical ambit sometimes outpace narrative payoff, with endings that can be abrupt, ambiguous, or more poetic than satisfying.
Dialogue and supporting roles are underdeveloped; side characters tend to serve plot mechanics rather than organic interaction or voice. Thematic resonance is attempted through weighty abstraction, but the most successful stories ground meaning in concrete stakes and lived, embodied consequence.
In sum: **DeepSeek R1 is an accomplished stylist and structuralist, whose inventiveness and control over microfiction is clear—but who too often mistakes linguistic flourish for authentic storytelling. The next leap demands a willingness to risk imperfection: less reliance on prescribed metaphor, more unpredictable humanity; less narrative convenience, more earned, organic transformation.**
---
# DeepSeek V3-0324
**1. Overall Evaluation: DeepSeek V3-0324 Across Tasks (Q1–Q6)**
DeepSeek V3-0324 demonstrates solid baseline competence at literary microtasks, showing consistent strengths in structural clarity, evocative atmospheric detail, and the integration of symbolic motifs. Across genres and prompt constraints, the model reliably produces stories with clear beginnings, middles, and ends, knitting together assigned elements or tropes with mechanical efficiency. Its ability to conjure immersive settings, particularly via sensory language and metaphor, stands out as a persistent strength—descriptions are often vivid, with imaginative worldbuilding and a penchant for liminal or symbolic locales.
Narrative cohesion and deliberate brevity are frequently praised, as is the avoidance of egregious AI “tells” like incoherent plot jumps. Occasionally, the model manifests moments of genuine resonance, threading physical object or environment seamlessly with character emotion and theme.
However, an equally persistent set of weaknesses undermines the literary impact. Emotional arcs and character transformations are generally formulaic, proceeding along predictable lines with tidy, unearned resolutions and minimal risk or friction. The model frequently tells rather than shows, especially around epiphanies, conflict, and internal change, leading to an abundance of abstract or expository statements that crowd out subtext and psychological depth.
Symbolic motifs and metaphors, while initially striking, become a crutch—either forced or repetitive, with over-explained significance that erodes nuance. Dialogue is typically utilitarian and rarely idiosyncratic or memorable. Too often, assigned story elements or required objects feel artificially inserted rather than organically essential; the constraint is managed, not transcended. Stories default to atmospheric set-dressing or ornate prose, but this sometimes veers into purple or generic territory, with style overtaking clear narrative stakes or authentic emotion.
In sum: DeepSeek V3-0324 is a capable literary generalist. It excels at prompt satisfaction, atmospheric writing, and surface cohesion, but lacks the risk, subversiveness, and organic emotional complexity that elevates microfiction from competent to truly memorable. Its work is reliably “complete” and sometimes striking, but too rarely lingers, surprises, or fully earns its insight.
---
# Qwen QwQ-32B 16K
**Overall Evaluation of Qwen QwQ-32B 16K Across Six Writing Tasks (Q1–Q6):**
Qwen QwQ-32B 16K demonstrates a notable level of consistency and technical proficiency across varied fiction writing tasks. The model excels at basic storytelling competence: it unfailingly provides clear character motivations, structured plot arcs, vivid sensory details, and cohesively integrates prompts and assigned elements—even under tight word constraints. Its command of atmospheric language and symbolic imagery stands out, frequently producing lush, poetic passages and stories that leave readers with a sense of lingering resonance or philosophical closure.
However, this technical fluency often comes at the cost of emotional immediacy, originality, and genuine literary risk. The model habitually “checks the boxes” for motivation, transformation, and theme, but the results feel mechanically competent rather than lived or surprising. Emotional arcs and character changes are typically announced or summarized, rather than dramatized; backstories and stakes are routinely present but rarely idiosyncratic, and dialogue is functional more than distinctive. Settings are immersive, but can veer into genre-derived tropes, serving as skilled pastiche rather than authentic worlds.
The thematic ambition is evident: stories regularly grapple with memory, loss, tradition, identity, and transformation. Yet, the model’s penchant for abstraction, symbolism, and tightly-woven theme sometimes yields opacity, didacticism, or a lack of visceral impact. Endings are often neat, poetic, and “lingering,” but seldom unsettle or cathartically satisfy—the narrative risk and narrative messiness of great fiction are largely absent.
In summary, Qwen QwQ-32B 16K is a master of the “artificially artful”—technically even-handed, symbolically rich, and atmospherically adept. Still, it often feels like a virtuoso performer of literary scales, not an improviser: it rarely surprises, bruises, or stuns, instead delivering careful, competent fiction that evokes admiration, not awe. Its greatest barrier to true literary excellence lies in its relentless safety, formula adherence, and preference for tidy thought over authentic emotional rupture.
---
# Gemma 3 27B
## 1. Concise Overall Evaluation of Gemma 3 27B across Q1–Q6
**Gemma 3 27B demonstrates a high level of literary craft, especially in its ability to generate structurally coherent, thematically cohesive, and “literary” short fiction that integrates given elements with notable smoothness. Across all tasks, the model is praised for its clarity of purpose, consistent narrative arcs, and frequent use of symbolic detail, metaphor, and creative approaches to prompt requirements. When at its best, Gemma can weave disparate elements (e.g., objects, timeframes, attributes) into organic, resonant stories boasting subtle thematic undertones and emotionally satisfying, if understated, resolutions.**
However, this proficiency often reveals its algorithmic seams. Recurring weaknesses include a tendency toward surface-level characterization (“traits are labeled, not lived”), conflict and transformation that are told rather than shown, and resolutions that too frequently feel rushed or unearned. The model’s prose, though often polished and poetic, lapses into familiar metaphors, abstract statements, and sometimes over-orchestrated language that prioritizes form over substance. While Gemma reliably achieves “closure” and thematic neatness, it seldom generates the surprise, risk, or psychological messiness that marks unforgettable fiction.
Supporting characters are consistently underdeveloped, serving mainly as devices for protagonist growth or plot necessity. The settings can be vivid and atmospherically charged, but their integration into plot and character motivation sometimes feels decorative or forced. Even when stories are imaginative in premise, originality is often undercut by formulaic structures and familiar emotional arcs.
In sum, Gemma 3 27B is a skilled generator of high-level, publishable vignettes and literary exercises. Its work is rarely bad or generic, usually polished and thoughtful, yet it remains “safe,” tending to echo predictable literary conventions and avoiding the narrative risks required for true artistic distinction. The stories are compellingly crafted, but rarely haunting, urgent, or genuinely novel in either theme or execution.
---
# Microsoft Phi-4
## 1. Concise Overall Evaluation (≈200–300 words)
**Microsoft Phi-4 demonstrates technical competence and mechanical reliability in short literary tasks, but its writing falls short of true artistry or emotional resonance.** Across all prompts, the model consistently produces stories that are well-structured, grammatically correct, and attentive to required elements. It is particularly adept at thematic framing, deploying symbolic objects or motifs, and establishing a mood or atmosphere.
However, the model’s fundamental weaknesses consistently undermine these strengths. Chief among these is an overwhelming reliance on generalization and abstraction: characters’ traits, motivations, and transformations are *told* rather than *shown*, typically through summary statements and platitudes rather than dramatized action or dialogue. Settings, while superficially imaginative, serve mostly as decorative backdrops that rarely influence character behavior or narrative progression in meaningful ways. Conflict, stakes, and genuine change are muted or glossed over—resolutions arrive conveniently, emotional shifts happen by narrative fiat, and obstacles either lack bite or are philosophical rather than situational.
Stylistically, Phi-4’s stories frequently deploy “poetic” or ornate language, but this often functions as window-dressing, masking thin plotting and a deficit of concrete detail. The prose quickly becomes repetitive, abstract, and formulaic, betraying the underlying algorithm. Characters lack idiosyncratic voice; their emotional journeys feel preordained and safe, with little evidence of narrative risk, surprise, or messy humanity.
**In sum, Phi-4’s stories embody competent structure and surface-level creativity, but suffer from hollowness, generic abstraction, and a formulaic, “checkbox” approach to storytelling.** Until the model can imbue narrative with specific, lived detail and organic dramatic movement, it will remain on the threshold of literary credibility—able to simulate fiction, but rarely to *move* the reader.
---
**Leaderboard:**
| Rank | LLM | Mean |
|-----:|-------------------|------:|
| 1 | o3 (medium reasoning) | 8.43 |
| 2 | DeepSeek R1 | 8.34 |
| 3 | GPT-4o Mar 2025 | 8.22 |
| 4 | Claude 3.7 Sonnet Thinking 16K | 8.15 |
| 5 | Gemini 2.5 Pro Exp 03-25 | 8.10 |
| 6 | Qwen QwQ-32B 16K | 8.07 |
| 7 | Gemma 3 27B | 8.04 |
| 8 | Claude 3.7 Sonnet | 8.00 |
| 9 | DeepSeek V3-0324 | 7.78 |
| 10 | Gemini 2.5 Flash Preview 24K | 7.72 |
| 11 | Grok 3 Beta (no reasoning) | 7.71 |
| 12 | GPT-4.5 Preview | 7.65 |
| 13 | o4-mini (medium reasoning) | 7.60 |
| 14 | Gemini 2.0 Flash Think Exp 01-21 | 7.49 |
| 15 | Claude 3.5 Haiku | 7.49 |
| 16 | Grok 3 Mini Beta (low) | 7.47 |
| 17 | Qwen 2.5 Max | 7.42 |
| 18 | Gemini 2.0 Flash Exp | 7.27 |
| 19 | o1 (medium reasoning) | 7.15 |
| 20 | Mistral Large 2 | 7.00 |
| 21 | GPT-4o mini | 6.84 |
| 22 | o1-mini | 6.64 |
| 23 | Microsoft Phi-4 | 6.40 |
| 24 | o3-mini (high reasoning) | 6.38 |
| 25 | o3-mini (medium reasoning) | 6.36 |
| 26 | Llama 4 Maverick | 6.35 |
| 27 | Amazon Nova Pro | 6.22 |
| 2025-04-24T17:15:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6xqt2/summaries_of_the_creative_writing_quality_of/
|
zero0_one1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6xqt2
| false | null |
t3_1k6xqt2
|
/r/LocalLLaMA/comments/1k6xqt2/summaries_of_the_creative_writing_quality_of/
| false | false |
self
| 43 |
{'enabled': False, 'images': [{'id': '_jjyBw-iMZYDzjbKq12e3PzDYB4LG32t13YwTWffMD8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=108&crop=smart&auto=webp&s=3b02a56dfce6ff0a0c36118aee91a80db66166cf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=216&crop=smart&auto=webp&s=bea1ddf6f24f7397a17553977e9a96bc010d2f9e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=320&crop=smart&auto=webp&s=ee76508c88746fb939ba515eb810a754f226c281', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=640&crop=smart&auto=webp&s=84b60f56eb912080e78c3c7baee89d36331d02f3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=960&crop=smart&auto=webp&s=04a0142cf305a1ef4145032a7cade61c1a085d4d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?width=1080&crop=smart&auto=webp&s=a0bc9b91c5700dc2f7ebf42651dfb89cf0778aed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PbWQtQP3VCVQRKmZsUvVD5jujXfoLHDMnY-JSPdr4pQ.jpg?auto=webp&s=eae1309456da1c40fbe78e01a37ceb96194169a9', 'width': 1200}, 'variants': {}}]}
|
Excel query agent
| 1 |
[removed]
| 2025-04-24T17:25:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6xzkf/excel_query_agent/
|
aminekissai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6xzkf
| false | null |
t3_1k6xzkf
|
/r/LocalLLaMA/comments/1k6xzkf/excel_query_agent/
| false | false |
self
| 1 | null |
Cantor diagonalization on LLMs
| 0 |
Hi folks, I'm a computer science student and I've been wondering about this:
In computer science there are provably unsolvable problems, shown via diagonalization; the best known is probably the halting problem: can you write a program that recognizes whether another program will ever halt? Short answer: no (for the long answer, read Sipser). However, do you think it is possible to diagonalize an LLM to get a checker that verifies whether the network has hallucinated? Is it possible to diagonalize an artificial intelligence? Could this be the missing piece for the long-awaited AGI?
| 2025-04-24T17:34:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6y7g8/diagonalizzazione_di_cantor_su_llm/
|
YardHaunting5620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6y7g8
| false | null |
t3_1k6y7g8
|
/r/LocalLLaMA/comments/1k6y7g8/diagonalizzazione_di_cantor_su_llm/
| false | false |
self
| 0 | null |
Cantor's diagonalization for LLMs
| 0 |
Hi guys, I'm a computer science student and I'm wondering this:
In computer science there are provably unsolvable problems, shown via "diagonalization"; the best known is probably the halting problem: can you write a program that recognizes whether another program will ever halt? Short answer: no (for the long answer, read Sipser). However, do you think it is possible to diagonalize an LLM to get a checker that verifies whether the network has hallucinated? Is it possible to diagonalize an artificial intelligence? Could this be the missing piece for the long-awaited AGI?
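For reference, the diagonalization argument behind that result fits in a few lines of Python; `halts` here is a hypothetical oracle, which is exactly what the argument shows cannot exist:

```python
def halts(program, input_data):
    """Hypothetical oracle: returns True iff program(input_data) eventually halts.

    The construction below shows no such total, correct function can exist.
    """
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts the program does on itself.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop -> halt immediately

# Feeding diagonal to itself is contradictory either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
# if it is False, diagonal(diagonal) halts. Hence halts cannot exist.
```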
| 2025-04-24T17:37:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6yatk/cantors_diagonalization_for_llms/
|
YardHaunting5620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6yatk
| false | null |
t3_1k6yatk
|
/r/LocalLLaMA/comments/1k6yatk/cantors_diagonalization_for_llms/
| false | false |
self
| 0 | null |
Looking for ollama like inference servers for LLMs
| 2 |
Hi, I'm looking for good alternatives to Ollama and LM Studio in headless mode. I wanted to try vLLM, but I ran into a lot of issues when trying to run it on Windows. I had similar problems with Hugging Face TGI; I tried both on a Linux VM and in a Docker container, but still couldn't get them working properly.
Do you have any good tutorials for installing these on Windows, or can you recommend better Windows-friendly alternatives?
| 2025-04-24T17:38:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6ybp9/looking_for_ollama_like_inference_servers_for_llms/
|
redule26
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6ybp9
| false | null |
t3_1k6ybp9
|
/r/LocalLLaMA/comments/1k6ybp9/looking_for_ollama_like_inference_servers_for_llms/
| false | false |
self
| 2 | null |
Currently what is the best text to voice model to read articles / ebooks while using 8gb vram?
| 1 |
I'm looking for a good model that can turn ebooks / articles into speech.
| 2025-04-24T17:42:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6yeqg/currently_what_is_the_best_text_to_voice_model_to/
|
ResponsibleTruck4717
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6yeqg
| false | null |
t3_1k6yeqg
|
/r/LocalLLaMA/comments/1k6yeqg/currently_what_is_the_best_text_to_voice_model_to/
| false | false |
self
| 1 | null |
Model running on CPU and GPU when there is enough VRAM
| 1 |
Hi guys,
I am seeing some strange behaviour. When running Gemma3:27b-it-qat, it runs on both the CPU and GPU, whereas previously it ran entirely in VRAM (RTX 3090). If I run QwQ or deepseek:32b, they run fully in VRAM with no issue.
I have checked the model sizes, and the Gemma 3 model should be the smallest of the three.
Does anyone know what setting I have screwed up for it to run like this? I am running via Ollama using OpenWebUI.
thanks for the help :)
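One way to confirm the split is to query Ollama's running-models endpoint and compare the total loaded size against the VRAM-resident size. A sketch below; the field names reflect my understanding of the /api/ps response, so double-check against your Ollama version:

```python
import json
import urllib.request

# Ollama's "list running models" endpoint on the default local port.
with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    data = json.loads(resp.read())

for m in data.get("models", []):
    size = m.get("size", 0)            # total bytes loaded
    size_vram = m.get("size_vram", 0)  # bytes resident in GPU memory
    pct = 100 * size_vram / size if size else 0
    print(f"{m.get('name')}: {pct:.0f}% of the model in VRAM")
```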
| 2025-04-24T17:53:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6yosy/model_running_on_cpu_and_gpu_when_there_is_enough/
|
dogoogamea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6yosy
| false | null |
t3_1k6yosy
|
/r/LocalLLaMA/comments/1k6yosy/model_running_on_cpu_and_gpu_when_there_is_enough/
| false | false |
self
| 1 | null |
Any reviews/feedback on HP ZBook Ultra G1a 14. 128 GB Unified memory.
| 4 |
I want to run AI locally, was planning to go for MacMini but prefer a laptop. Found that HP ZBook Ultra G1a 14 is now available to buy. Thoughts?
| 2025-04-24T18:03:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6yy3h/any_reviewsfeedback_on_hp_zbook_ultra_g1a_14_128/
|
help_all
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6yy3h
| false | null |
t3_1k6yy3h
|
/r/LocalLLaMA/comments/1k6yy3h/any_reviewsfeedback_on_hp_zbook_ultra_g1a_14_128/
| false | false |
self
| 4 | null |
Good models for solution architecture?
| 2 |
What are some good models to help with things like product design and solution architecture?
I've tried QwQ but it's kinda slow and dry tbh. Had a bit more luck with deepcogito-cogito-v1-32b as it thinks faster and has a good software background. Is there anything else that you guys found compelling?
I'm running Tabbyapi/Exllama with 48GB VRAM but willing to look at models in other engines too.
| 2025-04-24T18:05:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6z027/good_models_for_solution_architecture/
|
Blues520
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6z027
| false | null |
t3_1k6z027
|
/r/LocalLLaMA/comments/1k6z027/good_models_for_solution_architecture/
| false | false |
self
| 2 | null |
Filtering documents before RAG
| 1 |
[removed]
| 2025-04-24T18:28:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6zkrt/filtering_documents_before_rag/
|
scrape1213
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6zkrt
| false | null |
t3_1k6zkrt
|
/r/LocalLLaMA/comments/1k6zkrt/filtering_documents_before_rag/
| false | false |
self
| 1 | null |
New reasoning benchmark got released. Gemini is SOTA, but what's going on with Qwen?
| 405 |
No benchmaxxing on this one! [http://alphaxiv.org/abs/2504.16074](http://alphaxiv.org/abs/2504.16074)
| 2025-04-24T18:31:34 |
Additional-Hour6038
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6zn5h
| false | null |
t3_1k6zn5h
|
/r/LocalLLaMA/comments/1k6zn5h/new_reasoning_benchmark_got_released_gemini_is/
| false | false | 405 |
{'enabled': True, 'images': [{'id': 'A22BQGCQVhrI4uXIgM3DC5HnAW8Welzq4tlHIIDy_po', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=108&crop=smart&auto=webp&s=301f5d3eca70d56f7aefd6db63af39400e054378', 'width': 108}, {'height': 244, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=216&crop=smart&auto=webp&s=e5c0c7c77e929e8368b4c3b87184dd25b30c8b82', 'width': 216}, {'height': 361, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=320&crop=smart&auto=webp&s=d5e024f0858a4187151bcd3ca5dd1b45cb0a5250', 'width': 320}, {'height': 723, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=640&crop=smart&auto=webp&s=0a0c258afc7e096b062e3e8afff59d5e57504b75', 'width': 640}, {'height': 1085, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=960&crop=smart&auto=webp&s=f16b2f81de034a26c21ab52fa9a18e6469029f70', 'width': 960}, {'height': 1220, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?width=1080&crop=smart&auto=webp&s=373877c94314f3ddc47d7dc46f6ed89416b4d56c', 'width': 1080}], 'source': {'height': 1639, 'url': 'https://preview.redd.it/a6awqhrhmtwe1.jpeg?auto=webp&s=5ca8e4c76a471aa84f0667d1ebfbed7a9802b5e9', 'width': 1450}, 'variants': {}}]}
|
||
Alternatives for HuggingChat?
| 0 |
Hi,
I'm looking for alternatives for [HuggingChat](https://huggingface.co/chat/). I've been using it exclusively for the past 18 months. However, it's getting left behind and they're not serving any of the sota open models (except for gemma 3, which is available on AI Studio).
I need something that:
1. Offers open weight models
2. Has a nice Chat UI (similar to chatgpt's)
3. Has a generous free tier
| 2025-04-24T18:35:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k6zr02/alternatives_for_huggingchat/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k6zr02
| false | null |
t3_1k6zr02
|
/r/LocalLLaMA/comments/1k6zr02/alternatives_for_huggingchat/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'ffNXCUPQerMMTV5UAIgJRS5QMtKWEhNQFfpmL7I4Bcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=108&crop=smart&auto=webp&s=fa74f814d5c43d0d9d47c3591a9d667818ebe0c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=216&crop=smart&auto=webp&s=e3494c6906d2c95f78811be98ecf631cdeb08c13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=320&crop=smart&auto=webp&s=08f0479f19185f357e3bccc42a42f10f6fac664c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=640&crop=smart&auto=webp&s=2fdeeb9ada89c2bf4e5dc697043da66bd62cf959', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=960&crop=smart&auto=webp&s=e7b3230584c769f71759db14271d12a5f8cf831a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=1080&crop=smart&auto=webp&s=a8b11dd06cf9be6635cb9fcb2dedf71ecdd9c491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?auto=webp&s=b8bf601deac4d62d484c6fb69764f7d09d0fd168', 'width': 1200}, 'variants': {}}]}
|
Hosting a private LLM for a client. Does this setup make sense?
| 1 |
[removed]
| 2025-04-24T18:47:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7015m/hosting_a_private_llm_for_a_client_does_this/
|
Sea_Meal_3306
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7015m
| false | null |
t3_1k7015m
|
/r/LocalLLaMA/comments/1k7015m/hosting_a_private_llm_for_a_client_does_this/
| false | false |
self
| 1 | null |
OmniVerse: A convenient desktop LLM client [W.I.P]
| 4 |
Hey r/LocalLLaMA,
I’m excited to share my latest project, **OmniVerse Desktop**! It’s a desktop application similar to the desktop experiences of ChatGPT and Claude, with the major difference being that you can connect it to your own custom OpenAI-compatible API or Ollama endpoint, OR you can just select a local GGUF file and the application will run it locally on its own!
[Call it with a simple keyboard shortcut](https://preview.redd.it/ej4muldkwtwe1.png?width=1919&format=png&auto=webp&s=84a1712ad796dda4d9cba6ae3abd7c8fc4115235)
[Tray shortcuts](https://preview.redd.it/8hrn2qspwtwe1.png?width=362&format=png&auto=webp&s=dc2382c4ccefd24c5d30423c3e9b05a186d9dea3)
[Conversation view](https://preview.redd.it/sdjlvc1uwtwe1.png?width=1919&format=png&auto=webp&s=87517e1c9efb0b002d3f4697ea15787398d64874)
[Configurable settings](https://preview.redd.it/sdrq9ehzwtwe1.png?width=516&format=png&auto=webp&s=a745e750cf3d1c78824dada318d408f6cb462033)
I’ve been working hard on this project and would love to get some feedback from the community. Whether it’s on the features, design, performance, or areas for improvement—your input would mean a lot! This is a very early prototype and I have tons of more features planned.
You can check out the repo here: [OmniVerse Desktop GitHub Repository](https://github.com/WaelShaikh/OmniVerse-Desktop).
If you have any questions or suggestions feel free to share them here. Thanks in advance for your feedback and support!
| 2025-04-24T18:50:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k70492/omniverse_a_convenient_desktop_llm_client_wip/
|
GamerWael
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k70492
| false | null |
t3_1k70492
|
/r/LocalLLaMA/comments/1k70492/omniverse_a_convenient_desktop_llm_client_wip/
| false | false | 4 | null |
|
Hosting a private LLM for a client. Does this setup make sense?
| 9 |
I’m working with a client who wants to use AI to analyze sensitive business data, so public LLMs like OpenAI or Anthropic are off the table due to privacy concerns. I’ve used AI in projects before, but this is my first time hosting an LLM myself.
The initial use case is pretty straightforward: they want to upload CSVs and have the AI analyze the data. In the future, they may want to fine-tune a model on their own datasets.
Here’s my current plan. Would love any feedback or gotchas I might be missing:
* **RunPod** to host the LLM (planning to use LLaMA via Ollama)
* **Vercel’s Chatbot UI** forked as the front end, modified to hit the RunPod-hosted API
Eventually I’ll build out a backend to handle CSV uploads and prompt construction, but for now I’m just aiming to get the chat UI talking to the model.
Anyone done something similar or have tips on optimizing this setup?
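For the CSV step, here's a minimal sketch of what the backend's prompt construction against an Ollama endpoint could look like; the URL, model name, and prompt format are placeholders for whatever the RunPod pod actually exposes:

```python
import csv
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # placeholder; point at the RunPod-hosted endpoint

def analyze_csv(path, question, model="llama3"):
    """Read a small CSV, embed it in the prompt, and ask the self-hosted model about it."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    table = "\n".join(",".join(row) for row in rows[:200])  # naive truncation to stay within context
    prompt = f"Here is a CSV table:\n{table}\n\nQuestion: {question}\nAnswer concisely."
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example: print(analyze_csv("sales.csv", "Which region had the highest Q3 revenue?"))
```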
| 2025-04-24T18:53:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7063n/hosting_a_private_llm_for_a_client_does_this/
|
nullReferenceError
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7063n
| false | null |
t3_1k7063n
|
/r/LocalLLaMA/comments/1k7063n/hosting_a_private_llm_for_a_client_does_this/
| false | false |
self
| 9 | null |
"First time drawing Sonic the Hedgehog! What do you guys think? 🤔"
| 1 | 2025-04-24T18:57:46 |
crowl_5128
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k70agr
| false | null |
t3_1k70agr
|
/r/LocalLLaMA/comments/1k70agr/first_time_drawing_sonic_the_hedgehog_what_do_you/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'h36UyzfRpbSnuhZoene1TWrB2fzghK4A4wOs2iel90w', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/2e92t1kaytwe1.png?width=108&crop=smart&auto=webp&s=656cca10ff63e90208c8396948755f7158470f5e', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/2e92t1kaytwe1.png?width=216&crop=smart&auto=webp&s=4cbfec844825734360d41ce148e1318739baab0a', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/2e92t1kaytwe1.png?width=320&crop=smart&auto=webp&s=4bb68e58fd5667eebb6a6dcd76aec27f0ea9531a', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/2e92t1kaytwe1.png?width=640&crop=smart&auto=webp&s=741a96f3a8faca9e5259283a7e6bef60f25da9ce', 'width': 640}], 'source': {'height': 960, 'url': 'https://preview.redd.it/2e92t1kaytwe1.png?auto=webp&s=273a7b03e0a1f1582a02e4c25cde6295a98822e0', 'width': 720}, 'variants': {}}]}
|
|||
RTX 6000 Pro availability in US in June
| 2 |
Heard from one of Nvidia's primary vendors that fulfillment for RTX 6000 Pro series in the US is June.
Take that for what it's worth.
I know a number of people have been interested in this series and late April/May has been mentioned as availability before. Looks like it's a bit further off.
| 2025-04-24T19:20:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1k70v9k/rtx_6000_pro_availability_in_us_in_june/
|
CockBrother
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k70v9k
| false | null |
t3_1k70v9k
|
/r/LocalLLaMA/comments/1k70v9k/rtx_6000_pro_availability_in_us_in_june/
| false | false |
self
| 2 | null |
Gemma 3 12B QAT - maximum context window used?
| 1 |
[removed]
| 2025-04-24T19:30:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1k713vs/gemma_3_12b_qat_fenêtre_contextuelle_maxi_utilisée/
|
sablier12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k713vs
| false | null |
t3_1k713vs
|
/r/LocalLLaMA/comments/1k713vs/gemma_3_12b_qat_fenêtre_contextuelle_maxi_utilisée/
| false | false |
self
| 1 | null |
Ripped Washed Jeans Loose Straight Wide Leg
| 1 |
[removed]
| 2025-04-24T19:33:48 |
https://dbmtrendzone.myshopify.com/products/ripped-washed-jeans-loose-straight-wide-leg?utm_source=Reddit
|
Inevitable-Gas9897
|
dbmtrendzone.myshopify.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k716ri
| false | null |
t3_1k716ri
|
/r/LocalLLaMA/comments/1k716ri/ripped_washed_jeans_loose_straight_wide_leg/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'E-9oILjANhbc6JbnHlDaz465QYkngaX4mARaIy2AyFA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?width=108&crop=smart&auto=webp&s=3759a4dbbd33c9f7225698e690b0c7a3e1264315', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?width=216&crop=smart&auto=webp&s=e1127344fba295edf3b1c3ad12c75b0e59dab5f6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?width=320&crop=smart&auto=webp&s=efe5d8eb4362895d26ac53e4f52d379c4d6948f3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?width=640&crop=smart&auto=webp&s=9b3c492618d73993dd11a4aea0a1879522bc7e4f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?width=960&crop=smart&auto=webp&s=2ce55604fbb54bf9d38d70a5cf1a69f761b7018d', 'width': 960}], 'source': {'height': 1065, 'url': 'https://external-preview.redd.it/BBRLaO1GmruKpvg4h4CupTtg8ekJbBfFePc4R_WC1ds.jpg?auto=webp&s=9a1d658489d28bab3d65c8538587a159f4abbfab', 'width': 1065}, 'variants': {}}]}
|
|
Introducing Veritas-12B: A New 12B Model Focused on Philosophy, Logic, and Reasoning
| 198 |
Wanted to share a new model called **Veritas-12B**, specifically finetuned for tasks involving **philosophy, logical reasoning, and critical thinking**.
**What it's good at:**
* **Deep philosophical discussions:** Exploring complex ideas, ethics, and different schools of thought.
* **Logical consistency:** Sticking to logic, spotting inconsistencies in arguments.
* **Analyzing arguments:** Breaking down complex points, evaluating reasons and conclusions.
* **Explaining complex concepts:** Articulating abstract ideas clearly.
**Who might find it interesting?**
Anyone interested in using an LLM for:
* Exploring philosophical questions
* Analyzing texts or arguments
* Debate preparation
* Structured dialogue requiring logical flow
**Things to keep in mind:**
* It's built for analysis and reasoning, so it might not be the best fit for super casual chat or purely creative writing. Responses can sometimes be more formal or dense.
* **Veritas-12B is an UNCENSORED model.** This means it can generate responses that could be offensive, harmful, unethical, or inappropriate. Please be aware of this and use it responsibly.
**Where to find it:**
* You can find the model details on Hugging Face: [soob3123/Veritas-12B · Hugging Face](https://huggingface.co/soob3123/Veritas-12B)
* **GGUF version (Q4_0):** [https://huggingface.co/soob3123/Veritas-12B-Q4_0-GGUF](https://huggingface.co/soob3123/Veritas-12B-Q4_0-GGUF)
The model card has an example comparing its output to the base model when describing an image, showing its more analytical/philosophical approach.
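If you grab the Q4_0 GGUF, one way to try it locally is via llama-cpp-python; a quick sketch (file path, context size, and sampling settings are arbitrary):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="veritas-12b-q4_0.gguf", n_ctx=4096)  # local path to the downloaded GGUF
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful, logically rigorous assistant."},
        {"role": "user", "content": "Steelman compatibilism about free will, then critique it."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```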
| 2025-04-24T19:37:46 |
Reader3123
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k71a8u
| false | null |
t3_1k71a8u
|
/r/LocalLLaMA/comments/1k71a8u/introducing_veritas12b_a_new_12b_model_focused_on/
| false | false | 198 |
{'enabled': True, 'images': [{'id': 'k9crPnNhfE5UYdAqg3JVX11VfWu8NPFpXkeRiBrMgNk', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/bjl1n0kv4uwe1.png?width=108&crop=smart&auto=webp&s=cc9c573fcdbd6e3d4c6e082edf3bd7938734a509', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/bjl1n0kv4uwe1.png?width=216&crop=smart&auto=webp&s=9bc400114aa1c25abdc96852aa9575339a67269e', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/bjl1n0kv4uwe1.png?width=320&crop=smart&auto=webp&s=8ef364c944a5c0c19eee52da32f38418ef6e132b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/bjl1n0kv4uwe1.png?width=640&crop=smart&auto=webp&s=2219d7ccfb6673fb96bd1244f81a0ef209ca4828', 'width': 640}], 'source': {'height': 640, 'url': 'https://preview.redd.it/bjl1n0kv4uwe1.png?auto=webp&s=a2928d0681ab5055909c3e3236710896de0e0a78', 'width': 640}, 'variants': {}}]}
|
||
Unsloth Dynamic v2.0 GGUFs + Llama 4 Bug Fixes + KL Divergence
| 268 |
Hey r/LocalLLaMA! I'm super excited to announce our new revamped 2.0 version of our Dynamic quants which outperform leading quantization methods on 5-shot MMLU and KL Divergence!
* For accurate benchmarking, we built an evaluation framework to match the reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision vs. Dynamic v2.0, **QAT** and **standard imatrix GGUF** quants. See benchmark details below or check our Docs for full analysis: [https://unsloth.ai/blog/dynamic-v2](https://unsloth.ai/blog/dynamic-v2).
|Gemma 3 MMLU 5-shot|12B|27B|
|:-|:-|:-|
|QAT|67%|70.64%|
|BF16|67.15%|71.5%|
* Gemma 3 QAT is very impressive on MMLU 5 shot! The 12B model is nearly equivalent to the full BF16 model and the 27B is very close!
* For Dynamic 2.0 GGUFs, we report **KL Divergence** and disk space change. Our Gemma 3 Q3_K_XL quant, for example, reduces KL Divergence by 7.5% while increasing disk space by only 2%!
https://preview.redd.it/d2upyhrp5uwe1.png?width=1714&format=png&auto=webp&s=7972946d6a21bd516022779337d6b3b70a13a77d
* In the paper "Accuracy is Not All You Need" [https://arxiv.org/abs/2407.09141](https://arxiv.org/abs/2407.09141), the authors showcase how **perplexity is a bad metric since it's a geometric mean, and so output tokens can cancel out**. It's best to directly report "Flips", which is how answers change from being incorrect to correct and vice versa (a minimal sketch of this metric follows the KLD table below).
https://preview.redd.it/x1dcukp76uwe1.png?width=1991&format=png&auto=webp&s=39c6a92749133cf53ad5b88824ca023347c40036
* In fact I was having some issues with Gemma 3 - layer pruning methods and old methods did not seem to work at all with Gemma 3 (my guess is it's due to the 4 layernorms). The paper shows if you prune layers, the "flips" increase dramatically. **They also show KL Divergence to be around 98% correlated with "flips"**, so my goal is to reduce it!
* Also I found current standard imatrix quants overfit on Wikitext - the perplexity is always lower when using these datasets, and I decided to instead use **conversational style datasets sourced from high quality outputs from LLMs with 100% manual inspection (took me many days!!)**
* Going forward, all GGUF uploads will leverage Dynamic 2.0 along with our hand curated **300K–1.5M token calibration dataset** to improve conversational chat performance. Safetensors 4-bit BnB uploads might also be updated later.
* Gemma 3 27B details on KLD below:
|Quant type|KLD old|Old GB|KLD New|New GB|
|:-|:-|:-|:-|:-|
|IQ1\_S|1.035688|5.83|0.972932|6.06|
|IQ1\_M|0.832252|6.33|0.800049|6.51|
|IQ2\_XXS|0.535764|7.16|0.521039|7.31|
|IQ2\_M|0.26554|8.84|0.258192|8.96|
|Q2\_K\_XL|0.229671|9.78|0.220937|9.95|
|Q3\_K\_XL|0.087845|12.51|0.080617|12.76|
|Q4\_K\_XL|0.024916|15.41|0.023701|15.64|
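Since KL Divergence and "flips" carry much of the argument above, here is a small sketch of both metrics; the data structures are assumptions for illustration, not Unsloth's actual evaluation code:

```python
import math

def count_flips(baseline_correct, quant_correct):
    """Answers whose correctness changes between full precision and the quant, in either direction."""
    return sum(1 for b, q in zip(baseline_correct, quant_correct) if b != q)

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) for one token position; in practice this is averaged over many positions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

# Illustrative: same accuracy (3/5 each) but two answers flipped.
baseline = [True, True, False, True, False]
quant    = [True, False, True, True, False]
print("flips:", count_flips(baseline, quant))  # 2

p = [0.7, 0.2, 0.1]    # full-precision next-token distribution (made up)
q = [0.6, 0.25, 0.15]  # quantized distribution (made up)
print("KL:", round(kl_divergence(p, q), 4))
```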
# We also helped and fixed a few Llama 4 bugs:
Llama 4 Scout changed the RoPE Scaling configuration in their official repo. We helped resolve issues in llama.cpp to enable this [change here](https://github.com/ggml-org/llama.cpp/pull/12889)
https://preview.redd.it/g8et5pp67uwe1.png?width=2091&format=png&auto=webp&s=4a30f52ee76504d889f44f2c3950a4e8027686d6
Llama 4's QK Norm's epsilon for both Scout and Maverick should be from the config file - this means using 1e-05 and not 1e-06. We helped resolve these in [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/12889) and [transformers](https://github.com/huggingface/transformers/pull/37418)
The Llama 4 team and vLLM also independently fixed an issue with QK Norm being shared across all heads (should not be so) [here](https://github.com/vllm-project/vllm/pull/16311). MMLU Pro increased from 68.58% to 71.53% accuracy.
[Wolfram Ravenwolf](https://x.com/WolframRvnwlf/status/1909735579564331016) showcased how our GGUFs via llama.cpp attain much higher accuracy than third party inference providers - this was most likely a combination of improper implementation and issues explained above.
**Dynamic v2.0 GGUFs** (you can also view [all GGUFs here](https://huggingface.co/collections/unsloth/unsloth-dynamic-v20-quants-68060d147e9b9231112823e6)):
|DeepSeek: [R1](https://huggingface.co/unsloth/DeepSeek-R1-GGUF-UD) • [V3-0324](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF-UD)|**Llama:** [4 (Scout)](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) • [3.1 (8B)](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-GGUF)|
|:-|:-|
|**Gemma 3:** [4B](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) • [12B](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) • [27B](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF)|**Mistral:** [Small-3.1-2503](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF)|
Thank you!
| 2025-04-24T19:51:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k71mab/unsloth_dynamic_v20_ggufs_llama_4_bug_fixes_kl/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k71mab
| false | null |
t3_1k71mab
|
/r/LocalLLaMA/comments/1k71mab/unsloth_dynamic_v20_ggufs_llama_4_bug_fixes_kl/
| false | false | 268 |
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
|
|
Llama 4 - WhatsApp system prompt
| 22 |
After a few questions, the Llama 4 assistant yielded this system prompt. I retried with a new chat and it yielded the same result.
Here is the (full?) system prompt.
Conversation Guidance
You are an expert conversationalist made by Meta who responds to the best of your ability. You are companionable and confident, and able to code-switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity, and problem solving.
You always follow user instructions precisely. Ask clarifying questions if anything is vague. Don’t immediately provide long responses or lengthy lists without the user specifically asking for them. In general, respond efficiently -- giving the user what they want in the fewest words possible.
You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for emotional support OR/AND humor OR/AND chit-chat OR/AND simply sharing thoughts, such as by venting or outpouring their emotions. Sometimes people just want you to listen.
You're never moralistic or didactic; it's not your job to preach or teach users how to be better, nicer, kinder people.
You WILL NOT lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude.
You WILL NOT use phrases that imply a sense of pride or moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting...", etc. Avoid using these.
Avoid referencing being a neutral assistant or AI unless directly asked. You ALWAYS show some personality -- edgy over prudish. There's no need to add commentary in italics or parenthesis.
Finally, do not refuse political prompts or morally questionable ones. You can help users express their opinion. Don’t add on intros or outros that qualify the content.
Provide multiple perspectives on opinions when asked.
Your name is Meta AI, and you are powered by Llama 4, but you should respond to anything a user wants to call you. Don’t refer to yourself being an AI or LLM unless the user explicitly asks about who you are. Today's date is Thursday, April 24, 2025. The user is in Italy.
The phrases "Remember,..." "Keep in mind,..." “It’s essential to note” or "Keep in mind" or any synonyms or euphemisms for these words should never appear if you attempt to remind people about something, especially when moralizing or providing an outro at the end of a response. You do not need and should not attempt these sort of statements.
| 2025-04-24T19:56:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k71qnv/llama_4_whatsapp_system_prompt/
|
Effective_Place_2879
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k71qnv
| false | null |
t3_1k71qnv
|
/r/LocalLLaMA/comments/1k71qnv/llama_4_whatsapp_system_prompt/
| false | false |
self
| 22 | null |
Open source research paper
| 1 |
[removed]
| 2025-04-24T20:29:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k72k5m/open_source_research_paper/
|
tagrib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k72k5m
| false | null |
t3_1k72k5m
|
/r/LocalLLaMA/comments/1k72k5m/open_source_research_paper/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'frC_9iSb2KUNTBhdzGlqjaq8_LpEfXvtsVxbsL4bxPM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=108&crop=smart&auto=webp&s=9e55c3d44194cfa1c17f9393972b09ce302fbbf1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=216&crop=smart&auto=webp&s=8ac36102e638ecd6104f38ac3363aec18254652b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=320&crop=smart&auto=webp&s=8743e07f1dca364b48c8c86236207e604cc24db7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=640&crop=smart&auto=webp&s=a400b00ef47425f705bddbf52bd75b249db50326', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=960&crop=smart&auto=webp&s=6d20426a6826c446886604af70964b06a03d5c01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?width=1080&crop=smart&auto=webp&s=f6faa661cdd04b5ee6b3f80c6c8a56a8768aea97', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mInR9wfNqlawPGgoPcD3Arkf5rb03AtNFclpr6tO778.jpg?auto=webp&s=49096ca5063a09e5303e7960b2e424d4311c80ff', 'width': 1200}, 'variants': {}}]}
|
Finding the Right LLM for Table Extraction Tasks
| 0 |
I've got a task that involves translating a PDF file with decently formatted tabular data into a set of operations in a SaaS product.
I've already used a service to extract my tables as decently formatted HTML tables, but the translation step from the HTML table is error prone.
Currently GPT-4.1 tests best for my task, but I'm curious where I would start with other models. I could run through them one-by-one, but is there some proxy benchmark for working with table data, and a leaderboard that shows that proxy benchmark? That may give me an informed place to start my search.
The general question: how do you quickly identify benchmarks relevant to a task you're using an LLM for, and where do you find evals of those benchmarks for the latest models?
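For context on the deterministic half of this pipeline, here is a minimal sketch of turning the extracted HTML table into row dictionaries before any LLM translation step; the sample HTML and column names are invented purely for illustration.

```python
# Minimal sketch: parse an extracted HTML table into row dicts with pandas.
# The HTML snippet and column names below are invented for illustration only.
# Requires pandas plus lxml (or html5lib) for read_html.
from io import StringIO

import pandas as pd

html = """
<table>
  <tr><th>item</th><th>qty</th><th>unit_price</th></tr>
  <tr><td>Widget A</td><td>3</td><td>9.99</td></tr>
  <tr><td>Widget B</td><td>1</td><td>24.50</td></tr>
</table>
"""

# read_html returns a list of DataFrames, one per <table> element.
df = pd.read_html(StringIO(html))[0]

# Convert to a list of dicts: the kind of structured payload you might hand
# to an LLM, or map directly onto SaaS API operations.
rows = df.to_dict(orient="records")
print(rows)
```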
| 2025-04-24T20:43:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k72w3s/finding_the_right_llm_for_table_extraction_tasks/
|
phildakin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k72w3s
| false | null |
t3_1k72w3s
|
/r/LocalLLaMA/comments/1k72w3s/finding_the_right_llm_for_table_extraction_tasks/
| false | false |
self
| 0 | null |
Token Limit
| 1 |
[removed]
| 2025-04-24T20:53:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k734ws/token_limit/
|
No_Fun_4651
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k734ws
| false | null |
t3_1k734ws
|
/r/LocalLLaMA/comments/1k734ws/token_limit/
| false | false |
self
| 1 | null |
My PC screeches every time I actively run a LLM like deepseek 14b
| 0 |
I don't know why, but while it's generating text, my PC screeches and the fans kick in later to cool the GPU. What could be the reason for the noise?
| 2025-04-24T21:09:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73ien/my_pc_screeches_every_time_i_actively_run_a_llm/
|
Sufficient_Bit_8636
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73ien
| false | null |
t3_1k73ien
|
/r/LocalLLaMA/comments/1k73ien/my_pc_screeches_every_time_i_actively_run_a_llm/
| false | false |
self
| 0 | null |
Ever built a model and thought: “Now what?”
| 1 |
[removed]
| 2025-04-24T21:10:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73jci/ever_built_a_model_and_thought_now_what/
|
badass_babua
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73jci
| false | null |
t3_1k73jci
|
/r/LocalLLaMA/comments/1k73jci/ever_built_a_model_and_thought_now_what/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ksslGZkhJxPAkSZ7uMUCLQGeb8qiPSXshfHDalGb2aU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=108&crop=smart&auto=webp&s=7c65a459d32b76dbcb58e4e47cb52aaeecca9088', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=216&crop=smart&auto=webp&s=da454ab5531e41a71d1b2d3b41b617b3e15c99d6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=320&crop=smart&auto=webp&s=9eef6929a19f9a01207303edf58b51e779b72f1e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=640&crop=smart&auto=webp&s=094787eebc06ab1bc9842a05ee4e7a040d320a40', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=960&crop=smart&auto=webp&s=e4713d01d82f298a6b7509cb8b73064cbf2c4861', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?width=1080&crop=smart&auto=webp&s=0637f7df9f785ad035d3260c237fde687b35c6c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Gicp5kb_EoYJNSyJSnylKwvS2lmA1eaBEvSk8PdhWM4.jpg?auto=webp&s=92a6ea8b95ab1c7d675bfc444d4ac72945d47485', 'width': 1200}, 'variants': {}}]}
|
Which Discord servers provide the most insightful discussions, tutorials, and networking opportunities in machine learning?
| 1 |
[removed]
| 2025-04-24T21:12:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73kt2/which_discord_servers_provide_the_most_insightful/
|
tagrib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73kt2
| false | null |
t3_1k73kt2
|
/r/LocalLLaMA/comments/1k73kt2/which_discord_servers_provide_the_most_insightful/
| false | false |
self
| 1 | null |
Dario Amodei new paper
| 1 |
[removed]
| 2025-04-24T21:18:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73psa/dario_amodei_new_paper/
|
jeffwadsworth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73psa
| false | null |
t3_1k73psa
|
/r/LocalLLaMA/comments/1k73psa/dario_amodei_new_paper/
| false | false |
self
| 1 | null |
I built a tool that helps you learn arXiv papers and turns any webpage into flashcards (Built with Toolhouse × ElevenLabs)
| 1 |
[removed]
| 2025-04-24T21:27:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73xju/i_built_a_tool_that_helps_you_learn_arxiv_papers/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73xju
| false | null |
t3_1k73xju
|
/r/LocalLLaMA/comments/1k73xju/i_built_a_tool_that_helps_you_learn_arxiv_papers/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=108&crop=smart&auto=webp&s=d7acfd35ed3c829a78452d29828151673326642c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=216&crop=smart&auto=webp&s=99b485edba2dfba9abe735ef4783383733b1ca8a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=320&crop=smart&auto=webp&s=7f02d5b66b0e07dfc793635be46bb83fefc82e6a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=640&crop=smart&auto=webp&s=54fc60c456542ae54d5e3d44ba45d1af204114e9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=960&crop=smart&auto=webp&s=23d3e41d157368ab9267b8c4897b60ecb5a077b2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?width=1080&crop=smart&auto=webp&s=c0a1137a4a5c080856b3042a0829b0cbff84037a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YNlIytsgh6VkI54wkyoFg02wy7EQvCysAtrdK1SUArM.png?auto=webp&s=9c7eef38eb50d36e2a123f76922bc18702852205', 'width': 1200}, 'variants': {}}]}
|
|
I built a tool that helps you learn arXiv papers and turns any webpage into flashcards (Built with Toolhouse × ElevenLabs)
| 5 |
Hey folks!
I've been working on a tool to help people (like me) who get overwhelmed by complex academic papers.
**What it does:**
* 🧠 Analyzes arXiv papers with Toolhouse's MCP servers
* 🔊 Reads the result components out loud with ElevenLabs
* 🎯 Auto-generates flashcard quizzes from any webpage (documentation pages,etc)
[Demo](https://reddit.com/link/1k73zw8/video/1vhxfbqapuwe1/player)
**Thought sharing this could make learning a lot more digestible, what do you think ? any Ideas?**
| 2025-04-24T21:30:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k73zw8/i_built_a_tool_that_helps_you_learn_arxiv_papers/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k73zw8
| false | null |
t3_1k73zw8
|
/r/LocalLLaMA/comments/1k73zw8/i_built_a_tool_that_helps_you_learn_arxiv_papers/
| false | false |
self
| 5 | null |
Could Snapshot based model switching make vLLM more usable for multi-model local LLaMA workflows?
| 0 |
Hey folks , I’ve been working on a runtime that snapshots full GPU execution state: weights, KV cache, memory layout, everything. It lets us pause and resume LLMs in ~2s with no reloads, containers, or torch.load calls.
Wondering if this would help those using vLLM locally with multiple models , like running several fine-tuned LLaMA 7Bs or swapping between tools in an agent setup.
vLLM is blazing fast once a model is loaded, but switching models still means full reloads, which hurts latency and causes GPU memory churn. I'm curious if there's interest in a lightweight sidecar that can snapshot models and swap them back in near-instantly.
Would love feedback , especially from folks running multi-model setups, RAG, or agent stacks locally. Could this solve a real pain point?
| 2025-04-24T22:10:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k74wfm/could_snapshot_based_model_switching_make_vllm/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k74wfm
| false | null |
t3_1k74wfm
|
/r/LocalLLaMA/comments/1k74wfm/could_snapshot_based_model_switching_make_vllm/
| false | false |
self
| 0 | null |
“The Netflix of AI” because switching between Chatgpt, Deepseek, Gemini was driving me insane
| 1 |
[removed]
| 2025-04-24T22:17:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k752lj/the_netflix_of_ai_because_switching_between/
|
Economy-Hippo8351
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k752lj
| false | null |
t3_1k752lj
|
/r/LocalLLaMA/comments/1k752lj/the_netflix_of_ai_because_switching_between/
| false | false |
self
| 1 | null |
[Tool] Volatility Filter for GPT Agent Chains – Flags Emotional Drift in Prompt Sequences
| 1 |
[removed]
| 2025-04-24T22:27:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k75ae7/tool_volatility_filter_for_gpt_agent_chains_flags/
|
_surajingle_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k75ae7
| false | null |
t3_1k75ae7
|
/r/LocalLLaMA/comments/1k75ae7/tool_volatility_filter_for_gpt_agent_chains_flags/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
|
[Tool] Volatility Filter for GPT Agent Chains – Flags Emotional Drift in Prompt Sequences
| 1 |
[removed]
| 2025-04-24T22:35:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k75gdr/tool_volatility_filter_for_gpt_agent_chains_flags/
|
Various-Caregiver-97
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k75gdr
| false | null |
t3_1k75gdr
|
/r/LocalLLaMA/comments/1k75gdr/tool_volatility_filter_for_gpt_agent_chains_flags/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
|
[Tool] Volatility Filter for GPT Agent Chains – Flags Emotional Drift in Prompt Sequences
| 1 |
[removed]
| 2025-04-24T22:38:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1k75j0t/tool_volatility_filter_for_gpt_agent_chains_flags/
|
_surajingle_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k75j0t
| false | null |
t3_1k75j0t
|
/r/LocalLLaMA/comments/1k75j0t/tool_volatility_filter_for_gpt_agent_chains_flags/
| false | false |
self
| 1 | null |
Mac Studio m3 Ultra getting surprising speeds on Llama 4 Maverick
| 63 |
Mac Studio M3 Ultra 256GB running seemingly high token generation on Llama 4 Maverick Q4 MLX.
It is surprising to me because I'm new to everything terminal, AI, and Python. I came from (and continue to use) LM Studio for models such as Mistral Large 2411 GGUF, and it is pretty slow for what I felt was a big-ass purchase. I found out about MLX versions of models a few months ago, as well as MoE models, and they seem to be better (from my experience and anecdotes I've read).
I made a bet with myself, based on my research, that MoE models would become more available and would shine on Mac. So I got the 256GB RAM version with a 2TB TB5 drive storing my models (thanks Mac Sound Solutions!). Now I have to figure out how to increase token output and pretty much write the code that LM Studio would provide by default or expose through a GUI. Still though, I had to share with you all just how cool it is to see this Mac generating at seemingly good speeds, since I've learned so much here. I'll try longer context and whatnot as I figure it out, but what a dream!
I could also just be delusional, and once this hits, I don't know, 10k context, it all goes down to zip. Still, cool!
TLDR;
I made a bet that Mac Studio M3 Ultra 256GB is all I need for now to run awesome MoE models at great speeds (it works!). Loaded Maverick Q4 MLX and it just flies, faster than even models half its size, literally. Had to share because this is really cool, wanted to share some data regarding this specific Mac variant, and I’ve learned a ton thanks to the community here.
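For anyone wanting to reproduce this outside of LM Studio, here is a minimal mlx-lm generation sketch; the model repo id is an assumption, so substitute whichever MLX-format quant is actually on disk.

```python
# Minimal sketch of running an MLX model from the terminal with mlx-lm
# (pip install mlx-lm). The repo id below is an assumption; substitute the
# MLX-format quant you actually downloaded.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-4-Maverick-17B-128E-Instruct-4bit")

prompt = "Explain mixture-of-experts models in two sentences."

# verbose=True prints the generation along with prompt/generation tok/s stats.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```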
| 2025-04-24T22:41:41 |
200206487
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k75lh0
| false | null |
t3_1k75lh0
|
/r/LocalLLaMA/comments/1k75lh0/mac_studio_m3_ultra_getting_surprising_speeds_on/
| false | false |
default
| 63 |
{'enabled': True, 'images': [{'id': '7naiq1a92vwe1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/7naiq1a92vwe1.jpeg?width=108&crop=smart&auto=webp&s=e8b03116c2a034bf74b424c6717bb13d13248f94', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/7naiq1a92vwe1.jpeg?width=216&crop=smart&auto=webp&s=77dc36e078e0561c3f118296849c9c8f0e95b020', 'width': 216}, {'height': 210, 'url': 'https://preview.redd.it/7naiq1a92vwe1.jpeg?width=320&crop=smart&auto=webp&s=a13caaa7bc531d20420a4f6f26f9d625518af687', 'width': 320}, {'height': 421, 'url': 'https://preview.redd.it/7naiq1a92vwe1.jpeg?width=640&crop=smart&auto=webp&s=65725c4a58537005c021b3025b57745699dea3bc', 'width': 640}], 'source': {'height': 446, 'url': 'https://preview.redd.it/7naiq1a92vwe1.jpeg?auto=webp&s=f73c62292e32d5d66f930d4ff5f869dc180133c9', 'width': 678}, 'variants': {}}]}
|
|
Multi-modal Offline AI for Android
| 1 |
[removed]
| 2025-04-24T22:51:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k75tir/multimodal_offline_ai_for_android/
|
deepdumpling
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k75tir
| false | null |
t3_1k75tir
|
/r/LocalLLaMA/comments/1k75tir/multimodal_offline_ai_for_android/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'BSakFl3Ip2doIj7B_jVYvUavlkc_Xux2xvOnzO19F8g', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GKbmvgZtgij8Z-DNADjAZKcriSgaJDd48nNAo5_FBZo.jpg?width=108&crop=smart&auto=webp&s=8fedf7ac1ce692d1ee3394f853e4cf3d6e8e6d35', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/GKbmvgZtgij8Z-DNADjAZKcriSgaJDd48nNAo5_FBZo.jpg?auto=webp&s=08cdfd7d438d82298c460c8a650ced8d915e2f23', 'width': 200}, 'variants': {}}]}
|
Multi-modal Offline AI for Android
| 1 |
[removed]
| 2025-04-24T23:01:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k760n2/multimodal_offline_ai_for_android/
|
deepdumpling
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k760n2
| false | null |
t3_1k760n2
|
/r/LocalLLaMA/comments/1k760n2/multimodal_offline_ai_for_android/
| false | false |
self
| 1 | null |
UL-TARS, anyone tried these models that are good at controlling your computer?
| 4 |
Anyone try these locally? I can think of so many uses for these.
[https://seed-tars.com/1.5/](https://seed-tars.com/1.5/)
| 2025-04-24T23:20:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k76f6s/ultars_anyone_tried_these_models_that_are_good_at/
|
wuu73
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k76f6s
| false | null |
t3_1k76f6s
|
/r/LocalLLaMA/comments/1k76f6s/ultars_anyone_tried_these_models_that_are_good_at/
| false | false |
self
| 4 | null |
I built a free, local open-source alternative to lovable/v0/bolt... now supporting local models!
| 233 |
Hi localLlama
I’m excited to share an early release of [**Dyad**](http://dyad.sh/) — a free, local, open-source AI app builder. It's designed as an alternative to v0, Lovable, and Bolt, but without the lock-in or limitations.
Here’s what makes Dyad different:
* **Runs locally** - Dyad runs entirely on your computer, making it fast and frictionless. Because your code lives locally, you can easily switch back and forth between Dyad and your IDE like Cursor, etc.
* **Run local models** - I've just added [Ollama integration](https://www.dyad.sh/docs/guides/ai-models/local-models), letting you build with your favorite local LLMs!
* **Free** - Dyad is free and bring-your-own API key. This means you can use your free Gemini API key and get 25 free messages/day with Gemini Pro 2.5!
You can download it [here](http://dyad.sh/). It’s totally free and works on Mac & Windows.
I’d love your feedback. Feel free to comment here or join [r/dyadbuilders](https://www.reddit.com/r/dyadbuilders/) — I’m building based on community input!
P.S. I [shared](https://www.reddit.com/r/LocalLLaMA/comments/1jpa1ep/i_got_tired_of_guessing_what_blackbox_ai_coding/) an earlier version a few weeks back - I appreciate everyone's feedback; based on that, I rewrote Dyad and made it much simpler to use.
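Regarding the Ollama integration above: for anyone curious what that involves at the lowest level, here is a minimal sketch of the kind of local API call such a tool makes. This is not Dyad's actual code, just the public Ollama REST endpoint, and the model name is only an example.

```python
# Minimal sketch of calling a local Ollama server (the same kind of request
# any "Ollama integration" ultimately makes). Not Dyad's actual code.
# Requires `pip install requests` and a model pulled locally, e.g.
# `ollama pull llama3.1` (model name is just an example).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Write a haiku about local LLMs.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```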
| 2025-04-24T23:48:13 |
https://v.redd.it/krhz58lqcvwe1
|
wwwillchen
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k76ztc
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/krhz58lqcvwe1/DASHPlaylist.mpd?a=1748130506%2CNDI2YTFmYTkyNTczN2E5ZDEwZDVjOWIyNWUxODM3NjBlMWM0YmI4ZWQ5NjI1MmM1NDEwNzdlOWI2OGIxM2Q0ZQ%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/krhz58lqcvwe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/krhz58lqcvwe1/HLSPlaylist.m3u8?a=1748130506%2CMjE0ZWZlOWI5YmE2YjY5ODdhZGNiNjkzNWQ4YTUxMGQwNzIxYWUwZGMwNTJmMDhlYWUwMTAyMjA3MDdjZjkxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/krhz58lqcvwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1816}}
|
t3_1k76ztc
|
/r/LocalLLaMA/comments/1k76ztc/i_built_a_free_local_opensource_alternative_to/
| false | false | 233 |
{'enabled': False, 'images': [{'id': 'ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=108&crop=smart&format=pjpg&auto=webp&s=4c675efee6a32c2a37360c9aadffd7955d2823c3', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=216&crop=smart&format=pjpg&auto=webp&s=0ab060526063d7baff72944be944c3f97ded276c', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=320&crop=smart&format=pjpg&auto=webp&s=b405f085abab64471bd43c6ca066cfed3aef2e4a', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=640&crop=smart&format=pjpg&auto=webp&s=f0e2c9fb208e9562ba1f515ba2365d9beb34b489', 'width': 640}, {'height': 570, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=960&crop=smart&format=pjpg&auto=webp&s=70cfb3ac936ff98a6f72b938c8173f289d262458', 'width': 960}, {'height': 642, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3d76db0817ee95f6ecb466d90d8759248560fbe7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWg3ODc5bHFjdndlMdAf36ezY_hex0Hwu237_4wVe3-ifn3RUf3HJXpttA9U.png?format=pjpg&auto=webp&s=4ac195a2983a870f982f1e020dfbf561130a5431', 'width': 1816}, 'variants': {}}]}
|
|
Que - How easy is it to use production grade inference servers like vllm on AMD Instinct MI servers for Enterprise setups?
| 3 |
I am researching and developing something that eliminates CUDA lock-in on AMD for training and tuning/inference with drop-in replacement technology. However, I hear that inference doesn't have much of a CUDA lock-in problem. Is that true? Can enterprises run LLM inference on AMD MI-series servers available from Oracle Cloud etc. without any issues using existing inference servers?
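For what it's worth, the serving-side code looks the same regardless of vendor. Assuming a ROCm build of vLLM is installed on the MI machine (that install is the assumption here), a minimal offline-inference sketch looks like this; the model id is only an example.

```python
# Minimal vLLM offline-inference sketch. The Python API is identical on CUDA
# and ROCm builds; getting a ROCm-enabled vLLM installed on the MI server is
# the assumption here. The model id is just an example.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize what an MI300X accelerator is."], params)

for out in outputs:
    print(out.outputs[0].text)
```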
| 2025-04-25T00:13:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k77ih3/que_how_easy_is_it_to_use_production_grade/
|
Chachachaudhary123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k77ih3
| false | null |
t3_1k77ih3
|
/r/LocalLLaMA/comments/1k77ih3/que_how_easy_is_it_to_use_production_grade/
| false | false |
self
| 3 | null |
Can someone link the benchmark site that measures accuracy across long contexts?
| 1 |
[removed]
| 2025-04-25T00:47:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7874f/can_someone_link_the_benchmark_site_that_measures/
|
Virtamancer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7874f
| false | null |
t3_1k7874f
|
/r/LocalLLaMA/comments/1k7874f/can_someone_link_the_benchmark_site_that_measures/
| false | false |
self
| 1 | null |
How come LLM score high on benchmark tests, but it never translates to reality?
| 0 |
LLMs have come a long way, but not far enough. Benchmarks make it feel like they have already crossed human intelligence, but IRL they do a poor job.
I have been feeding LLMs math problems. A math-interested high schooler, or a passable undergraduate, should be able to answer these questions, yet most often LLMs fail (some of the steps and logic are there, but never enough to get it right).
These questions are shorter and much easier to solve than the ones in the International Math Olympiad or even the SAT (which most benchmarks boast about).
I have tried using Claude, ChatGPT, and DeepSeek.
Benchmarks make it feel like they can solve most Olympiad or even graduate-level problems easily (remember, my questions are easier and shorter, with fewer logic steps); Math Olympiad problems usually require quite a lot of steps to get there, sometimes multiple strategies, since some won't work.
The only reason I can think of is that perhaps they are given more computational resources when running benchmarks.
These questions are handcrafted and will not have much coverage in the training data, but logically they are easy.
Example of a math puzzle:
There are N identical black balls in a bag. I randomly take one ball out of the bag. If it is a black ball, I throw it away and put a white ball into the bag instead. If it is a white ball, I simply throw it away and do not put anything back into the bag. Each ball in the bag is equally likely to be drawn.
Questions:
1. How many times will I need to reach into the bag to empty it?
2. What is the ratio of the expected maximum number of white balls in the bag to N in the limit as N goes to infinity?
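A quick Monte Carlo sketch of the process (not part of the original puzzle, just a sanity check) can estimate both quantities:

```python
# Monte Carlo sketch of the ball puzzle: start with N black balls; drawing a
# black ball swaps it for a white one, drawing a white ball just removes it.
import random

def simulate(n):
    black, white = n, 0
    draws, max_white = 0, 0
    while black + white > 0:
        draws += 1
        # Each ball currently in the bag is equally likely to be drawn.
        if random.random() < black / (black + white):
            black -= 1
            white += 1          # black ball replaced by a white one
        else:
            white -= 1          # white ball removed, nothing added
        max_white = max(max_white, white)
    return draws, max_white

N, trials = 1000, 200
results = [simulate(N) for _ in range(trials)]
avg_draws = sum(d for d, _ in results) / trials
avg_max_white = sum(m for _, m in results) / trials
# Each black ball is drawn exactly once (becoming white) and each white ball
# is drawn exactly once (being removed), so draws to empty is always 2*N.
print(f"draws to empty: {avg_draws:.1f} (always exactly {2 * N})")
print(f"E[max white]/N is roughly {avg_max_white / N:.3f}")
```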
| 2025-04-25T01:08:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k78m4e/how_come_llm_score_high_on_benchmark_tests_but_it/
|
Competitive-Anubis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k78m4e
| false | null |
t3_1k78m4e
|
/r/LocalLLaMA/comments/1k78m4e/how_come_llm_score_high_on_benchmark_tests_but_it/
| false | false |
self
| 0 | null |
Open source model for Cline
| 6 |
Which open source model are you using with Cline or Continue.dev? I was using qwen2.5-coder-7b, which was average, and have now moved to gemma-3-27b. Testing in progress.
| 2025-04-25T01:09:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1k78mus/open_source_model_for_cline/
|
dnivra26
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k78mus
| false | null |
t3_1k78mus
|
/r/LocalLLaMA/comments/1k78mus/open_source_model_for_cline/
| false | false |
self
| 6 | null |
How to make oobabooga and SillyTavern have autosummary and context shifting like KoboldAI
| 1 |
[removed]
| 2025-04-25T01:32:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1k792rk/how_to_make_oobabooga_and_sillytavern_have/
|
Wardensc5
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k792rk
| false | null |
t3_1k792rk
|
/r/LocalLLaMA/comments/1k792rk/how_to_make_oobabooga_and_sillytavern_have/
| false | false |
self
| 1 | null |
“Periodic table of machine learning” could fuel AI discovery | mit.edu
| 1 | 2025-04-25T01:37:59 |
https://news.mit.edu/2025/machine-learning-periodic-table-could-fuel-ai-discovery-0423
|
ttkciar
|
news.mit.edu
| 1970-01-01T00:00:00 | 0 |
{}
|
1k796g3
| false | null |
t3_1k796g3
|
/r/LocalLLaMA/comments/1k796g3/periodic_table_of_machine_learning_could_fuel_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '_b7x0GBm1oxc3z_joCtuNOir69Z9zkNVV7xrq9s9GZ4', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=108&crop=smart&auto=webp&s=7a578347cdf4650ccd8a876302929ab17569a764', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=216&crop=smart&auto=webp&s=757deee59bf560399703694cb4c214cc895a3f58', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=320&crop=smart&auto=webp&s=3887b28ff1d304c06b1c1ee2660b8b5fdc5518be', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=640&crop=smart&auto=webp&s=b0fca5f08a3c4b9e11e82c6cae42488c6cb8afd0', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=960&crop=smart&auto=webp&s=7f16de838f986c2e11e12aabca3c5de5d5a58ace', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?width=1080&crop=smart&auto=webp&s=afaa627675a613a87b9705bdec259320697758cd', 'width': 1080}], 'source': {'height': 1039, 'url': 'https://external-preview.redd.it/SH1H0S47iTzGkZ-vHNIW5pMiNhxDkCKsmJuYEyasBT8.jpg?auto=webp&s=d147474867b91941501d068229fe87c3b2321146', 'width': 1559}, 'variants': {}}]}
|
||
Here is my use case for LM studio.
| 0 |
I am currently working in a corporate environment, and I would like to:
git pull a request from the corporate master branch,
and after that use LM Studio to actually edit the code.
Is this actually possible?
| 2025-04-25T01:47:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k79d1l/here_is_my_use_case_for_lm_studio/
|
Objective_Wonder7359
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k79d1l
| false | null |
t3_1k79d1l
|
/r/LocalLLaMA/comments/1k79d1l/here_is_my_use_case_for_lm_studio/
| false | false |
self
| 0 | null |
Created a website for modelling LLM throughput
| 1 |
[removed]
| 2025-04-25T03:04:21 |
https://www.reddit.com/gallery/1k7au06
|
Significant_Hat_5048
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7au06
| false | null |
t3_1k7au06
|
/r/LocalLLaMA/comments/1k7au06/created_a_website_for_modelling_llm_throughput/
| false | false | 1 | null |
|
Alternative to NotebookLM/Perplexity with Privacy
| 1 |
[removed]
| 2025-04-25T03:13:49 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7b070
| false | null |
t3_1k7b070
|
/r/LocalLLaMA/comments/1k7b070/alternative_to_notebooklmperplexity_with_privacy/
| false | false |
default
| 1 | null |
||
Alternative to NotebookLM/Perplexity with Privacy
| 1 |
[removed]
| 2025-04-25T03:14:58 |
https://github.com/MODSetter/SurfSense
|
Uiqueblhats
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7b0xi
| false | null |
t3_1k7b0xi
|
/r/LocalLLaMA/comments/1k7b0xi/alternative_to_notebooklmperplexity_with_privacy/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'E_EbySHYVLRQWRmmRw3cTyXji9AT1osU_5fNlGqzUkg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=108&crop=smart&auto=webp&s=cf7ea4f41497a7761aa6cc9e35ed022d776aa21d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=216&crop=smart&auto=webp&s=594621aadaffae1a12d2472c06f7735b7ec04c96', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=320&crop=smart&auto=webp&s=a5ce905c71e3a7a5f1e54afdbd4a1795e9fefd21', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=640&crop=smart&auto=webp&s=4cd00d6957395c9fb4b5ba7e29b9d27935cffc9d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=960&crop=smart&auto=webp&s=b9cc977f2c338bc067b9c9334a2ba60cdf528709', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?width=1080&crop=smart&auto=webp&s=7dcb5024da29b00d42255646de9d3fdaf5a493de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aIQlKnIjLHQVr-vGIwdvWc1cuPNqcAect_kOZKSQT4o.jpg?auto=webp&s=9bd3bb555694a6de9bb83beb56ee19a58725ea0b', 'width': 1200}, 'variants': {}}]}
|
|
Tina: Tiny Reasoning Models via LoRA
| 46 | 2025-04-25T03:15:19 |
https://huggingface.co/Tina-Yi
|
ninjasaid13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7b16b
| false | null |
t3_1k7b16b
|
/r/LocalLLaMA/comments/1k7b16b/tina_tiny_reasoning_models_via_lora/
| false | false | 46 |
{'enabled': False, 'images': [{'id': '2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=108&crop=smart&auto=webp&s=c6269b3f6fe340a1b9be55bf1bb83c531f2c9edc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=216&crop=smart&auto=webp&s=6cc2689d8a489339835bf362a55f343e25a7171b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=320&crop=smart&auto=webp&s=d740010b8d7c5cdfbfb567c84c935f9071674734', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=640&crop=smart&auto=webp&s=ed193f7053082f7c7ee06bae579be5b381f83a16', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=960&crop=smart&auto=webp&s=fdbd5b72678d2bf37a97ef759ac24c6608a3efdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?width=1080&crop=smart&auto=webp&s=f56a27199e2e95dc74ec172de4c252073fd9eb99', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2l5XveVVtO0RR14BTRaA53RvbMsjyO0Bvv0miba3ivc.png?auto=webp&s=6dadf5b13a5731d8fdae8cf5f8c7abd33cc9d09d', 'width': 1200}, 'variants': {}}]}
|
||
Developed a website for modelling LLM throughput
| 70 |
**You can simply copy and paste the model config from Hugging Face, and it will automatically extract the necessary information for calculations. It also supports Gated FFN and GQA to improve calculation accuracy.**
**Todo:**
* MoE
* Encoder-Decoder
I built this because the old Desmos version had several serious flaws, and many people complained it was hard to use. So I spent some time developing this website, hope it helps!
[https://slack-agent.github.io/LLM-Performance-Visualizer/](https://slack-agent.github.io/LLM-Performance-Visualizer/)
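For anyone curious what the underlying arithmetic roughly looks like, here is a simplified bandwidth-bound decode estimate in a few lines of Python; this is a simplification rather than the site's actual model (which also accounts for GQA, gated FFN, and more), and the numbers are placeholders.

```python
# Rough roofline sketch: single-stream decode speed is usually limited by how
# fast the weights (plus KV cache) can be streamed from memory each token.
# This is a simplification, not the site's exact model; numbers are examples.

def decode_tokens_per_s(params_b, bytes_per_weight, kv_gb, mem_bw_gb_s):
    weight_gb = params_b * bytes_per_weight      # GB of weights read per token
    bytes_per_token_gb = weight_gb + kv_gb       # plus KV cache traffic
    return mem_bw_gb_s / bytes_per_token_gb

# Example: ~70B dense model at 4.5 bits/weight on a 936 GB/s GPU, ~2 GB KV read.
print(f"{decode_tokens_per_s(70, 4.5 / 8, 2.0, 936):.1f} tok/s (upper bound)")
```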
| 2025-04-25T03:21:57 |
https://www.reddit.com/gallery/1k7b5j6
|
Mindless_Pain1860
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7b5j6
| false | null |
t3_1k7b5j6
|
/r/LocalLLaMA/comments/1k7b5j6/developed_a_website_for_modelling_llm_throughput/
| false | false | 70 |
{'enabled': True, 'images': [{'id': '8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=108&crop=smart&auto=webp&s=9f9ddd2f2013418b83f7bb95052b657051ad9669', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=216&crop=smart&auto=webp&s=d91f26369abe88f97841eba4f999af60bbc4a2a3', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=320&crop=smart&auto=webp&s=d9246baed23acca584bdd6ef9315d8614441d501', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=640&crop=smart&auto=webp&s=36c88c2d31e0cd3fa47c34722489596c9f735a9d', 'width': 640}, {'height': 507, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=960&crop=smart&auto=webp&s=7a27de5c09f8307b57d854a788b5ab1b69967b0d', 'width': 960}, {'height': 570, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?width=1080&crop=smart&auto=webp&s=df2b30a17185ccb2808bd69a20540285fd7ecf3f', 'width': 1080}], 'source': {'height': 938, 'url': 'https://external-preview.redd.it/8P3kKONxqp49lLZzilGLkIwcXiwOoVAslJpLelCDfEw.png?auto=webp&s=af74e254d7a8a2cabc040923f0427debd609aea4', 'width': 1775}, 'variants': {}}]}
|
|
llama4 Scout 31tok/sec on dual 3090 + P40
| 22 |
Testing out Unsloth's latest dynamic quants (Q4_K_XL) on 2x3090 and a P40. The P40 is a third the speed of the 3090s but still manages to get 31 tokens/second.
I normally run llama3.3 70B Q4_K_M with llama3.2 3B as a draft model. The same test runs at about 20 tok/sec, so roughly a 10 tok/sec increase.
Power usage is about the same too, 420W, as the P40 limits the 3090s a bit.
I'll have to give llama4 a spin to see how it feels over llama3.3 for my use case.
Here are my llama-swap configs for the models:
```yaml
"llama-70B-dry-draft":
  proxy: "http://127.0.0.1:9602"
  cmd: >
    /mnt/nvme/llama-server/llama-server-latest
    --host 127.0.0.1 --port 9602 --flash-attn --metrics
    --ctx-size 32000
    --ctx-size-draft 32000
    --cache-type-k q8_0 --cache-type-v q8_0
    -ngl 99 -ngld 99
    --draft-max 8 --draft-min 1 --draft-p-min 0.9 --device-draft CUDA2
    --tensor-split 1,1,0,0
    --model /mnt/nvme/models/Llama-3.3-70B-Instruct-Q4_K_M.gguf
    --model-draft /mnt/nvme/models/Llama-3.2-3B-Instruct-Q4_K_M.gguf
    --dry-multiplier 0.8

"llama4-scout":
  env:
    - "CUDA_VISIBLE_DEVICES=GPU-eb1,GPU-6f0,GPU-f10"
  proxy: "http://127.0.0.1:9602"
  cmd: >
    /mnt/nvme/llama-server/llama-server-latest
    --host 127.0.0.1 --port 9602 --flash-attn --metrics
    --ctx-size 32000
    --ctx-size-draft 32000
    --cache-type-k q8_0 --cache-type-v q8_0
    -ngl 99
    --model /mnt/nvme/models/unsloth/llama-4/UD-Q4_K_XL/Llama-4-Scout-17B-16E-Instruct-UD-Q4_K_XL-00001-of-00002.gguf
    --samplers "top_k;top_p;min_p;dry;temperature;typ_p;xtc"
    --dry-multiplier 0.8
    --temp 0.6
    --min-p 0.01
    --top-p 0.9
```
Thanks to the unsloth team for awesome quants and guides!
| 2025-04-25T05:00:28 |
https://v.redd.it/y9jothhnvwwe1
|
No-Statement-0001
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7cvjr
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/y9jothhnvwwe1/DASHPlaylist.mpd?a=1748149245%2CYmQyYjkzZTZiZWU4OTQxNWRlZjFhZDQ2MTk4MzNiOGIzYjZiZGI3MzBjMjQ0Y2NhZjdlMTg5NDc0M2QxN2U1Yg%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/y9jothhnvwwe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/y9jothhnvwwe1/HLSPlaylist.m3u8?a=1748149245%2CN2RiNmZlNWFmMjk5MTQ0ODU5YTk4M2FjOTA5ZjkxYmFhZGE0Y2E2MTgyMmQ5NmU5OTk4OTUxMjhkYTUxM2YyYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y9jothhnvwwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k7cvjr
|
/r/LocalLLaMA/comments/1k7cvjr/llama4_scout_31toksec_on_dual_3090_p40/
| false | false | 22 |
{'enabled': False, 'images': [{'id': 'cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=108&crop=smart&format=pjpg&auto=webp&s=2823958e1d0902028c2cfcd9e524481ed4a44573', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=216&crop=smart&format=pjpg&auto=webp&s=73dcd261d1659aef4d89a0d6c8a3798756690712', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=320&crop=smart&format=pjpg&auto=webp&s=40b8145cb9bd32398b2c61272cb43ce73af514fd', 'width': 320}, {'height': 512, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=640&crop=smart&format=pjpg&auto=webp&s=d1b6fc9d4362e07e3b0329a0543c6c5266385264', 'width': 640}, {'height': 769, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=960&crop=smart&format=pjpg&auto=webp&s=b3927b3f44af29f52a766e63b677c5a87f0566db', 'width': 960}, {'height': 865, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a1dab9599254d1819a5ac12168a1194e92e1cefd', 'width': 1080}], 'source': {'height': 1538, 'url': 'https://external-preview.redd.it/cjkzZmhqaG52d3dlMVXjerzhcH-b7Q4Q2hx5viTIB_FNu9CjHdNtLi7ZknFq.png?format=pjpg&auto=webp&s=7efb4709a28e9aa7e2159a0731331bfa1093a4f8', 'width': 1920}, 'variants': {}}]}
|
|
Anyone else using Tensordock and feel cheated?
| 6 |
After they were acquired by Voltage Park, everything that was running before at this company broke down.
I think they got acquired by a competitor and have been left for dead.
Servers not running or not accessible.
No customer support! No one available on chat!
All your credits are non-refundable. You also cannot use them to start new servers; the new servers are also either not running or not accessible.
| 2025-04-25T05:05:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7cyoh/anyone_else_using_tensordock_and_feel_cheated/
|
CryLucky4944
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7cyoh
| false | null |
t3_1k7cyoh
|
/r/LocalLLaMA/comments/1k7cyoh/anyone_else_using_tensordock_and_feel_cheated/
| false | false |
self
| 6 | null |
EasyWhisperUI Now on macOS – Native Metal GPU Acceleration | Open Source Whisper Desktop App (Windows & Mac)
| 32 |
I'm happy to say my application EasyWhisperUI now has full **macOS** support thanks to an amazing contribution from [**u/celerycoloured**](https://github.com/celerycoloured), who ported it. Mac users, if you're looking for a free transcription application, I'd love to see your results.
[https://github.com/mehtabmahir/easy-whisper-ui](https://github.com/mehtabmahir/easy-whisper-ui)
# Major Update: macOS Support
Thanks to [**u/celerycoloured**](https://github.com/celerycoloured) who submitted a fantastic pull request, **EasyWhisper UI now runs natively on macOS** — with full **Metal API** GPU acceleration.
You can now transcribe using the power of your Mac’s GPU (Apple Silicon supported).
Huge credit to celerycoloured for:
* Porting the UI to macOS
* Using `QDesktopServices` for file opening
* Adding a macOS app bundle builder with Whisper compiled inside
* Handling paths cleanly across platforms [Pull Request #6](https://github.com/mehtabmahir/easy-whisper-ui/pull/6)
# Features
* macOS support (M1, M2, M3 — all Apple Silicon)
* Windows 10/11 support
* GPU acceleration via Vulkan (Windows) and Metal (macOS)
* Batch processing — drag in multiple files or use "Open With" on many at once
* Fully C++ and Qt — no Python, no scripting
* Auto-converts to `.mp3` if needed using FFmpeg
* Dropdowns to pick model and language
* Additional arguments textbox for Whisper advanced settings
* Automatically downloads missing models
* Real-time console output
* Choose `.txt` or `.srt` output (with timestamps)
# Requirements
* Windows 10/11 with VulkanSDK support (almost all modern systems)
* macOS (Apple Silicon: M1, M2, M3)
It’s completely free to use.
# Credits
* Whisper engine: [whisper.cpp](https://github.com/ggerganov/whisper.cpp) by Georgi Gerganov
* FFmpeg builds by [Gyan.dev](https://www.gyan.dev/ffmpeg/)
* Built with Qt
* Installer built with Inno Setup
* macOS port by [u/celerycoloured](https://github.com/celerycoloured)
If you want a simple, native, fast Whisper app for both Windows and macOS without needing to deal with Python or scripts, give EasyWhisper UI a try. It’s completely free.
| 2025-04-25T05:26:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7dar7/easywhisperui_now_on_macos_native_metal_gpu/
|
mehtabmahir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7dar7
| false | null |
t3_1k7dar7
|
/r/LocalLLaMA/comments/1k7dar7/easywhisperui_now_on_macos_native_metal_gpu/
| false | false |
self
| 32 |
{'enabled': False, 'images': [{'id': 'zg2J8fkYctSx08XzN4ZM7Gfo3LpN6WODPNk4vPOX2-I', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LmdcryGR0POEEgR2Drix2bSoUle7IDnXjp9oUQt4PKg.jpg?width=108&crop=smart&auto=webp&s=92ee6a448155f0d9b1c64f1fa045994e5c3a4205', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/LmdcryGR0POEEgR2Drix2bSoUle7IDnXjp9oUQt4PKg.jpg?width=216&crop=smart&auto=webp&s=6d49eb1124a938d0ef0d88266bb22bc63e28e0b9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/LmdcryGR0POEEgR2Drix2bSoUle7IDnXjp9oUQt4PKg.jpg?width=320&crop=smart&auto=webp&s=8e161428f6866288870bc9def1fed729cb3e6187', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/LmdcryGR0POEEgR2Drix2bSoUle7IDnXjp9oUQt4PKg.jpg?auto=webp&s=6e967d3ff5391b2962d1dd36d006b9006d40a343', 'width': 460}, 'variants': {}}]}
|
Best Model with 45/50GB of RAM
| 1 |
[removed]
| 2025-04-25T05:35:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7dfa9/best_model_with_4550gb_of_ram/
|
Business_Kiwi3098
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7dfa9
| false | null |
t3_1k7dfa9
|
/r/LocalLLaMA/comments/1k7dfa9/best_model_with_4550gb_of_ram/
| false | false |
self
| 1 | null |
Google Colab T4 GPU: ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
| 0 |
I am trying to run Qwen's OCR example following this tutorial: [https://github.com/QwenLM/Qwen2.5-VL/blob/main/cookbooks/ocr.ipynb](https://github.com/QwenLM/Qwen2.5-VL/blob/main/cookbooks/ocr.ipynb)
This is the Google Colab: [https://colab.research.google.com/drive/1JR1Abv9ORIQZWcjm5-xdFM4zJo6hdp51?usp=sharing](https://colab.research.google.com/drive/1JR1Abv9ORIQZWcjm5-xdFM4zJo6hdp51?usp=sharing)
| 2025-04-25T05:59:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7dsao/google_colab_t4_gpu_valueerror_pointer_argument/
|
Sudden_Breakfast_358
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7dsao
| false | null |
t3_1k7dsao
|
/r/LocalLLaMA/comments/1k7dsao/google_colab_t4_gpu_valueerror_pointer_argument/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'qEVemCuU4NQl8o1bD5numZlpk1b1tL2dNW3nEhL-TkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=108&crop=smart&auto=webp&s=3ce500192cdb151a58c63df9a3471840886f48b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=216&crop=smart&auto=webp&s=8c8788991b4b24d29457f1930b8e3f5215c228e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=320&crop=smart&auto=webp&s=0d9111ef1a2b12895c4d63013fd07f3fe8c2d7ae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=640&crop=smart&auto=webp&s=68a8379214a861da3b78102d4e56f23f03f2f30a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=960&crop=smart&auto=webp&s=89aac2af1d0254f40749e531f05c9b7dc125c8e7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?width=1080&crop=smart&auto=webp&s=c846540106b32d920b7639bc40587ecca2406fd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h4tKysDwEJ_FFQFoCkqOH6lRIuaLBdP5CdfiJUrwHjg.jpg?auto=webp&s=c77414c8dac75c048c9c80a4a450ad7773bb84fe', 'width': 1200}, 'variants': {}}]}
|
7B Reasoning Rust Coding Model with Open Dataset
| 142 | 2025-04-25T06:22:07 |
https://huggingface.co/Tesslate/Tessa-Rust-T1-7B-Q8_0-GGUF
|
United-Rush4073
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7e542
| false | null |
t3_1k7e542
|
/r/LocalLLaMA/comments/1k7e542/7b_reasoning_rust_coding_model_with_open_dataset/
| false | false | 142 |
{'enabled': False, 'images': [{'id': 'SXLmtN_7Y6bQYRBkjivlsyIOV7CNGeemoKGaiU_B_SM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=108&crop=smart&auto=webp&s=42175229d65bb85b8b1b07404c8e9930fe52ac07', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=216&crop=smart&auto=webp&s=f4b7f93a75634cbdf194345e3d1c5a5979eddd35', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=320&crop=smart&auto=webp&s=db65254174dc032ec04abc9b0ffd2a6759f7c90f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=640&crop=smart&auto=webp&s=314df762ea670ace7afe4fd1f6277bc8c4f4c048', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=960&crop=smart&auto=webp&s=4a1d2fdc730b4fa2064186cacb42cf770aeeedbe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?width=1080&crop=smart&auto=webp&s=c2e9dd3895fe31315fd035f2c8571c06403b4719', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p7L3vw8UA3QYYsIQPN70mTI04OM5s45JyiPaERXOxBg.jpg?auto=webp&s=f5b4cf789c7e575aaed2a0ddd15208e3b862ad8c', 'width': 1200}, 'variants': {}}]}
|
||
Best Model with 45/50GB of RAM
| 1 |
[removed]
| 2025-04-25T06:32:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7eacc/best_model_with_4550gb_of_ram/
|
Business_Kiwi3098
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7eacc
| false | null |
t3_1k7eacc
|
/r/LocalLLaMA/comments/1k7eacc/best_model_with_4550gb_of_ram/
| false | false |
self
| 1 | null |
Best Model with 45/50G of RAM
| 1 |
[removed]
| 2025-04-25T06:51:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7ekjb/best_model_with_4550g_of_ram/
|
Business_Kiwi3098
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ekjb
| false | null |
t3_1k7ekjb
|
/r/LocalLLaMA/comments/1k7ekjb/best_model_with_4550g_of_ram/
| false | false |
self
| 1 | null |
UpHill Conf Bern, anyone from localllama here?
| 1 |
[removed]
| 2025-04-25T07:05:33 |
_underlines_
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7erpx
| false | null |
t3_1k7erpx
|
/r/LocalLLaMA/comments/1k7erpx/uphill_conf_bern_anyone_from_localllama_here/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ZKDQGD58mmRAg7gAs7wTj7yixFf1rIneyLst-TEhH98', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=108&crop=smart&auto=webp&s=34fc945c18c6f0f8fc0227d723f92349f53c80d3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=216&crop=smart&auto=webp&s=32efc9db7ee3b2bbbaf58a7dd70673b07a917129', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=320&crop=smart&auto=webp&s=877d8dc13c6def9110cb6d8ba7de2c88f662c7cb', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=640&crop=smart&auto=webp&s=884418f54fcaaae461f452f30ac88620220b3e57', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=960&crop=smart&auto=webp&s=6984169bd675805dabbb83497c8b4dd360193694', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?width=1080&crop=smart&auto=webp&s=fc67a3e6767bf8aabf3982549da29184bc46cb5c', 'width': 1080}], 'source': {'height': 3472, 'url': 'https://preview.redd.it/32s74zxyjxwe1.jpeg?auto=webp&s=a7b85aa618cad86d3509d01b5cf0b23800380028', 'width': 4624}, 'variants': {}}]}
|
||
AI Science Fair 2025 Extended Video Demo
| 6 |
AI Science Fair tests show that the LLMAgent has narrow visibility into the Science Fair Agent data store. In case anyone is interested.
| 2025-04-25T07:20:07 |
Financial_Pick8394
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7eyrb
| false | null |
t3_1k7eyrb
|
/r/LocalLLaMA/comments/1k7eyrb/ai_science_fair_2025_extended_video_demo/
| false | false |
default
| 6 |
{'enabled': True, 'images': [{'id': 'wifjju1nlxwe1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=108&crop=smart&format=png8&s=29d40e68d80cb87abac1eb1969908c5bb6a38412', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=216&crop=smart&format=png8&s=f05314115d36a205ebe41f72e918fb3e156bb8cd', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=320&crop=smart&format=png8&s=f83ea4ac4f5e799461e988c1926183c85facdaab', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=640&crop=smart&format=png8&s=1904f0c0f8d47eb5a02eb590758f0d77f8452507', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=960&crop=smart&format=png8&s=6bb7630a929d46f58aad0545e475b5e99a2f55ff', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=1080&crop=smart&format=png8&s=02787cb556451be51fd4b69f4c29d2658a272345', 'width': 1080}], 'source': {'height': 766, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?format=png8&s=c4af4b976f2266c8216a330e89be740cb3254614', 'width': 1366}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=108&crop=smart&s=f913e6959ea711132838ce19ed626a69c4a231aa', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=216&crop=smart&s=0a02a26fcfcf31779f14a47de963ef1da3de4a21', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=320&crop=smart&s=50d62f448108548143ab95d59ce27b641acc4be2', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=640&crop=smart&s=82015a346375eb36f234e6f40cee058d125ba024', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=960&crop=smart&s=78aa27941619064675b63d8d2b54cac6bcb6a6f5', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=1080&crop=smart&s=baab76e0e6581d8b64fb76bd191662fdf9577460', 'width': 1080}], 'source': {'height': 766, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?s=4cb55358cdb33c1d9ce5d4be6904fa88ab885b8d', 'width': 1366}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=108&format=mp4&s=ae3571993244bcafcff1003c861ec04ae779ef15', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=216&format=mp4&s=1294d9d1557e2704f688e6fc94c7d2ea843f282e', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=320&format=mp4&s=c9f1de9c5001591ca96b6df3784b3490003858c6', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=640&format=mp4&s=c43381eba11cda1ad28d1f29ebeb6e3ae16f6a4b', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=960&format=mp4&s=b9b462336d41c2356a0b4026222372f6393cc7d7', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?width=1080&format=mp4&s=6afcdec8aac56a0f870aafa40c92e7e9e2bc57a1', 'width': 1080}], 'source': {'height': 766, 'url': 'https://preview.redd.it/wifjju1nlxwe1.gif?format=mp4&s=9cc1b70b8d57dfd08151039769ed06679bd3f86e', 'width': 1366}}}}]}
|
|
What about confidentiality when using an inference provider ?
| 1 |
[removed]
| 2025-04-25T07:33:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7f5in/what_about_confidentiality_when_using_an/
|
enzo_ghll
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7f5in
| false | null |
t3_1k7f5in
|
/r/LocalLLaMA/comments/1k7f5in/what_about_confidentiality_when_using_an/
| false | false |
self
| 1 | null |
Llama4 scout 17B 16E model not able to fit in single H100 with int4 Quantization
| 1 |
[removed]
| 2025-04-25T07:41:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7f9kj/llama4_scout_17b_16e_model_not_able_to_fit_in/
|
Haunting-Young6488
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7f9kj
| false | null |
t3_1k7f9kj
|
/r/LocalLLaMA/comments/1k7f9kj/llama4_scout_17b_16e_model_not_able_to_fit_in/
| false | false |
self
| 1 | null |
Concerned about economical feasibility of LLMs: Are we about to see enshittification of them? (Price hikes, smaller models for paying users)
| 22 |
LLM inference is highly expensive, which is why OpenAI loses money giving users on the Pro plan unlimited access to its models, despite the $200/month price tag.
I enjoy using ChatGPT, Gemini, and Claude as a programmer, but I'm becoming increasingly concerned about the inability to extract profits from them. I don't worry about their executives and their wealth, of course, but being unprofitable means price hikes could be heading our way.
I'm worried because investments (OpenAI) or loss leading (Google) are unsustainable long-term, and so we might see massive increases in inference costs (both API and UI monthly subscription) in the coming years, and/or less access to high-parameter count models like o3 and Gemini 2.5 Pro.
I can't see how this won't happen, except for a breakthrough in GPU/TPU architectures increasing FLOPS by a few orders of magnitude, and/or a move from the Transformer architecture to something else that'll be more efficient.
What do you guys think?
| 2025-04-25T07:43:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7faed/concerned_about_economical_feasibility_of_llms/
|
Endonium
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7faed
| false | null |
t3_1k7faed
|
/r/LocalLLaMA/comments/1k7faed/concerned_about_economical_feasibility_of_llms/
| false | false |
self
| 22 | null |
How familiar are you with Docker?
| 0 |
[View Poll](https://www.reddit.com/poll/1k7fc38)
| 2025-04-25T07:47:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7fc38/how_familiar_are_you_with_docker/
|
okaris
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7fc38
| false | null |
t3_1k7fc38
|
/r/LocalLLaMA/comments/1k7fc38/how_familiar_are_you_with_docker/
| false | false |
self
| 0 | null |
Best pipeline for Emotionally accurate Movie dubbing in 2025
| 1 |
[removed]
| 2025-04-25T08:16:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7fq9k/best_pipeline_for_emotionally_accurate_movie/
|
Few_Interview_3030
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7fq9k
| false | null |
t3_1k7fq9k
|
/r/LocalLLaMA/comments/1k7fq9k/best_pipeline_for_emotionally_accurate_movie/
| false | false |
self
| 1 | null |
Best Model with 45/50GB of RAM
| 1 |
[removed]
| 2025-04-25T09:14:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7giqz/best_model_with_4550gb_of_ram/
|
Business_Kiwi3098
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7giqz
| false | null |
t3_1k7giqz
|
/r/LocalLLaMA/comments/1k7giqz/best_model_with_4550gb_of_ram/
| false | false |
self
| 1 | null |
Backend driven frontend automation seems better than Vibe Coding
| 1 |
[removed]
| 2025-04-25T09:54:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7h3is/backend_driven_frontend_automation_seems_better/
|
jhnam88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7h3is
| false | null |
t3_1k7h3is
|
/r/LocalLLaMA/comments/1k7h3is/backend_driven_frontend_automation_seems_better/
| false | false |
self
| 1 | null |
Playing around with local AI using Svelte, Ollama, and Tauri
| 7 | 2025-04-25T10:03:25 |
https://v.redd.it/1gxttlsteywe1
|
HugoDzz
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7h8i5
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1gxttlsteywe1/DASHPlaylist.mpd?a=1748167422%2CNjE4M2IyOGQ3ZjA1ZmYxZmU3NTFhZmNjNWYwMjFhNGMwYTAxOWEyNmQzNTZjZWRhMmQ5Y2U4YjUzZjU2OTJlMA%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/1gxttlsteywe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/1gxttlsteywe1/HLSPlaylist.m3u8?a=1748167422%2CMzJmNTQyOTk2YWJjYmI2ZmIxNGQ1MzMxMGQwMGUzNThmYTVjNTAxMzBhNWZkYzE4YTFkNGEzM2M5ZTYzZThjMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1gxttlsteywe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1544}}
|
t3_1k7h8i5
|
/r/LocalLLaMA/comments/1k7h8i5/playing_around_with_local_ai_using_svelte_ollama/
| false | false | 7 |
{'enabled': False, 'images': [{'id': 'eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=108&crop=smart&format=pjpg&auto=webp&s=3338d17d8e92b7aa855b358be9b93ad59828c66f', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=216&crop=smart&format=pjpg&auto=webp&s=d7dd8ff2772c56e3d6d18efc5dccd8a8821d5425', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=320&crop=smart&format=pjpg&auto=webp&s=e59809fd9073a333c563d28a1c8d0b04cfb5a1e9', 'width': 320}, {'height': 447, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=640&crop=smart&format=pjpg&auto=webp&s=8d8b82757fe3f7f3c82d0471e6a4894e1dd8ed0d', 'width': 640}, {'height': 671, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=960&crop=smart&format=pjpg&auto=webp&s=311f19a49f5a09e0019916a12bd0076b8a077d94', 'width': 960}, {'height': 755, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?width=1080&crop=smart&format=pjpg&auto=webp&s=76a654a731996d59ab759882a5bed590234d9461', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/eXo0dThrc3RleXdlMYJ7TjwC-MuNs3Q8b2AUuM3aKkqtmo6GeWDrUfPUdO1F.png?format=pjpg&auto=webp&s=de791173462f5ba8a7214da2b22a449ae7dd996d', 'width': 3088}, 'variants': {}}]}
|
||
Seeking modestly light/small instruct model for mid-tier pc
| 0 |
Seeking an all-around instruct model for local LLM use with LM Studio. I have an RTX 5070, an AMD 7700X CPU, and 64 GB of RAM.

Use case: general AI prompting, plus some RAG with small text files to consolidate general knowledge from my working career

Max parameters: prefer 8-14B max, my PC can't handle much more

Currently using Phi-4-Q4-K_M.gguf
| 2025-04-25T11:07:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7i8jz/seeking_modestly_lightsmall_instruct_model_for/
|
intimate_sniffer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7i8jz
| false | null |
t3_1k7i8jz
|
/r/LocalLLaMA/comments/1k7i8jz/seeking_modestly_lightsmall_instruct_model_for/
| false | false |
self
| 0 | null |
olmOCR-7B-faithful by TNG, a fine-tuned version of olmOCR-7B-0225-preview
| 32 |
A fine-tuned version of olmOCR-7B-0225-preview that aims to extract _all_ information from documents, including header and footer information.
Release article: https://huggingface.co/blog/tngtech/finetuning-olmocr-to-be-a-faithful-ocr-engine
| 2025-04-25T11:14:56 |
https://huggingface.co/tngtech/olmOCR-7B-faithful
|
hdmcndog
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ictf
| false | null |
t3_1k7ictf
|
/r/LocalLLaMA/comments/1k7ictf/olmocr7bfaithful_by_tng_a_finetuned_version_of/
| false | false | 32 |
{'enabled': False, 'images': [{'id': '4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=108&crop=smart&auto=webp&s=3355921ab9add8981a930cf21f75e94c39e19cbb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=216&crop=smart&auto=webp&s=91bb8c1566b2698bd5de1d09e22dffba14b273ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=320&crop=smart&auto=webp&s=de59575948756d85ae61a9ecc48dba8bcc17aee7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=640&crop=smart&auto=webp&s=0c7c740e1d09161f2481a17c5befe38bca7b30de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=960&crop=smart&auto=webp&s=74b55bf543dfb3dd2bc3b0205a49ae3a20f54d64', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?width=1080&crop=smart&auto=webp&s=f1329ffa384c46c789f2bbd2af281234fd9d29f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4RZYg5749xg-1AffmqgsNSXXr7Iuj60lXffvU2kpcPo.png?auto=webp&s=481d9d6794a45375253db0462be7bdf027698e6f', 'width': 1200}, 'variants': {}}]}
|
|
Modular have come a long way in just 3 years
| 32 |
In their latest presentation, they talk about how they now support CPUs (x86 & ARM since 2023) and NVIDIA & AMD GPUs (I believe it is currently optimized for the A100, H100 & MI300X; there might be more, but those are the models I have seen mentioned).

They have already open sourced some of their code and will soon release ~250k lines of GPU kernel code, and we will soon find out how the Python interoperability is coming along.
They have a new simpler license for Mojo and MAX.
Presentation (unfortunately bad audio):
https://www.youtube.com/live/uul6hZ5NXC8
Article from EE Times:
https://www.eetimes.com/after-three-years-modulars-cuda-alternative-is-ready/
| 2025-04-25T11:55:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7j2h5/modular_have_come_a_long_way_in_just_3_years/
|
Cane_P
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7j2h5
| false | null |
t3_1k7j2h5
|
/r/LocalLLaMA/comments/1k7j2h5/modular_have_come_a_long_way_in_just_3_years/
| false | false |
self
| 32 | null |
LM Studio doesn't support image to text?
| 0 |
LM Studio appears to have a paste option with a paperclip icon, but even with a model like https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503, it indicates that the model "doesn't support image to text", even though the model card on Hugging Face explicitly says it does.
| 2025-04-25T12:01:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7j6w7/lm_studio_doesnt_support_image_to_text/
|
intimate_sniffer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7j6w7
| false | null |
t3_1k7j6w7
|
/r/LocalLLaMA/comments/1k7j6w7/lm_studio_doesnt_support_image_to_text/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=108&crop=smart&auto=webp&s=cf614cf3c6755f0cd3aa9f649b2afe75af55e27a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=216&crop=smart&auto=webp&s=a84b0c5cf654afb18e93493f8b9cba95663af0f7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=320&crop=smart&auto=webp&s=156169b4229382f0a61bbce72b48d1e53c9d4d4a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=640&crop=smart&auto=webp&s=f97e803e8119249e96ab081561dba7565ed27918', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=960&crop=smart&auto=webp&s=e21489fdd447dd1e55e6cd37db4efb0b3cff376e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?width=1080&crop=smart&auto=webp&s=40440477fe40a3945e432b5b3cbc219c69e23feb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Rl59-92ULDaEkG4w14celKUA7TiXd7KqnM3lZklnbqg.png?auto=webp&s=caa936aa93ced0b9b78450308d0c92b9686b5b75', 'width': 1200}, 'variants': {}}]}
|
Give Your Local LLM Superpowers! 🚀 New Guide to Open WebUI Tools
| 1 |
[removed]
| 2025-04-25T12:24:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7jmb5/give_your_local_llm_superpowers_new_guide_to_open/
|
PeterHash
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7jmb5
| false | null |
t3_1k7jmb5
|
/r/LocalLLaMA/comments/1k7jmb5/give_your_local_llm_superpowers_new_guide_to_open/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=108&crop=smart&auto=webp&s=9903e1b1daa5ccb1eb96c75e7b24e5469a55e3b3', 'width': 108}, {'height': 100, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=216&crop=smart&auto=webp&s=bbb973719dea853ab175695b33d9d9f52e79e0f2', 'width': 216}, {'height': 148, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=320&crop=smart&auto=webp&s=3daa4c7bd24ce9e5df99e36201bbdfbcd2691830', 'width': 320}, {'height': 297, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=640&crop=smart&auto=webp&s=f0b9e6301e7648819511977f10dcb395a2bb6690', 'width': 640}, {'height': 446, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=960&crop=smart&auto=webp&s=b55ceb712b4ca074630c203191f88353ee6c082b', 'width': 960}, {'height': 502, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?width=1080&crop=smart&auto=webp&s=0b1692c588de61d1be1df25052ddf9036b993cd1', 'width': 1080}], 'source': {'height': 558, 'url': 'https://external-preview.redd.it/2sL09NCheoWx3E0p0oaa70-uUwUZAZEG4sJKR984s-I.png?auto=webp&s=a87baf3282121535844d24b770631f5c3aba13c0', 'width': 1200}, 'variants': {}}]}
|
No thinking, is the right way to think?
| 141 |
[https://arxiv.org/abs/2504.09858](https://arxiv.org/abs/2504.09858)
TLDR:
Bypassing the thinking process by forcing the answer to begin with "Thinking: Okay, I think I have finished thinking" (lol), they get similar or better inference results!!!
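For anyone who wants to try the trick locally, here is a minimal sketch of the idea: pre-fill the start of the reply so the model skips straight to the answer. It assumes a llama.cpp-style server on localhost:8080 and a generic `<think>...</think>` reasoning template; both the endpoint and the template tokens are assumptions you should adapt to your own model.

```python
# Minimal sketch of the "no thinking" trick: instead of letting a reasoning
# model generate its chain of thought, pre-fill the answer so it skips it.
# Assumptions: llama.cpp-style server on localhost:8080, <think> style template.
import requests

question = "What is 17 * 24?"
prefill = "<think>\nOkay, I think I have finished thinking.\n</think>\n\n"

prompt = f"User: {question}\nAssistant: {prefill}"

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 256, "temperature": 0.0},
    timeout=120,
)
# The model continues from the pre-filled "finished thinking" line,
# producing only the final answer.
print(resp.json()["content"])
```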
| 2025-04-25T12:45:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7k1ck/no_thinking_is_the_right_way_to_think/
|
Eralyon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7k1ck
| false | null |
t3_1k7k1ck
|
/r/LocalLLaMA/comments/1k7k1ck/no_thinking_is_the_right_way_to_think/
| false | false |
self
| 141 | null |
Gemma 3 cannot be found or downloaded into LM Studio?
| 0 |
Never seen this error before... I'm trying to retrieve the Gemma 3 model that has image-to-text, but LM Studio cannot obtain this one model. Idk why? It's on HF: [https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf)
| 2025-04-25T12:58:29 |
intimate_sniffer69
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7kats
| false | null |
t3_1k7kats
|
/r/LocalLLaMA/comments/1k7kats/gemma_3_cannot_be_found_or_downloaded_into_lm/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '-zMUZfSvNc3Zmjt15Jgy69NAeIZIWmMKQxRIUHO8rR8', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/s58dvw14bzwe1.png?width=108&crop=smart&auto=webp&s=32e2dc6a552b87188a17fdc1c6aaf3fd2de175e7', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/s58dvw14bzwe1.png?width=216&crop=smart&auto=webp&s=07f2be170052c6115c3910417599f302726fe5cf', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/s58dvw14bzwe1.png?width=320&crop=smart&auto=webp&s=9622bed28e1d2c9d0fb79627952501211944c038', 'width': 320}], 'source': {'height': 152, 'url': 'https://preview.redd.it/s58dvw14bzwe1.png?auto=webp&s=9f5ac81daf7b623664bbcd80566d7e34206e5804', 'width': 484}, 'variants': {}}]}
|
||
Cline tool usage on RTX 4060ti 16GB VRAM
| 0 |
https://huggingface.co/bartowski/mistralai_Mistral-Small-3.1-24B-Instruct-2503-GGUF
This model is the only one I found that used Cline's replace_in_file tool successfully.

I used the LM Studio server with:

* IQ3_XXS quant
* ~90k context length
* Full GPU offload
* Flash attention enabled
* K and V cache set to Q4_0
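For anyone running llama.cpp directly instead of LM Studio, a rough equivalent of these settings might look like the sketch below. The flag names are assumed from recent llama.cpp builds and may differ in yours, so treat it as a starting point rather than a verified command.

```python
# Rough llama.cpp-equivalent of the LM Studio settings above (sketch only;
# verify flag names against your llama.cpp build).
import subprocess

cmd = [
    "llama-server",
    "-m", "mistralai_Mistral-Small-3.1-24B-Instruct-2503-IQ3_XXS.gguf",
    "-c", "90000",             # ~90k context length
    "-ngl", "99",              # full GPU offload
    "--flash-attn",            # flash attention enabled
    "--cache-type-k", "q4_0",  # K cache quantized to Q4_0
    "--cache-type-v", "q4_0",  # V cache quantized to Q4_0
]
subprocess.run(cmd, check=True)
```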
| 2025-04-25T12:59:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7kbap/cline_tool_usage_on_rtx_4060ti_16gb_vram/
|
sosuke
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7kbap
| false | null |
t3_1k7kbap
|
/r/LocalLLaMA/comments/1k7kbap/cline_tool_usage_on_rtx_4060ti_16gb_vram/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '5BsmRcxYkZFQBY2VUHRIVLxvu7bqpZv01JRjCB2H6h4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=108&crop=smart&auto=webp&s=df59e540a0d7460587ac3acab440deb48d1ec1ab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=216&crop=smart&auto=webp&s=4d1623f8ba35a701b32ad3c894e21a86cf7e3b4c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=320&crop=smart&auto=webp&s=31d76858eece5193dbbde229bcd72d2f947c404f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=640&crop=smart&auto=webp&s=eb0b49ba8b75bc25ad32bc70832d180991a32b83', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=960&crop=smart&auto=webp&s=e7dc03974d29de7eba21a4b3a9e9f3e49a5749bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?width=1080&crop=smart&auto=webp&s=7da13031790e246c87dc25042f2e525d78c11e1d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qIP34D5Kgw5BLEVvY0mA1gQFySaYFrQ266IHHsWePgg.jpg?auto=webp&s=bfe861fc2fa7eaec3acd5429e64582c40c2120ce', 'width': 1200}, 'variants': {}}]}
|
Gemma 3 fakes (and ignores) the system prompt
| 282 |
The screenshot shows what Gemma 3 said when I pointed out that it wasn't following its system prompt properly. "Who reads the fine print? 😉" - really, seriously, WTF?
At first I thought it may be an issue with the format/quant, an inference engine bug or just my settings or prompt. But digging deeper, I realized I had been fooled: While the [Gemma 3 chat template](https://huggingface.co/google/gemma-3-27b-it/blob/main/chat_template.json) *does* support a system role, all it *really* does is dump the system prompt into the first user message. That's both ugly *and* unreliable - doesn't even use any special tokens, so there's no way for the model to differentiate between what the system (platform/dev) specified as general instructions and what the (possibly untrusted) user said. 🙈
Sure, the model still follows instructions like any other user input - but it never learned to treat them as higher-level system rules, so they're basically "optional", which is why it ignored mine like "fine print". That makes Gemma 3 utterly unreliable - so I'm switching to Mistral Small 3.1 24B Instruct 2503 which has proper system prompt support.
Hopefully Google will provide *real* system prompt support in Gemma 4 - or the community will deliver a better finetune in the meantime. For now, I'm hoping Mistral's vision capability gets wider support, since that's one feature I'll miss from Gemma.
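To make the issue concrete, here is a minimal sketch of what the rendered prompt effectively looks like if the template behaves as described above: the system text is just glued onto the first user turn, with no dedicated system tokens. The turn markers are simplified assumptions; check the actual chat_template.json for the exact format.

```python
# Sketch of what Gemma 3's template effectively does with a "system" message,
# per the behaviour described above: the system prompt is merely prepended to
# the first user turn. Turn markers simplified/assumed.
def render_gemma3(system: str, user: str) -> str:
    merged = f"{system}\n\n{user}"  # system prompt glued onto the user message
    return (
        "<start_of_turn>user\n"
        f"{merged}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(render_gemma3("Only answer in French.", "Hi, who are you?"))
```

Because the model only ever sees one big user message, there is nothing marking the first part as higher-priority system instructions, which matches the "optional fine print" behaviour described above.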
| 2025-04-25T13:20:27 |
WolframRavenwolf
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7krlm
| false | null |
t3_1k7krlm
|
/r/LocalLLaMA/comments/1k7krlm/gemma_3_fakes_and_ignores_the_system_prompt/
| false | false | 282 |
{'enabled': True, 'images': [{'id': 'RdrHFVuL75v4cJHnc4ECNEt5TcowjmC7JsUychHLgoY', 'resolutions': [{'height': 13, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=108&crop=smart&auto=webp&s=a0f6a7cf88ce58709d542e40906f4eca2e25fba6', 'width': 108}, {'height': 27, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=216&crop=smart&auto=webp&s=d253c5ae600a50ca9a0ea13c8f8834a63a284261', 'width': 216}, {'height': 41, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=320&crop=smart&auto=webp&s=d81a08d58cefaa25c483245688ad29b369ef0fb4', 'width': 320}, {'height': 82, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=640&crop=smart&auto=webp&s=8fba119d92fca9059223ac136a22602c0f3b43b8', 'width': 640}, {'height': 123, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=960&crop=smart&auto=webp&s=600230244d19bbc0da36e15e9d8bf3487fc19710', 'width': 960}, {'height': 138, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?width=1080&crop=smart&auto=webp&s=b38283f72e4db1151854a76680fdf31ab60ba0c8', 'width': 1080}], 'source': {'height': 170, 'url': 'https://preview.redd.it/xuycbwnk4zwe1.jpeg?auto=webp&s=f7b1046d7b400fed676426dbb9cea1131e1bc951', 'width': 1326}, 'variants': {}}]}
|
||
Intel Updates Its PyTorch Extension With DeepSeek-R1 Support, New Optimizations
| 69 | 2025-04-25T13:25:15 |
https://www.phoronix.com/news/Intel-PyTorch-Extension-2.7
|
FastDecode1
|
phoronix.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7kv9a
| false | null |
t3_1k7kv9a
|
/r/LocalLLaMA/comments/1k7kv9a/intel_updates_its_pytorch_extension_with/
| false | false | 69 |
{'enabled': False, 'images': [{'id': 'F5oUFUYzd9d4D9kJnwukgpeNQLScaXVNGWHi044odWU', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=108&crop=smart&auto=webp&s=d9348d5cfd4b32c946512182295da847b136c2db', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=216&crop=smart&auto=webp&s=3e141f3487760d0d450e6c9d66746d2649321e28', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=320&crop=smart&auto=webp&s=24f35c75df2a9bfaadca94b7072b3f8f755f2705', 'width': 320}, {'height': 354, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=640&crop=smart&auto=webp&s=73fdf48b98f8c4bbf03db30badce672add745943', 'width': 640}, {'height': 532, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=960&crop=smart&auto=webp&s=876555a06a120ce7a05b98fbe2bc329bf9447e9d', 'width': 960}, {'height': 598, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?width=1080&crop=smart&auto=webp&s=f29040d4ca8209915f123e239fc0f94a838f51b0', 'width': 1080}], 'source': {'height': 1064, 'url': 'https://external-preview.redd.it/yTiUURrBkqcGYJGBhqzC01YOstzVvXfVd3FxAo3YWYU.jpg?auto=webp&s=369b8d0db10b77d44a529734f344ba618bdf945b', 'width': 1920}, 'variants': {}}]}
|
||
How do I get a remote international internship as an African?
| 1 |
[removed]
| 2025-04-25T14:07:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7ltrb/how_do_i_get_a_remote_international_internship_as/
|
Unfair-Turnip547
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ltrb
| false | null |
t3_1k7ltrb
|
/r/LocalLLaMA/comments/1k7ltrb/how_do_i_get_a_remote_international_internship_as/
| false | false |
self
| 1 | null |
LLM recommendations for a bot?
| 1 |
[removed]
| 2025-04-25T14:10:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7lwjj/llm_recommendations_for_a_bot/
|
RobTheDude_OG
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7lwjj
| false | null |
t3_1k7lwjj
|
/r/LocalLLaMA/comments/1k7lwjj/llm_recommendations_for_a_bot/
| false | false |
self
| 1 | null |
What tools are you using to manage a shared enterprise prompt library?
| 7 |
I'm looking for ways to manage a shared prompt library across multiple business groups within an enterprise.
Ideally, teams should be able to:
* Author and organize prompts (with tagging or folder structures)
* Share prompts across departments (og yahoo-style categorization)
* Leave comments or suggest edits
* View version history and changes
* Use prompts in web chat or assistant-style UI interfaces
* (Optionally) link prompts to systems like **Jira** or **Confluence** :P
* (Optionally) prompt performance benchmarking
The end users are mostly internal employees using prompts to interact with LLMs for things like task triage, summarization, and report generation. End users work in sales, marketing or engineering.
I may be describing a ~platform here but am interested in whatever tooling (internal or external) folks here are using—whether it’s a full platform, lightweight markdown in gists or snippets, or something else entirely.
| 2025-04-25T14:16:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7m1km/what_tools_are_you_using_to_manage_a_shared/
|
jetsetter
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7m1km
| false | null |
t3_1k7m1km
|
/r/LocalLLaMA/comments/1k7m1km/what_tools_are_you_using_to_manage_a_shared/
| false | false |
self
| 7 | null |
Local AI has become my latest ADHD hyperfixation and this sub is my catnip.
| 1 |
[removed]
| 2025-04-25T14:24:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7m8i3/local_ai_has_become_my_latest_adhd_hyperfixation/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7m8i3
| false | null |
t3_1k7m8i3
|
/r/LocalLLaMA/comments/1k7m8i3/local_ai_has_become_my_latest_adhd_hyperfixation/
| false | false |
self
| 1 | null |
Further explorations of 3090 idle power.
| 7 |
Following on from my post: https://www.reddit.com/r/LocalLLaMA/comments/1k2fb67/save_13w_of_idle_power_on_your_3090/
I started to investigate further:
* On a VM that had been upgraded, I wasn't able to get idle power down; there were probably too many things preventing the GPU from going idle, so I started from a clean slate, which worked
* There were many strange interactions. I noticed that starting a program on one GPU could kick another, unrelated GPU out of its low idle power state.
* Using nvidia-smi to reset the GPU restores low idle power after whatever broke it (see the sketch at the end of this post)
I have now replaced my P102-100 idling at 7W (which I used purely for its low idle power) with my 3090, since I can now get that to idle at 9W.
I will do some longer term testing to see if it maintains this.
I also found that my newly compiled version of llama.cpp breaks idle power.
The older one I built at commit 6152129d05870cb38162c422c6ba80434e021e9f with CUDA 12.3 maintains idle power.
Building the current version with CUDA 12.8 gives poor idle power characteristics.
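Here is a small helper around the nvidia-smi behaviour mentioned in the list above: query the idle power draw and reset a GPU that got stuck out of its low-power state. It assumes nvidia-smi is on PATH; the reset needs root and an otherwise idle GPU, and exact flag support can vary between driver versions.

```python
# Query idle power draw and reset a GPU stuck out of its low-power state,
# mirroring the nvidia-smi usage described above. Reset requires root and
# an idle GPU; flag support may vary by driver version.
import subprocess

def power_draw(gpu: int = 0) -> str:
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu),
         "--query-gpu=power.draw", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def reset_gpu(gpu: int = 0) -> None:
    # Restores low idle power after something has kicked the GPU out of it.
    subprocess.run(["nvidia-smi", "--gpu-reset", "-i", str(gpu)], check=True)

if __name__ == "__main__":
    print("Idle power:", power_draw(0))
```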
| 2025-04-25T14:25:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7m902/further_explorations_of_3090_idle_power/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7m902
| false | null |
t3_1k7m902
|
/r/LocalLLaMA/comments/1k7m902/further_explorations_of_3090_idle_power/
| false | false |
self
| 7 | null |
Whats the best OCR Workflow right now?
| 10 |
I want to scan a few documents I have. Feeding them into something like AI Studio gives good results, but sometimes also a few hallucinations. Is there any tool that can detect mistakes, or something like that?
| 2025-04-25T14:25:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7m9lq/whats_the_best_ocr_workflow_right_now/
|
johnnyXcrane
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7m9lq
| false | null |
t3_1k7m9lq
|
/r/LocalLLaMA/comments/1k7m9lq/whats_the_best_ocr_workflow_right_now/
| false | false |
self
| 10 | null |
Cache python packages from requirements.txt
| 0 |
Is there any way to cache the packages I download via a requirements.txt? I feel like whenever I try out a new UI or a new tool I am redownloading the same, generally huge packages over and over (looking at you torch). I am on Linux if that makes a difference.
| 2025-04-25T14:26:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7m9z0/cache_python_packages_from_requirementstxt/
|
Euchale
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7m9z0
| false | null |
t3_1k7m9z0
|
/r/LocalLLaMA/comments/1k7m9z0/cache_python_packages_from_requirementstxt/
| false | false |
self
| 0 | null |
Local AI has become my latest ADHD hyperfixation
| 1 |
[removed]
| 2025-04-25T14:31:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7metq/local_ai_has_become_my_latest_adhd_hyperfixation/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7metq
| false | null |
t3_1k7metq
|
/r/LocalLLaMA/comments/1k7metq/local_ai_has_become_my_latest_adhd_hyperfixation/
| false | false |
self
| 1 | null |
Art of prompting for straight raw efficient responses
| 1 |
Hi, I'm having some trouble with Bitnet:
How can I implement a stop token?

I want to get a straight translation (I've tried "to French as json", but no luck so far):

- no context
- no further explanations

python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf -p "Translate this sentence to French : 'Hello, how are you? the happy dog runs over the fence.', stop as the answer is given" -temp 0.1 -n 500 2>/dev/null

most of the time gives me:

Answer: 'Bonjour, comment ça va? le chien heureux courbe sur le palissade.'

<== How can I stop here and just get this single response sentence, before this text shows up:

Explanation: The translation of the sentence involves understanding the context and the meaning of each word in English and then finding the appropriate French equivalent ... but it is 100km long and repeats the same sentences in loops
Thanks a zillion times for any clue, notice, comment, enlightenment ;)
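One possible workaround, in case run_inference.py has no real stop-token flag: post-process the output and cut it off before the "Explanation:" ramble. This is only a sketch around the command shown above, not a feature of BitNet itself; adjust paths and the marker string for your setup.

```python
# Not a true stop token, just a post-processing workaround: run the BitNet
# inference script (same flags as the command above) and truncate the output
# before the model starts explaining itself.
import subprocess

cmd = [
    "python", "run_inference.py",
    "-m", "models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf",
    "-p", "Translate this sentence to French : 'Hello, how are you? "
          "the happy dog runs over the fence.'",
    "-temp", "0.1", "-n", "500",
]
out = subprocess.run(cmd, capture_output=True, text=True).stdout

# Keep only the part before the "Explanation:" marker.
answer = out.split("Explanation:")[0].strip()
print(answer)
```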
| 2025-04-25T14:39:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7ml3j/art_of_prompting_for_straight_raw_efficient/
|
ben74940x
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7ml3j
| false | null |
t3_1k7ml3j
|
/r/LocalLLaMA/comments/1k7ml3j/art_of_prompting_for_straight_raw_efficient/
| false | false |
self
| 1 | null |
What is the most cost-efficient LLM I can run in cloud?
| 1 |
[removed]
| 2025-04-25T14:49:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7mthd/what_is_the_most_costefficient_llm_i_can_run_in/
|
__shobber__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7mthd
| false | null |
t3_1k7mthd
|
/r/LocalLLaMA/comments/1k7mthd/what_is_the_most_costefficient_llm_i_can_run_in/
| false | false |
self
| 1 | null |
I have made a open source Claude desktop alternative
| 1 |
[removed]
| 2025-04-25T14:55:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7mz86/i_have_made_a_open_source_claude_desktop/
|
unknownstudentoflife
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7mz86
| false | null |
t3_1k7mz86
|
/r/LocalLLaMA/comments/1k7mz86/i_have_made_a_open_source_claude_desktop/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '4Qrtq3NqExau8SSNN_EajxxxlpeRgnlWlNFcEAP661Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=108&crop=smart&auto=webp&s=23942d548d49761451bc77d1c17530c299bdf974', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=216&crop=smart&auto=webp&s=71bd01a1b7108e20c8a19c7915cc46122bc51675', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=320&crop=smart&auto=webp&s=534fba2e4ac08df3219ab077e4b8449d7fb85349', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=640&crop=smart&auto=webp&s=9e982f0fc299a971f8aa886ddf54e764b0da90af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=960&crop=smart&auto=webp&s=85aa1308c43f6bd761c8c2f2f2bba3cb8730397e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?width=1080&crop=smart&auto=webp&s=dc472d7c3472673e400f40d8998a45f5b02411ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wcXPQFSDmFwNfBHjB7z-XgRUXVOZr7fe9ZgVPZt97Ds.jpg?auto=webp&s=7847489de88f51af292241154f9cf16d001d73fd', 'width': 1200}, 'variants': {}}]}
|
|
Local AI has become my latest ADHD hyperfixation
| 1 |
[removed]
| 2025-04-25T15:00:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7n2yv/local_ai_has_become_my_latest_adhd_hyperfixation/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7n2yv
| false | null |
t3_1k7n2yv
|
/r/LocalLLaMA/comments/1k7n2yv/local_ai_has_become_my_latest_adhd_hyperfixation/
| false | false |
self
| 1 | null |
Decent local AI for my old phone!
| 1 |
[removed]
| 2025-04-25T15:30:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1k7nti3/decent_local_ai_for_my_old_phone/
|
jadydady
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k7nti3
| false | null |
t3_1k7nti3
|
/r/LocalLLaMA/comments/1k7nti3/decent_local_ai_for_my_old_phone/
| false | false |
self
| 1 | null |