Column schema (from the dataset viewer):

| column | dtype | range / classes |
|---|---|---|
| title | string | lengths 1-300 |
| score | int64 | 0-8.54k |
| selftext | string | lengths 0-40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | lengths 0-878 |
| author | string | lengths 3-20 |
| domain | string | lengths 0-82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0-2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646-1.8k |
| name | string | length 10 |
| permalink | string | lengths 33-82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4-213 |
| ups | int64 | 0-8.54k |
| preview | string | lengths 301-5.01k |
Multimodal Semantic Search Made Easy
1
**TL;DR:** We’ve made multimodal semantic search easier and more accessible. Semantic search (retrieving data by meaning rather than keyword) is well understood and not too hard to prototype. But once you add images, video, production-grade storage, metadata, multiple vector spaces, etc., your pipeline quickly becomes more complex and harder to maintain. The usual steps are:

1. **Generate embeddings** for each modality (text, image, video)
2. **Store** text and metadata (e.g. timestamps, usernames)
3. **Upload** images/videos to object storage
4. **Index** each embedding in the right vector store
5. **Join** everything back together at query time

Before you know it, you’ve got data scattered across half a dozen services, plus custom glue code to link them all, and that’s just the tip of the iceberg. (If you’re curious, there’s a growing body of research on true multimodal search that digs into embedding alignment, cross-modal ranking, unified vector spaces, etc.) But in most apps, semantic search is just a tool, not the main feature that differentiates your app from others. Ideally, you shouldn’t spend much time building and maintaining it when you’d rather be shipping your real differentiators.

# CapyDB - A Chill Semantic Search

I’ve been tinkering with this in grad school as a “fun project” and have developed a solution. I named it CapyDB after the capybara, one of the most chill animals on earth. The key idea is simple: **make semantic search as easy as wrapping values in a JSON document with modality-aware helpers**. Below is an example: let's say we want to semantically retrieve a user profile saved in the database.
Wouldn't it be intuitive and easy if we could enable semantic search simply by "wrapping" target values in the JSON document, like below?

[Example usage of EmbJSON](https://preview.redd.it/zsbl0e3cy8xe1.png?width=1644&format=png&auto=webp&s=0aeaef37411e2a56e40daaacfec67915fbc135de)

What you see in the JSON document is called **EmbJSON** (more details are here), an extended JSON developed to embed semantic search directly into JSON documents. Think of it as a decoration in your JSON document that tells the database which field should be indexed in what way. By declaring your intent with `EmbText`, `EmbImage`, or `EmbVideo`, you tell CapyDB exactly which fields to embed and index. It handles:

* **Modality transitions**: it maps all modalities into a unified text representation space
* **Embedding generation** for each modality
* **Object storage** of raw images/videos
* **Vector indexing** in the correct vector store

# Key features

**Flexible schema**

With a traditional vector DB, configuration is per collection; for example, you can't use different embedding models in the same collection. With CapyDB, you can adjust embedding settings, such as the embedding model and chunk size, on a per-field basis. You can even have two different embedding models inside a single JSON collection:

[Example EmbJSON usage with multiple modalities in a single JSON](https://preview.redd.it/6t89hnb4y8xe1.png?width=1630&format=png&auto=webp&s=507f0604a4223dc592940ab1ab85ddc32dbb8781)

**Async by default**

CapyDB processes all embeddings asynchronously by default. No matter how big the data you're saving is, you get an instant response from the database, so you don't leave your users waiting. With a traditional database, you need an asynchronous worker and a message broker to process embeddings asynchronously; with CapyDB, that's already built in.
**Built-in object storage**

When saving media such as images, you typically need to store them in separate object storage. CapyDB has that built in. Moreover, it generates a URL for each image so you can render it on the client side without hassle.

# Summary

CapyDB has all the features you need to start with production-level semantic search. I’d love to get your thoughts. You can check out the docs here: [link to CapyDB docs](https://docs.capydb.com).
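The wrapping idea can be sketched in plain Python. This is a toy illustration, not the real CapyDB client: `EmbText`/`EmbImage` here are stand-in dataclasses, and `fields_to_index` mimics how a database could discover which fields to embed by walking the document. Check the docs for the actual API.

```python
from dataclasses import dataclass

@dataclass
class EmbText:
    value: str
    emb_model: str = "some-text-model"   # hypothetical per-field setting

@dataclass
class EmbImage:
    url: str
    emb_model: str = "some-image-model"  # hypothetical per-field setting

def fields_to_index(doc: dict, prefix: str = "") -> dict:
    """Walk a JSON-like dict and collect wrapped fields with their modality."""
    found = {}
    for key, val in doc.items():
        path = prefix + key
        if isinstance(val, (EmbText, EmbImage)):
            found[path] = type(val).__name__
        elif isinstance(val, dict):
            found.update(fields_to_index(val, path + "."))
    return found

profile = {
    "name": "capybara_fan",  # plain field: stored, never embedded
    "bio": EmbText("Grad student who loves chill semantic search"),
    "avatar": EmbImage("https://example.com/capy.png"),
}
print(fields_to_index(profile))  # {'bio': 'EmbText', 'avatar': 'EmbImage'}
```

The point of the per-field wrappers is that embedding settings travel with the data, which is what makes the per-field model choice above possible.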
2025-04-27T01:29:02
https://www.reddit.com/r/LocalLLaMA/comments/1k8siiy/multimodal_semantic_search_made_easy/
Available_Ad_5360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8siiy
false
null
t3_1k8siiy
/r/LocalLLaMA/comments/1k8siiy/multimodal_semantic_search_made_easy/
false
false
https://b.thumbs.redditm…MPZaqnpeJIrg.jpg
1
null
Have we hit peak LLM, or is this a blip in the data?
0
I've been tracking use of OpenRouter tokens as a proxy for real-world usage of LLMs. There has been a recent downtrend; do you think we will continue to observe exponential growth in usage, or have we reached the saturation point?
2025-04-27T01:45:27
https://i.redd.it/xcs81i6v8axe1.png
drwebb
i.redd.it
1970-01-01T00:00:00
0
{}
1k8stec
false
null
t3_1k8stec
/r/LocalLLaMA/comments/1k8stec/have_we_hit_peek_llm_or_a_blip_in_the_data/
false
false
https://b.thumbs.redditm…4M962vypDnRY.jpg
0
{'enabled': True, 'images': [{'id': 'ehtFHz_M_eHPcsoBJd6lZZdOwO6ArwjgZ1Wv2LNuvtM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=108&crop=smart&auto=webp&s=a94517c6156b681011130f8fc22f694a23a96503', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=216&crop=smart&auto=webp&s=248c894cbcec6590614236bf1a701d7cb7fef272', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=320&crop=smart&auto=webp&s=d253f3a1158c8395052cbfac689627eff25b1984', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=640&crop=smart&auto=webp&s=0ac6cba183a0175eeb42b3f472fcc09d8380c20e', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=960&crop=smart&auto=webp&s=65449d05d0225bf3ce8ac060d0d5ba95fa746cee', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?width=1080&crop=smart&auto=webp&s=22f5cd19364cfa52bad21ef556bdeee84ac756c2', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/xcs81i6v8axe1.png?auto=webp&s=cde8f671db9691255b657df44d2dc2c679a40541', 'width': 1080}, 'variants': {}}]}
I built a Chrome Extension (WebAI) to Chat with Webpages Using Your Local LLMs
31
Hey r/LocalLLaMA folks! I wanted to share a Chrome extension I've been working on called **WebAI**. The idea is simple: browse to any webpage, pop open the extension, and you can get an AI-powered summary, ask questions about the content, or listen to a spoken answer, all using **your own local LLM** (like Ollama) and local Kokoro voice generation. Demo: https://reddit.com/link/1k8sycx/video/juzws2qp9axe1/player Here's what it does:
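The extension's source isn't shown in the post, but the core loop of such a tool, sending page text plus a question to a local Ollama server, can be sketched in Python. The endpoint and request shape are Ollama's standard `/api/generate` API; the prompt template and default model name are my own placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(page_text: str, question: str, model: str = "llama3") -> dict:
    """Pack the scraped page and the user's question into one prompt."""
    prompt = (
        "You are answering questions about a web page.\n\n"
        f"PAGE CONTENT:\n{page_text}\n\n"
        f"QUESTION: {question}\nANSWER:"
    )
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(page_text: str, question: str, model: str = "llama3") -> str:
    """POST to a running Ollama server (`ollama serve`) and return the answer."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(page_text, question, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes to `localhost`, the page content never leaves your machine, which is the whole appeal of the local-LLM setup.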
2025-04-27T01:52:55
https://www.reddit.com/r/LocalLLaMA/comments/1k8sycx/i_built_a_chrome_extension_webai_to_chat_with/
solidavocadorock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8sycx
false
null
t3_1k8sycx
/r/LocalLLaMA/comments/1k8sycx/i_built_a_chrome_extension_webai_to_chat_with/
false
false
self
31
{'enabled': False, 'images': [{'id': 'o7moRBeeC8hI1Oh-2QqXHyHW08iHIJ6vqrvkjHzsYMs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=108&crop=smart&auto=webp&s=41075eb7b0399fb4db43ff764c38aeaeebcb2ebd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=216&crop=smart&auto=webp&s=eae3d20518f2e992f53840776203f4554f4241b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=320&crop=smart&auto=webp&s=96b859aab39121b836d209d9b138e150dfdef534', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=640&crop=smart&auto=webp&s=c80daa19b9a99c501def3a67c7c50ba1e7d0d810', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=960&crop=smart&auto=webp&s=7601f0ab7385976e0a69b1f3834b1baa9c6366ef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?width=1080&crop=smart&auto=webp&s=b0156d542205f960b8dd1ea2440e99443ccd63f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GzsnSX_znc2F3IG5HYUyhLg93Pt10YnH5n0RlHy-BHs.jpg?auto=webp&s=1ef626a1170e3d5ccaf9d9284dd47d860ff6fbe4', 'width': 1200}, 'variants': {}}]}
Multi agent AI
1
[removed]
2025-04-27T02:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1k8t8f5/multi_agent_ai/
committedAF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8t8f5
false
null
t3_1k8t8f5
/r/LocalLLaMA/comments/1k8t8f5/multi_agent_ai/
false
false
self
1
null
New Reasoning Model from NVIDIA (AIME is getting saturated at this point!)
98
(disclaimer: it's just a Qwen2.5 32B fine-tune)
2025-04-27T02:08:41
https://huggingface.co/nvidia/OpenMath-Nemotron-32B
random-tomato
huggingface.co
1970-01-01T00:00:00
0
{}
1k8t8z9
false
null
t3_1k8t8z9
/r/LocalLLaMA/comments/1k8t8z9/new_reasoning_model_from_nvidia_aime_is_getting/
false
false
https://b.thumbs.redditm…0DVXatqWaxwQ.jpg
98
{'enabled': False, 'images': [{'id': 'G6DiXFdc7z8iR00FCcShvfaqmaJipj0LbW---aFbDAA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=108&crop=smart&auto=webp&s=61c56adcf8a5f2c683913522452243e25db90500', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=216&crop=smart&auto=webp&s=f0c9bde86ccb49cd995cb10bf0f57f24846093cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=320&crop=smart&auto=webp&s=351e31e40deafc15040800d92fdca35b8fdac38f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=640&crop=smart&auto=webp&s=4cd45bd18ce212c176a456f3dc0390fe542b9c06', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=960&crop=smart&auto=webp&s=a32b479a3f217a873200f61f66967ba78427984c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?width=1080&crop=smart&auto=webp&s=750a358f42f4193c85e43c14d0164262f23788d0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6doJTf1GdAI5T-9SwoTSuPFBmq9PsTM3q-ChSWlGb_o.jpg?auto=webp&s=5913304eab75a9136f4681cb5e734797bfa95b36', 'width': 1200}, 'variants': {}}]}
Trying to understand chunked prefill scheduling policy for vLLM
9
I've already perused https://docs.vllm.ai/en/latest/performance/optimization.html and I believe I understand the basic concepts of what prefill and decoding are, plus the general concepts of pipelined inference and dynamic batching. Nevertheless, I have the following questions:

- Suppose that my prefills are usually small, say 256 tokens. What does it mean for me to set `max_num_batched_tokens` as high as 4096? Will the scheduler wait for 16 prefills to be scheduled, and then compute them all at once?

- As I understand it, the output of a prefill operation is the KV cache for the tokens in the prefill, so consider what happens after those prefills are computed, and suppose you don't have enough memory to hold 16 KV caches at once for the whole decode operation. Since for every prefill operation you also need to do a decode operation, and the decode operations may take way more space, don't we have to evict the prefilled operations? If so, what was the point of computing them? If we can evict them to something like CPU memory, does that really save any time (since, as I understand it, inference is typically bound by I/O between the GPU memory bus and the compute cores, let alone the presumably much longer I/O time between CPU and GPU)?

- If my output sequences are on the order of thousands of tokens (as they would be for a reasoning model), will the difference in performance due to the changed scheduling policy be effectively negligible? Is there any situation in which it is actually worse (e.g. due to movement of memory)?

- Finally, and a bit unrelatedly, suppose that I want to run inference on ten copies of the same prompt. I can benefit from the fact that all ten prefills are the same, but from there, there will not be any benefit to the runtime of the decode stage, right? (Also, how do I benefit from the fact that all ten prefills are the same with vLLM?)
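On the first question, my understanding is that the scheduler does not wait: on each step it greedily packs work into the `max_num_batched_tokens` budget, chunking a prefill if it doesn't fit. A toy simulation of just that budget arithmetic (not vLLM's actual scheduler, which also handles KV-cache admission, preemption, etc.):

```python
def schedule_step(waiting_prefills, num_running_decodes, max_num_batched_tokens=4096):
    """One toy scheduler step: each running decode costs 1 token of budget,
    then waiting prefills are packed greedily, chunked to whatever fits."""
    batch = [("decode", 1)] * num_running_decodes
    budget = max_num_batched_tokens - num_running_decodes
    for prompt_len in waiting_prefills:
        if budget <= 0:
            break
        chunk = min(prompt_len, budget)  # chunked prefill: take the part that fits
        batch.append(("prefill", chunk))
        budget -= chunk
    return batch

# 256-token prefills do not wait for 16 peers; they simply fill the 4096 budget
batch = schedule_step([256] * 20, num_running_decodes=0)
print(len([b for b in batch if b[0] == "prefill"]), sum(t for _, t in batch))  # 16 4096

# an oversized prompt is split across steps rather than blocking them
print(schedule_step([8000], num_running_decodes=0))  # [('prefill', 4096)]
```

So with 256-token prefills and a 4096 budget, up to 16 of whatever is *already waiting* can be batched in one step, but a lone prefill runs immediately.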
2025-04-27T02:35:59
https://www.reddit.com/r/LocalLLaMA/comments/1k8tqm3/trying_to_understand_chunked_prefill_scheduling/
lechatonnoir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8tqm3
false
null
t3_1k8tqm3
/r/LocalLLaMA/comments/1k8tqm3/trying_to_understand_chunked_prefill_scheduling/
false
false
self
9
null
[D] Which changes LLMs more, SFT or RL methods?
0
For LLMs, the training process is pre-train -> SFT -> RL. Based on my understanding, SFT is meant to make LLMs able to solve specific tasks, like coding or following instructions, while RL is meant to make LLMs express themselves more like humans. If that's correct, SFT changes an LLM's parameters more than RL methods do. My question is: if I do SFT on a model that has already been through SFT and RL, would I destroy its RL performance? Or are there any arguments that validate or refute my thinking? Thanks very much.
2025-04-27T03:34:30
https://www.reddit.com/r/LocalLLaMA/comments/1k8urbh/d_which_change_llms_more_sft_or_rlmothods/
Logical_Divide_3595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8urbh
false
null
t3_1k8urbh
/r/LocalLLaMA/comments/1k8urbh/d_which_change_llms_more_sft_or_rlmothods/
false
false
self
0
null
Truly self-evolving AI agent
0
chat AI (2023) -> AI agent (2024) -> MCP (early 2025) -> ??? (2025\~) So... for an AI agent to be truly self-evolving, it has to have access to modify ITSELF, not only the outside world it interacts with. This means it has to be able to modify its own source code. The most straightforward way to do this is to give the AI a whole server to run itself on, with the ability to scan its source code, modify it, and reboot the server to "update" its version. If things go well, this would show us something interesting.
2025-04-27T04:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1k8vy1v/truly_selfevolving_ai_agent/
Available_Ad_5360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8vy1v
false
null
t3_1k8vy1v
/r/LocalLLaMA/comments/1k8vy1v/truly_selfevolving_ai_agent/
false
false
self
0
null
Runtime Identity Drift in LLMs — Can We Stabilize Without Memory?
2
I’ve been working on stabilizing role identity in LLM outputs over long interactions, without relying on memory, logs, or retraining.

Problem: Most multi-agent chains and LLM workflows suffer from role drift and behavioral collapse after a few hundred turns. Context windowing and prompt engineering only delay the inevitable.

Experiment: I built a runtime coherence layer (called SAGE) that maintains behavioral identity using real-time feedback signals (Cr, ∆Cr, RTR), without storing past interactions.

> https://i.redd.it/2jd1j8kecbxe1.gif

> https://preview.redd.it/wp5z7ysfcbxe1.png?width=1000&format=png&auto=webp&s=22e7eb38d9d9bd0fe0cfe5a344d95656596c7d5f

P.S.: I am currently seeking **academic validation** of the runtime model through collaboration with university research labs. If any research teams, lab members, or independent researchers are interested:

* I can provide a **secure demo version** of the system for evaluation purposes.
* In exchange, I would request a **brief written technical assessment** (positive or critical) from the lab or research group.

I'll leave a couple of links if you're interested in the details: [SAGE Runtime Demo | GitHub](https://github.com/Edgeev/SAGE-AI-Layer-0-AGI-runtime-LLM)
2025-04-27T05:32:31
https://www.reddit.com/r/LocalLLaMA/comments/1k8wnr3/runtime_identity_drift_in_llms_can_we_stabilize/
Robin898989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8wnr3
false
null
t3_1k8wnr3
/r/LocalLLaMA/comments/1k8wnr3/runtime_identity_drift_in_llms_can_we_stabilize/
false
false
https://b.thumbs.redditm…had5429W2ecY.jpg
2
{'enabled': False, 'images': [{'id': '_UTOV02Xu5ZneXPhUlw3gyCL3D_K8yc-wVIhlkrDh5U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=108&crop=smart&auto=webp&s=aa609cfd515e33dac1e2fedb6fec2bd9b8b338df', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=216&crop=smart&auto=webp&s=9224eeac14f64fde5f007fe6d1030d1864c21e89', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=320&crop=smart&auto=webp&s=1a6cde9ddb04007a91760d14ebbbc3b1f9bf8897', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=640&crop=smart&auto=webp&s=b40883842695cee0f4555eab87f3d5e6cf9fbb3c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=960&crop=smart&auto=webp&s=f8147a5d04dc6ad8c2f9c532f1d959a7f56d8d4e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?width=1080&crop=smart&auto=webp&s=9b0bac7eb16bf5c8aa63b320896d24f064063f41', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MNDlgb8UgOhx7QZF9vEw-lt8d-BPNboqQLI5MVZ0v_A.jpg?auto=webp&s=479ed5287af31e86702bee700eccfb8dc8910b27', 'width': 1200}, 'variants': {}}]}
Runtime Identity Drift in LLMs — Can We Stabilize Without Memory?
6
I’ve been working on stabilizing role identity in LLM outputs over long interactions — without relying on memory, logs, or retraining. Problem: Most multi-agent chains and LLM workflows suffer from role drift and behavioral collapse after a few hundred turns. Context windowing and prompt engineering only delay the inevitable. https://i.redd.it/2jd1j8kecbxe1.gif Experiment: I built a runtime coherence layer (called SAGE) that maintains behavioral identity using real-time feedback signals (Cr, ∆Cr, RTR) — without storing past interactions. https://preview.redd.it/wp5z7ysfcbxe1.png?width=1000&format=png&auto=webp&s=22e7eb38d9d9bd0fe0cfe5a344d95656596c7d5f Actually now, I feel a bit like the early creators of LoRA — trying to push an idea that doesn’t yet have “official” academic traction. I’ve also recorded a couple of **live test runs** (posted on YouTube) where you can see the behavior under drift pressure — happy to share links if you’re curious. P.S: I am currently seeking **academic validation** of the runtime model through collaboration with university research labs. If any research teams, lab members, or independent researchers are interested: * I can provide a **secure demo version** of the system for evaluation purposes. * In exchange, I would request a **brief written technical assessment** (positive or critical) from the lab or research group. I can drop links to videos, reports, and demos in the comments.
2025-04-27T05:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1k8wvop/runtime_identity_drift_in_llms_can_we_stabilize/
Robin898989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8wvop
false
null
t3_1k8wvop
/r/LocalLLaMA/comments/1k8wvop/runtime_identity_drift_in_llms_can_we_stabilize/
false
false
self
6
null
Overwhelmed by the number of Gemma 3 27B QAT variants
82
For the Q4 quantization alone, I found 3 variants:

* `google/gemma-3-27b-it-qat-q4_0-gguf`, official release, 17.2GB, seems to have some token-related issues according to this [discussion](https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf/discussions/3)
* `stduhpf/google-gemma-3-27b-it-qat-q4_0-gguf-small`, requantized, 15.6GB, claims to fix the issues mentioned above.
* `jaxchang/google-gemma-3-27b-it-qat-q4_0-gguf-fix`, further derived from stduhpf's variant, 15.6GB, claims to fix some more issues?

Even more variants are derived from `google/gemma-3-27b-it-qat-q4_0-unquantized`:

* `bartowski/google_gemma-3-27b-it-qat-GGUF` offers llama.cpp-specific quantizations from Q2 to Q8.
* `unsloth/gemma-3-27b-it-qat-GGUF` also offers Q2 to Q8 quantizations, and I can't figure out what they have changed because the model description looks like copy-pasta.

How am I supposed to know which one to use?
2025-04-27T06:16:58
https://www.reddit.com/r/LocalLLaMA/comments/1k8xb3k/overwhelmed_by_the_number_of_gemma_3_27b_qat/
iwinux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8xb3k
false
null
t3_1k8xb3k
/r/LocalLLaMA/comments/1k8xb3k/overwhelmed_by_the_number_of_gemma_3_27b_qat/
false
false
self
82
{'enabled': False, 'images': [{'id': 'gnoSHQF7rXglfA8pHbpnF-VvHqjLRP6y-NWIdvzauB8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=108&crop=smart&auto=webp&s=4269d49975c825c0dbc4a13759d243707ac0a253', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=216&crop=smart&auto=webp&s=e68095a5a27d9ddbeb22f3d6c706aa2168cfd050', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=320&crop=smart&auto=webp&s=72f4eb739cbcb8c164a9d5c8054a8ec5f3e7c1f8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=640&crop=smart&auto=webp&s=379d21770137de10c51e9c0ce42b20a48fa5f9fd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=960&crop=smart&auto=webp&s=2c982361d630b9d2a2ccae08be736e117fba669f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?width=1080&crop=smart&auto=webp&s=eb1e84b865db9818ab24e1caee1e6d6781247b47', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9Ni7p7vG1ypYJSOztD3eZzTd-4rVFPbWgFMXgQf2WmM.jpg?auto=webp&s=b0780bbf44c3f7e4a7c1efe2f7c102017c7310d7', 'width': 1200}, 'variants': {}}]}
[ANNOUNCEMENT] 🚀 Behold, an AI Assistant That Literally Only Works for Chicken Nuggets (and we're not even sorry)
1
[removed]
2025-04-27T06:32:07
https://www.reddit.com/r/LocalLLaMA/comments/1k8xj11/announcement_behold_an_ai_assistant_that/
LsDmT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8xj11
false
null
t3_1k8xj11
/r/LocalLLaMA/comments/1k8xj11/announcement_behold_an_ai_assistant_that/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
🚀 [Release] llama-cpp-python 0.3.8 (CUDA 12.8) Prebuilt Wheel + Full Gemma 3 Support (Windows x64)
57
Hi everyone, After a lot of work, I'm excited to share a **prebuilt CUDA 12.8 wheel** for **llama-cpp-python (version 0.3.8)** — built specifically for **Windows 10/11 (x64)** systems! # ✅ Highlights: * **CUDA 12.8 GPU acceleration** fully enabled * **Full Gemma 3 model support** (1B, 4B, 12B, 27B) * **Built against llama.cpp b5192** (April 26, 2025) * **Tested and verified** on a dual-GPU setup (3090 + 4060 Ti) * **Working production inference** at **16k context length** * **No manual compilation** needed — just `pip install` and you're running! # 🔥 Why This Matters Building `llama-cpp-python` with CUDA on Windows is notoriously painful — CMake configs, Visual Studio toolchains, CUDA paths... it’s a nightmare. This wheel **eliminates all of that**: * No CMake. * No Visual Studio setup. * No manual CUDA environment tuning. **Just download the** `.whl`**, install with pip, and you're ready to run Gemma 3 models on GPU immediately.** # ✨ Notes * I haven't been able to find **any other prebuilt llama-cpp-python wheel** supporting **Gemma 3 + CUDA 12.8** on Windows — so I thought I'd post this ASAP. * I know you Linux folks are way ahead of me — but hey, now Windows users can play too! 😄
2025-04-27T06:53:42
https://github.com/boneylizard/llama-cpp-python-cu128-gemma3/releases
Gerdel
github.com
1970-01-01T00:00:00
0
{}
1k8xu4d
false
null
t3_1k8xu4d
/r/LocalLLaMA/comments/1k8xu4d/release_llamacpppython_038_cuda_128_prebuilt/
false
false
https://b.thumbs.redditm…FYY-Ip9ecCjw.jpg
57
{'enabled': False, 'images': [{'id': 'E8h3OWxs5wdnP6EoEOMm6vgkmD4dzyURFSxQE9uklIY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=108&crop=smart&auto=webp&s=075cade53df95d2e7d4d6065a4a9f122d38fd31e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=216&crop=smart&auto=webp&s=4be9c5afba12d81e8b351c48f5feec0e05f1c3b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=320&crop=smart&auto=webp&s=551aa46073a66d67a4f60bd5c6a4ffbfff6ff186', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=640&crop=smart&auto=webp&s=984dc75cd6fc1d4afcf245873710ed59ecf086db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=960&crop=smart&auto=webp&s=2c1ee9fd13f401e3d1c7e2def1b92300903ef0fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?width=1080&crop=smart&auto=webp&s=662c4f95293339209455d0a611bcee540b3ebc2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fEQ0sRkP5Cc9pEvs5-UwG3ZTsuSDMCRcakJgt0TeA4k.jpg?auto=webp&s=743748c4e5a9d567ee3580bf186697fd7cf8c208', 'width': 1200}, 'variants': {}}]}
Finally got ~10t/s DeepSeek V3-0324 hybrid (FP8+Q4_K_M) running locally on my RTX 4090 + Xeon with 512GB RAM, KTransformers and 32K context
202
Hey everyone, just wanted to share a fun project I have been working on. I managed to get DeepSeek V3-0324 onto my single RTX 4090 + Xeon box with 512 GB RAM, using a clever FP8+GGUF hybrid trick from KTransformers.

Attention & FF layers on GPU (FP8): cuts VRAM down to \~24 GB, so your 4090 can handle the critical parts lightning fast.

Expert weights on CPU (4-bit GGUF): all the huge MoE banks live in system RAM and load as needed.

End result: I’m seeing about \~10 tokens/sec with a 32K context window, pretty smooth for local tinkering. KTransformers made it so easy with its Docker image. It handles the FP8 kernels under the hood and shuffles data between CPU/GPU token by token. I posted a Llama-4 Maverick run on KTransformers a couple of days back and got good feedback on here, so I am sharing this build as well, in case it helps anyone out!

My build:

* Motherboard: ASUS Pro WS W790E-SAGE SE. Why this board? 8-channel DDR5 ECC RAM; I have 8x64 GB ECC DDR5 RAM at 4800MHz
* CPU with AI & ML Boost: Engineering Sample QYFS (56C/112T!)

I consistently get 9.5-10.5 tokens per second for decode, and 40-50 for prefill. If you would like to check out the YouTube video of the run: [https://www.youtube.com/watch?v=oLvkBZHU23Y](https://www.youtube.com/watch?v=oLvkBZHU23Y) My hardware build and reasoning for picking this board: [https://www.youtube.com/watch?v=r7gVGIwkZDc](https://www.youtube.com/watch?v=r7gVGIwkZDc)
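Back-of-envelope arithmetic for why this split fits. The round numbers below are my own assumptions, not from the post: ~671B total parameters for DeepSeek V3, ~25B assumed GPU-resident (to match the reported \~24 GB VRAM), and ~4.5 effective bits/weight for Q4_K_M:

```python
def gib(num_params, bits_per_weight):
    """Approximate storage for a parameter count at a given precision, in GiB."""
    return num_params * bits_per_weight / 8 / 2**30

TOTAL_PARAMS = 671e9   # rough DeepSeek V3 total parameter count
GPU_RESIDENT = 25e9    # assumed GPU-resident share (attention/FF), matching the ~24 GB report
EXPERT_PARAMS = TOTAL_PARAMS - GPU_RESIDENT

print(f"experts in RAM, ~4.5 bpw (Q4_K_M): {gib(EXPERT_PARAMS, 4.5):.0f} GiB")  # well under 512 GB
print(f"GPU-resident layers, FP8 (8 bpw):  {gib(GPU_RESIDENT, 8):.0f} GiB")     # fits a 24 GB 4090
```

So the quantized expert banks land in the low-to-mid 300 GiB range, which is why 512 GB of system RAM leaves comfortable headroom for the KV cache and OS.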
2025-04-27T07:02:36
https://www.reddit.com/r/LocalLLaMA/comments/1k8xyvp/finally_got_10ts_deepseek_v30324_hybrid_fp8q4_k_m/
texasdude11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8xyvp
false
null
t3_1k8xyvp
/r/LocalLLaMA/comments/1k8xyvp/finally_got_10ts_deepseek_v30324_hybrid_fp8q4_k_m/
false
false
self
202
{'enabled': False, 'images': [{'id': 'UERt3P8m_Q0EVWC9vTGs9DiJimedHCeVAnjAsyy7ugM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rCN53aqWHZcxQWx8S9QQB-0OLrhXDJ2zealFngKCVOs.jpg?width=108&crop=smart&auto=webp&s=69c2ed1bd7fc612ad33663f4ddd27f4893ce2d4b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rCN53aqWHZcxQWx8S9QQB-0OLrhXDJ2zealFngKCVOs.jpg?width=216&crop=smart&auto=webp&s=c0ecd601b9bfb75aa083923a092d55e3b8193580', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rCN53aqWHZcxQWx8S9QQB-0OLrhXDJ2zealFngKCVOs.jpg?width=320&crop=smart&auto=webp&s=185a34e3b9fb39ab8ab14b3acae271d2a112541d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rCN53aqWHZcxQWx8S9QQB-0OLrhXDJ2zealFngKCVOs.jpg?auto=webp&s=90efd12dc403eef3c58e48d5cf5962f6e240958c', 'width': 480}, 'variants': {}}]}
Where can I host an LLM model like Gemma 3 for free and create an API for it?
1
[removed]
2025-04-27T07:36:47
https://www.reddit.com/r/LocalLLaMA/comments/1k8yg9h/where_can_i_host_an_llm_model_like_gemma_3_for/
PumpkinNarrow6339
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8yg9h
false
null
t3_1k8yg9h
/r/LocalLLaMA/comments/1k8yg9h/where_can_i_host_an_llm_model_like_gemma_3_for/
false
false
self
1
null
TNG Tech releases Deepseek-R1-Chimera, adding R1 reasoning to V3-0324
267
>Today we release DeepSeek-R1T-Chimera, an open weights model adding R1 reasoning to [@deepseek\_ai](https://x.com/deepseek_ai) V3-0324 with a novel construction method. In benchmarks, it appears to be as smart as R1 but much faster, using 40% fewer output tokens. The Chimera is a child LLM, using V3's shared experts augmented with a custom merge of R1's and V3's routed experts. It is not a finetune or distillation, but is constructed from neural network parts of both parent MoE models. A bit surprisingly, we did not detect defects in the hybrid child model. Instead, its reasoning and thinking processes appear to be more compact and orderly than the sometimes very long and wandering thoughts of the R1 parent model. Model weights are on [@huggingface](https://x.com/huggingface), just a little late for [\#ICLR2025](https://x.com/hashtag/ICLR2025?src=hashtag_click). Kudos to [@deepseek\_ai](https://x.com/deepseek_ai) for V3 and R1! [https://x.com/tngtech/status/1916284566127444468](https://x.com/tngtech/status/1916284566127444468)
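TNG describes the construction only as a "novel method," so the details are not public. Purely as a toy illustration of assembling a child checkpoint from two parent MoE checkpoints, here is a sketch where shared experts come straight from V3 and routed experts are an elementwise blend; the real Chimera's merge is certainly more sophisticated than plain interpolation, and the tensor names here are invented:

```python
def merge_checkpoints(v3, r1, alpha=0.5):
    """Toy MoE merge: shared experts copied from V3, routed experts blended.
    `alpha` weights R1's contribution; tensors are plain lists of floats."""
    child = {}
    for name, v3_w in v3.items():
        if "shared_expert" in name:
            child[name] = list(v3_w)                      # straight from V3
        elif "routed_expert" in name:
            child[name] = [alpha * a + (1 - alpha) * b    # elementwise blend
                           for a, b in zip(r1[name], v3_w)]
        else:
            child[name] = list(v3_w)                      # attention etc. from V3
    return child

v3 = {"layer0.shared_expert.w": [1.0, 2.0], "layer0.routed_expert0.w": [0.0, 0.0]}
r1 = {"layer0.shared_expert.w": [9.0, 9.0], "layer0.routed_expert0.w": [2.0, 4.0]}
child = merge_checkpoints(v3, r1)
print(child["layer0.routed_expert0.w"])  # [1.0, 2.0]
```

The key point the tweet makes survives even in this toy: no gradient steps are involved, the child is assembled directly from the parents' weights.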
2025-04-27T07:44:51
https://huggingface.co/tngtech/DeepSeek-R1T-Chimera
ayyndrew
huggingface.co
1970-01-01T00:00:00
0
{}
1k8yk8w
false
null
t3_1k8yk8w
/r/LocalLLaMA/comments/1k8yk8w/tng_tech_releases_deepseekr1chimera_adding_r1/
false
false
https://a.thumbs.redditm…HzJN0BANgoz4.jpg
267
{'enabled': False, 'images': [{'id': 'j1bMoAe_doYM8szE_jm7F6Ezt1siEfHxMPgniDv0Fms', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=108&crop=smart&auto=webp&s=b2ddc3a7344c48b00f7873c976a2864286ab328e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=216&crop=smart&auto=webp&s=8f8ee2ed619734f73339ae31e1d6dd287f802bc2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=320&crop=smart&auto=webp&s=0b247c54b420cc1bd763e846195310931532c0e6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=640&crop=smart&auto=webp&s=3fa5483c63f6faf71fe54d107797d498abbdd369', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=960&crop=smart&auto=webp&s=f334e7f4efb7cf9e4bd1f469e0f282dd3aac6170', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?width=1080&crop=smart&auto=webp&s=b9a9bffd796aa7893bd3f77fc3597dfe0a45b6de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1No-ofpBVwLtrVVqwYujJ6PJAf7A4c3ZgbbSJrDaop0.jpg?auto=webp&s=7402b39d58c79204d411b3ffa4ded234af490dc5', 'width': 1200}, 'variants': {}}]}
Hybrid LLM-SLM Agent Architecture for Domain-Specific Applications (project)
1
I believe Small Language Models (SLMs) will become increasingly capable and can be fine-tuned for niche/personal use cases, collaborating with agents and Large Language Models (LLMs) for better results. They might become a core part of Agent As A Service (AaaS) or Personalized Agent as a Service. I've been working on this side project for fun and learning. While not perfect, it's functional and appears to generate higher quality output than using a general-purpose chatbot directly. As a college student without a medical background, I can't fully evaluate the accuracy, but the architecture shows promise. **Current Stack:** * LLM: Llama 3.3 70B versatile (via Groq) * SLM: LightEternal-Llama3-Merge-Biomed-8B-GGUF(medical fine-tuned, via Ollama) The primary focus is the **system architecture**, designed for adaptability and efficiency: 1. **Orchestration Core:** An initial agent assesses query complexity. For complex queries, it dynamically selects **only the necessary downstream agents** and decomposes the main task into specific sub-tasks for each selected agent. This optimizes resource use. 2. **Modular Agent Design (LangGraph):** The current implementation includes agents for web search (Tavily), domain-specific knowledge (Medical SLM), compilation, and quality control/reflection (also SLM-driven). This graph structure allows straightforward addition or replacement of agents for different domains (e.g., finance, legal). Parallel execution is utilized where feasible. 3. **Specialized SLM Integration:** The system employs a fine-tuned medical SLM for high-fidelity domain tasks and quality assurance (reflection). 4. **Hypothesis on SLMs:** This project supports the view that specialized SLMs can function effectively as expert components – acting as filters, validators, or focused knowledge sources – within larger LLM-driven or agentic systems, particularly for niche applications. I'm using Ollama to run the fine-tuned model locally. 
(Note: LangChain doesn't support structured output for Ollama, so I had to implement it myself, which produces type errors in the logs, but everything works fine.)
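The orchestration flow in point 1 can be sketched as follows. The agent names and the complexity heuristic below are invented stand-ins; the real project routes via LangGraph with an LLM doing the assessment:

```python
# Illustrative sketch of the orchestration core: a router assesses query
# complexity and dispatches only the agents it selects. The heuristic and
# agents here are toy stand-ins for the LLM/SLM-backed components.

def assess_complexity(query: str) -> str:
    # Toy heuristic standing in for the LLM-based assessment.
    return "complex" if len(query.split()) > 8 else "simple"

AGENTS = {
    "web_search":  lambda q: f"[web results for: {q}]",
    "medical_slm": lambda q: f"[domain answer for: {q}]",
    "compiler":    lambda parts: " | ".join(parts),
}

def orchestrate(query: str) -> str:
    if assess_complexity(query) == "simple":
        return AGENTS["medical_slm"](query)      # single agent suffices
    # Complex query: fan out to the selected agents, then compile.
    parts = [AGENTS["web_search"](query), AGENTS["medical_slm"](query)]
    return AGENTS["compiler"](parts)
```

In LangGraph this routing would be a conditional edge out of the assessment node, with the selected agents running as parallel branches.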
2025-04-27T07:59:29
https://v.redd.it/t7g7gbfb3cxe1
DeathShot7777
/r/LocalLLaMA/comments/1k8yrbe/hybrid_llmslm_agent_architecture_for/
1970-01-01T00:00:00
0
{}
1k8yrbe
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t7g7gbfb3cxe1/DASHPlaylist.mpd?a=1748462373%2CZDg2MDlhMzY4YzZjODZlZjY0NTYzMTdkYTg3MDExMDdkNWQ5NWQyMTE5ZDE0YmViZTc2YjYyMDNjZGZiNzY1Yg%3D%3D&v=1&f=sd', 'duration': 160, 'fallback_url': 'https://v.redd.it/t7g7gbfb3cxe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/t7g7gbfb3cxe1/HLSPlaylist.m3u8?a=1748462373%2CMzI3ODE1MDMwZWQ3ZDU0YzUwNDRhY2I3MWI4ODZlNzUzMWRiM2RhNjVkYTRiMDhlMTZjYmE4YmYxYzJkOWYyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t7g7gbfb3cxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k8yrbe
/r/LocalLLaMA/comments/1k8yrbe/hybrid_llmslm_agent_architecture_for/
false
false
https://external-preview…a3c703b910474cc1
1
{'enabled': False, 'images': [{'id': 'a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=108&crop=smart&format=pjpg&auto=webp&s=ccf6a26be43d6321da8899d49ff26d8f6efec444', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=216&crop=smart&format=pjpg&auto=webp&s=0baa1ac1cf89ba65597c76034def3074387bd513', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=320&crop=smart&format=pjpg&auto=webp&s=2a0f927d59010efc43096e2c6e029f932c61f445', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=640&crop=smart&format=pjpg&auto=webp&s=84c0abbbe76d0b894af12e9c291a3cf277110786', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=960&crop=smart&format=pjpg&auto=webp&s=e4fa77dac4eeac96aaee3f2bf515d17ee690f8bf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?width=1080&crop=smart&format=pjpg&auto=webp&s=204dfc8f50f6de591374f64177f7603f6cee2e44', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/a3h0Y3ljZmIzY3hlMa0UJxFCpiVcwZoxskjwZWl_Fr0hhl5zfl5fC1D3ey4f.png?format=pjpg&auto=webp&s=3856dd1b85a82cd915e95e627614fa06edc0e238', 'width': 2560}, 'variants': {}}]}
Made Mistral 24B code like a senior dev by making it recursively argue with itself
142
Been experimenting with local models lately and built something that dramatically improves their output quality without fine-tuning or fancy prompting. I call it CoRT (Chain of Recursive Thoughts). The idea is simple: make the model generate multiple responses, evaluate them, and iteratively improve. Like giving it the ability to second-guess itself. With Mistral 24B, a Tic-tac-toe game went from a basic CLI (non-CoRT) to full OOP with an AI opponent (CoRT). What's interesting is that smaller models benefit even more from this approach. It's like giving them time to "think harder" actually works, but I also imagine it'd be possible with some prompt tweaking to get it to heavily improve big ones too. GitHub: [https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts](https://github.com/PhialsBasement/Chain-of-Recursive-Thoughts) Technical details: - Written in Python - Wayyyyy slower but way better output - Adjustable thinking rounds (1-5) + dynamic - Works with any OpenRouter-compatible model
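The core loop is compact enough to sketch. `generate` and `evaluate` below are toy stand-ins (the real project calls the model for both generation and self-evaluation):

```python
# Sketch of the CoRT loop: generate alternatives, score them, keep the
# winner, repeat. Toy functions replace the actual LLM calls.

def cort(generate, evaluate, prompt, rounds=3, alternatives=3):
    best = generate(prompt, seed=None)
    for _ in range(rounds):
        candidates = [best] + [generate(prompt, seed=best)
                               for _ in range(alternatives)]
        best = max(candidates, key=evaluate)  # the "second-guessing" step
    return best

# Toy demo: each regeneration improves on the previous best by 1.
toy_generate = lambda prompt, seed: (seed or 0) + 1
result = cort(toy_generate, evaluate=lambda c: c, prompt="tic-tac-toe")
```

The cost is evident from the loop: rounds × alternatives extra model calls per query, which is why the author notes it is "wayyyyy slower."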
2025-04-27T07:59:40
https://www.reddit.com/gallery/1k8yrem
HearMeOut-13
reddit.com
1970-01-01T00:00:00
0
{}
1k8yrem
false
null
t3_1k8yrem
/r/LocalLLaMA/comments/1k8yrem/made_mistral_24b_code_like_a_senior_dev_by_making/
false
false
https://b.thumbs.redditm…GjXs1EiERsCU.jpg
142
null
Questions regarding laptop purchase for local llms
0
I currently have a vivobook with a low-powered 13900h laptop with 16 GB of memory, a 1 TB SSD and a 2.8k OLED screen. Despite it being just 2 years old a lot of things about my laptop have started to give me trouble, like my Bluetooth, wifi card, and my battery life has dropped a lot, and my ram usage is almost always at 70% (thanks chrome). Lately I've been getting into machine learning and data science, and training even small models, or just running local transformers libraries or gguf files takes a lot of time, and almost always gets my ram up to 99%. I am a second year (finishing up) Computer science student. So should I consider buying a new laptop? In a situation like that I have 2 likely possibilities 1. get a laptop with 32 gigs of ram, likely a lenovo yoga 2. get a laptop with 16 gigs of ram and a 4060 (i.e 8 gb vram), i.e the HP omen transcend 14 please do help me out
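For the local-LLM angle, a rough memory estimate may help frame option 2's 8 GB of VRAM (the ~1.2 overhead factor for KV cache and buffers is an assumption, not a measured figure):

```python
# Back-of-envelope memory estimate for running a quantized model.
# The overhead factor (~1.2 for KV cache and buffers) is a rough assumption.

def model_mem_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """params_b: parameter count in billions; bits: quantization width."""
    return params_b * 1e9 * bits / 8 / 1e9 * overhead

# e.g. a 7B model at 4-bit comfortably fits in 8 GB of VRAM:
print(round(model_mem_gb(7, 4), 2))  # ~4.2 GB
```

By the same arithmetic, 32 GB of system RAM without a GPU fits larger models but runs them far slower than even a modest 8 GB GPU.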
2025-04-27T08:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1k8yt7x/questions_regarding_laptop_purchase_for_local_llms/
arnab_best
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8yt7x
false
null
t3_1k8yt7x
/r/LocalLLaMA/comments/1k8yt7x/questions_regarding_laptop_purchase_for_local_llms/
false
false
self
0
null
Hybrid LLM-SLM Agent Architecture for Domain-Specific Applications (side project)
1
# I believe Small Language Models (SLMs) will become increasingly capable and can be fine-tuned for niche/personal use cases, collaborating with agents and Large Language Models (LLMs) for better results. They might become a core part of Agent As A Service (AaaS) or Personalized Agent as a Service. I've been working on this side project for fun and learning. While not perfect, it's functional and appears to generate higher quality output than using a general-purpose chatbot directly. As a college student without a medical background, I can't fully evaluate the accuracy, but the architecture shows promise. **Current Stack:** * LLM: Llama 3.3 70B versatile (via Groq) * SLM: LightEternal-Llama3-Merge-Biomed-8B-GGUF(medical fine-tuned, via Ollama) The primary focus is the **system architecture**, designed for adaptability and efficiency: 1. **Orchestration Core:** An initial agent assesses query complexity. For complex queries, it dynamically selects **only the necessary downstream agents** and decomposes the main task into specific sub-tasks for each selected agent. This optimizes resource use. 2. **Modular Agent Design (LangGraph):** The current implementation includes agents for web search (Tavily), domain-specific knowledge (Medical SLM), compilation, and quality control/reflection (also SLM-driven). This graph structure allows straightforward addition or replacement of agents for different domains (e.g., finance, legal). Parallel execution is utilized where feasible. 3. **Specialized SLM Integration:** The system employs a fine-tuned medical SLM for high-fidelity domain tasks and quality assurance (reflection). 4. **Hypothesis on SLMs:** This project supports the view that specialized SLMs can function effectively as expert components – acting as filters, validators, or focused knowledge sources – within larger LLM-driven or agentic systems, particularly for niche applications. I'm using Ollama to run the fine-tuned model locally. 
(Note: LangChain doesn't support structured output for Ollama, so I had to implement it myself, which produces type errors in the logs, but everything works fine.) Repo link: [https://github.com/abhigyanpatwari/Medical-Research-Assistant](https://github.com/abhigyanpatwari/Medical-Research-Assistant)
2025-04-27T08:04:41
https://v.redd.it/wj1t713e4cxe1
DeathShot7777
/r/LocalLLaMA/comments/1k8yu7f/hybrid_llmslm_agent_architecture_for/
1970-01-01T00:00:00
0
{}
1k8yu7f
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wj1t713e4cxe1/DASHPlaylist.mpd?a=1748462688%2CY2UwZTk2NmFiYmJmZjFmMTVlMTNhNzdjOGExZjQyNzA4YzA0ZGFlM2ZlODQ5NjlkZGU5MGUxY2IwZjZmNTU1Mg%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/wj1t713e4cxe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wj1t713e4cxe1/HLSPlaylist.m3u8?a=1748462688%2CMDY3NGE2OTkyMTdkYTNkZmViYTZlOGQwOTJiMWJiMzFkNTE2YTM4MTQ3NDFkNGM0OGRhYTdmYTA5MjEwZTI1MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wj1t713e4cxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k8yu7f
/r/LocalLLaMA/comments/1k8yu7f/hybrid_llmslm_agent_architecture_for/
false
false
https://external-preview…ffbbc028bf74a9a4
1
{'enabled': False, 'images': [{'id': 'dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=108&crop=smart&format=pjpg&auto=webp&s=09000492e39b33fea5216a63e4b24c15b23c5b19', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=216&crop=smart&format=pjpg&auto=webp&s=d570a61e5d1e90c10bfcc4e972353b24ba0f7464', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=320&crop=smart&format=pjpg&auto=webp&s=5a012d3d17d340ec6f8f0b10052ee5e74c987124', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=640&crop=smart&format=pjpg&auto=webp&s=e9146898e48146a2ecfa7e0417b16997a117efbf', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=960&crop=smart&format=pjpg&auto=webp&s=169ce4e43ee82fde03575c595c3f672332d894f5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f9f52ee2948d09930f2fccdd0693c07c0293630a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dm93bGg1M2U0Y3hlMVdrgZcNc4v7Jy1sOiSamWiNk0l4YlfaSFyergHJfj_G.png?format=pjpg&auto=webp&s=2012ca5786424505cf90431403d3437ec429473c', 'width': 1920}, 'variants': {}}]}
Can we trust Huggingface CEO? He tweeted something that seemed to allude to Deepseek R2.
1
2025-04-27T08:09:54
https://x.com/ClementDelangue/status/1916345020791001181
luckbossx
x.com
1970-01-01T00:00:00
0
{}
1k8ywsa
false
null
t3_1k8ywsa
/r/LocalLLaMA/comments/1k8ywsa/can_we_trust_huggingface_ceo_he_tweeted_something/
false
false
https://b.thumbs.redditm…scS2mv4QDcWQ.jpg
1
{'enabled': False, 'images': [{'id': '2GDW2QRQpOvT52sSL7EEcJmd7fnSHMVwAFon2KcRzdw', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=108&crop=smart&auto=webp&s=7a926ada1c63d0bfb3b2108f858de87aa01a88c1', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=216&crop=smart&auto=webp&s=7e827c15bfb7c333a975c9051b1f55dcd9208d49', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=320&crop=smart&auto=webp&s=e18b7def3533d58f92d31414a44efe4ed04fd9ff', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=640&crop=smart&auto=webp&s=accfee83cde078da8a38e7da716059c568564a8d', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=960&crop=smart&auto=webp&s=296c116f083173a6911f21dc822f224c32bdae00', 'width': 960}, {'height': 616, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?width=1080&crop=smart&auto=webp&s=da235497de2cd448b3438df7b2fb8f110198a116', 'width': 1080}], 'source': {'height': 708, 'url': 'https://external-preview.redd.it/DfM2m5MYTFhP7ngAIOlVFrfGj0Admq-ROeR7cOyHaVM.jpg?auto=webp&s=70ffaa5a1e50a7ab05c651b30c8149065f9acc0d', 'width': 1240}, 'variants': {}}]}
Fine tune tiny llama for summarization
2
Hi I'm using tiny llama on Ollama locally on a very limited piece of hardware. I'm trying to summarize a structured meeting transcript but the results are inconsistent. Any tips on fine tuning this? Would few shot help? Should I train it separately first, if so any good tips on how to achieve this? Thanks
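Few-shot prompting is often worth trying before fine-tuning, since it needs no training at all. A sketch of a few-shot prompt builder for meeting summaries (the example transcript/summary pairs are placeholders you would replace with real ones):

```python
# Few-shot prompt builder for transcript summarization. The examples are
# placeholders; real ones should match your meeting-transcript structure.

def build_fewshot_prompt(examples, transcript):
    shots = "\n\n".join(
        f"Transcript:\n{t}\nSummary:\n{s}" for t, s in examples
    )
    return f"{shots}\n\nTranscript:\n{transcript}\nSummary:\n"

prompt = build_fewshot_prompt(
    [("A: status update. B: blocked on review.", "B is blocked on review.")],
    "A: ship Friday. B: agreed.",
)
```

Consistent example formatting matters more than example count for small models; two or three well-matched shots often beat many mismatched ones.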
2025-04-27T08:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1k8z1s5/fine_tune_tiny_llama_for_summarization/
andrethedev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8z1s5
false
null
t3_1k8z1s5
/r/LocalLLaMA/comments/1k8z1s5/fine_tune_tiny_llama_for_summarization/
false
false
self
2
null
RAG for LLM - Datavizion
1
[removed]
2025-04-27T09:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1k8zsnd/rag_for_llm_datavizion/
zoner01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k8zsnd
false
null
t3_1k8zsnd
/r/LocalLLaMA/comments/1k8zsnd/rag_for_llm_datavizion/
false
false
self
1
{'enabled': False, 'images': [{'id': 'K4Qabu7RDtrcPucOAdcPVXOh48W5J6ZR3kuMGEkRvxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=108&crop=smart&auto=webp&s=23a4217fa408712b6c7dded91f7da2892ec704a9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=216&crop=smart&auto=webp&s=2402887c35dd825a25adb71cab50756a65bde281', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=320&crop=smart&auto=webp&s=f9f0f92547215e4220154a1f3c314bdd0487a240', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=640&crop=smart&auto=webp&s=8925d3a0110b83a13b2a0d54da05af69842efb71', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=960&crop=smart&auto=webp&s=483774be2e6151d78c5f2b5ef942485c50d86596', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?width=1080&crop=smart&auto=webp&s=bb5ec771bb5e936e3eb814aa120788cff08fc567', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PmWAS1mTUjYELqezsB070LwKFHQ7Wdp-f0LJStdcf0Y.jpg?auto=webp&s=7116332e9524b4c0b8612db3d863354ac90d69d1', 'width': 1200}, 'variants': {}}]}
Llama.cpp CUDA Setup - Running into Issues - Is it Worth the Effort?
11
Hi everyone, I'm exploring alternatives to Ollama and have been reading good things about Llama.cpp. I'm trying to get it set up on Ubuntu 22.04 with driver version 550.120 and CUDA 12.4 installed. I've cloned the repo and tried running: `cmake -B build -DGGML_CUDA=ON` However, CMake is unable to find the CUDA toolkit, even though it's installed and `nvcc` and `nvidia-smi` are working correctly. I've found a lot of potential solutions online, but the complexity seems high. For those who have successfully set up Llama.cpp with CUDA, is it *significantly* better than alternatives like Ollama to justify the setup hassle? Is the performance gain substantial? Any straightforward advice or pointers would be greatly appreciated!
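One common cause is CMake not locating the toolkit even when `nvcc` works from the shell; pointing CMake at it explicitly often resolves this. A build-configuration sketch, assuming a default `/usr/local/cuda` install (adjust the path to wherever your CUDA 12.4 lives):

```shell
# Point CMake at the CUDA toolkit explicitly (default install path assumed).
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"

cmake -B build -DGGML_CUDA=ON \
      -DCMAKE_CUDA_COMPILER="$CUDA_HOME/bin/nvcc"
cmake --build build --config Release -j"$(nproc)"
```

If configuration still fails, the CMake error log usually names the exact variable it could not resolve, which narrows the fix considerably.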
2025-04-27T09:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1k903ov/llamacpp_cuda_setup_running_into_issues_is_it/
Brandu33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k903ov
false
null
t3_1k903ov
/r/LocalLLaMA/comments/1k903ov/llamacpp_cuda_setup_running_into_issues_is_it/
false
false
self
11
null
New Audio Model - Kimi-Audio-7B
1
Not associated with developer, just posting as hadn't seen before. Haven't seen this mentioned yet, but please correct me if mistaken and this is a repost. New Audio Model just popped up on Huggingface. Appears to be an Audio Model. Downloading to test now. https://huggingface.co/collections/moonshotai/kimi-audio-7b-680defb21c47be2edf281bd0 "We present Kimi-Audio, an open-source audio foundation model excelling in audio understanding, generation, and conversation. This repository hosts the model checkpoints for Kimi-Audio-7B. Kimi-Audio is designed as a universal audio foundation model capable of handling a wide variety of audio processing tasks within a single unified framework. Key features include: Universal Capabilities: Handles diverse tasks like speech recognition (ASR), audio question answering (AQA), audio captioning (AAC), speech emotion recognition (SER), sound event/scene classification (SEC/ASC) and end-to-end speech conversation. State-of-the-Art Performance: Achieves SOTA results on numerous audio benchmarks (see our Technical Report). Large-Scale Pre-training: Pre-trained on over 13 million hours of diverse audio data (speech, music, sounds) and text data. Novel Architecture: Employs a hybrid audio input (continuous acoustic + discrete semantic tokens) and an LLM core with parallel heads for text and audio token generation. Efficient Inference: Features a chunk-wise streaming detokenizer based on flow matching for low-latency audio generation. For more details, please refer to our GitHub Repository and Technical Report."
2025-04-27T10:25:26
https://www.reddit.com/r/LocalLLaMA/comments/1k90tkr/new_audio_model_kimiaudio7b/
BreakIt-Boris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k90tkr
false
null
t3_1k90tkr
/r/LocalLLaMA/comments/1k90tkr/new_audio_model_kimiaudio7b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Y4wt20051DOtIgO5YDBhv5KTerccg-aHARrSx4x7FCw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=108&crop=smart&auto=webp&s=5786b5c6be58b903215a786f2405536f4f2c2e1e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=216&crop=smart&auto=webp&s=e04c6b1a656c81fdc299dec415af0f887c883e8f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=320&crop=smart&auto=webp&s=08c9cd57acd99c278fb18db9246f5381f2b420c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=640&crop=smart&auto=webp&s=3d4ef37508722dab0b989ad78cba7f953532b226', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=960&crop=smart&auto=webp&s=be302bda691d3dab416ada525edbd0392f74168b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?width=1080&crop=smart&auto=webp&s=720d19cafd49a975eaecb1263d27c0b575e88aca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/f9NqfA2e386icrFr6Ix2X0vTIKryz4QIKLicvrJG_4U.jpg?auto=webp&s=356bece3a949c7609105228189d741cbdf8a8c2e', 'width': 1200}, 'variants': {}}]}
I'm building "Gemini Coder" enabling free AI coding using web chats like AI Studio, DeepSeek or Open WebUI
184
Some web chats come with extended support with automatically set model, system instructions and temperature (AI Studio, OpenRouter Chat, Open WebUI) while integration with others (ChatGPT, Claude, Gemini, Mistral, etc.) is limited to just initializations. [https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder](https://marketplace.visualstudio.com/items?itemName=robertpiosik.gemini-coder) The tool is 100% free and open source (MIT licensed). I hope it will be received by the community as a helpful resource supporting everyday coding.
2025-04-27T10:26:37
https://v.redd.it/n2iwzxx1scxe1
robertpiosik
v.redd.it
1970-01-01T00:00:00
0
{}
1k90u7f
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/n2iwzxx1scxe1/DASHPlaylist.mpd?a=1748341613%2CZThmM2MyM2U0ZDYxYWZkZTZiYjI1OTEwZjJiZGQ4MmU1NDliZjIxNmI0Mzg0ZTU4NmE3MGMxNmUwMTQ0NTEyZQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/n2iwzxx1scxe1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/n2iwzxx1scxe1/HLSPlaylist.m3u8?a=1748341613%2COWVlMWI0MDM1YWJhZWUyOTMyMDA4MTE0NWQ2NmYxOTY2NWUzMmZlY2YzOWI2MzlmZjk5NmRhNDFkNWMyMmFkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n2iwzxx1scxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 840}}
t3_1k90u7f
/r/LocalLLaMA/comments/1k90u7f/im_building_gemini_coder_enabling_free_ai_coding/
false
false
https://external-preview…22c9b0cd196e2648
184
{'enabled': False, 'images': [{'id': 'M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=108&crop=smart&format=pjpg&auto=webp&s=4513b0706667208a83a37e81dc00c3d427095b5b', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=216&crop=smart&format=pjpg&auto=webp&s=c73715b13f963bf4052a3dcb9f633408dbdd720a', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=320&crop=smart&format=pjpg&auto=webp&s=d2baa796a6ab1334a701e5f71dff208bed345296', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?width=640&crop=smart&format=pjpg&auto=webp&s=194b68f97374e3e4cb0db2902625aa33d0fa11c3', 'width': 640}], 'source': {'height': 538, 'url': 'https://external-preview.redd.it/M29waGh5eDFzY3hlMQztlUbXx4HmVNCbcZJT8RQ5gRkadeJUcw9XZgxVzRZr.png?format=pjpg&auto=webp&s=c7932821346b467a64830b27e74737d807ff10fa', 'width': 940}, 'variants': {}}]}
What UI is he using? Looks like ComfyUI but for text?
7
I'm not sure whether it's just a mockup workflow. Found it on someone's page where he offers LLM services such as building AI agents. https://preview.redd.it/m9k29nnpxcxe1.jpg?width=1600&format=pjpg&auto=webp&s=2fa0c0ea31d622e87a29141d91b6b51f03593c4c And if it doesn't exist as a UI, it should.
2025-04-27T10:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1k916b3/what_ui_is_he_using_looks_like_comfyui_but_for/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k916b3
false
null
t3_1k916b3
/r/LocalLLaMA/comments/1k916b3/what_ui_is_he_using_looks_like_comfyui_but_for/
false
false
https://b.thumbs.redditm…60je3K7j8ovo.jpg
7
null
Deep research on local documents
2
Do you have suggestions for a self-hosted solution that can run deep-research on a couple thousand local text files and create a report from its findings?
2025-04-27T11:20:59
https://www.reddit.com/r/LocalLLaMA/comments/1k91ofu/deep_research_on_local_documents/
bolhaskutya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k91ofu
false
null
t3_1k91ofu
/r/LocalLLaMA/comments/1k91ofu/deep_research_on_local_documents/
false
false
self
2
null
[Tool] GPU Price Tracker
41
Hi everyone! I wanted to share a tool I've developed that might help many of you with hardware purchasing decisions for running local LLMs. # GPU Price Tracker Overview I built a comprehensive GPU Price Tracker that monitors current prices, specifications, and historical price trends for GPUs. This tool is specifically designed to help make informed decisions when selecting hardware for AI workloads, including running LocalLLaMA models. **Tool URL:** [https://www.unitedcompute.ai/gpu-price-tracker](https://www.unitedcompute.ai/gpu-price-tracker) # Key Features: * **Daily Market Prices** \- Daily updated pricing data * **Complete Price History** \- Track price fluctuations since release date * **Performance Metrics** \- FP16 TFLOPS performance data * **Efficiency Metrics**: * **FL/$** \- FLOPS per dollar (value metric) * **FL/Watt** \- FLOPS per watt (efficiency metric) * **Hardware Specifications**: * VRAM capacity and bus width * Power consumption (Watts) * Memory bandwidth * Release date # Example Insights The data reveals some interesting trends: * The NVIDIA A100 40GB PCIe remains at a premium price point ($7,999.99) but offers 77.97 TFLOPS with 0.010 TFLOPS/$ * The RTX 3090 provides better value at $1,679.99 with 35.58 TFLOPS and 0.021 TFLOPS/$ * Price fluctuations can be significant - as shown in the historical view below, some GPUs have varied by over $2,000 in a single year # How This Helps LocalLLaMA Users When selecting hardware for running local LLMs, there are multiple considerations: 1. **Raw Performance** \- FP16 TFLOPS for inference speed 2. **VRAM Requirements** \- For model size limitations 3. **Value** \- FL/$ for budget-conscious decisions 4. **Power Efficiency** \- FL/Watt [GPU Price Tracker Main View \(example for 3090\)](https://preview.redd.it/ymez54ch5dxe1.png?width=1418&format=png&auto=webp&s=f481589051d4240120e5f378ac1287aa95b3638d)
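The value metric quoted above is straightforward to reproduce from the listed specs:

```python
# Reproducing the TFLOPS-per-dollar value metric from the example
# figures quoted in the post.

gpus = {
    "A100 40GB PCIe": {"tflops_fp16": 77.97, "price_usd": 7999.99},
    "RTX 3090":       {"tflops_fp16": 35.58, "price_usd": 1679.99},
}

def tflops_per_dollar(spec):
    return spec["tflops_fp16"] / spec["price_usd"]

for name, spec in gpus.items():
    print(f"{name}: {tflops_per_dollar(spec):.3f} TFLOPS/$")
```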
2025-04-27T11:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1k91xpn/tool_gpu_price_tracker/
yachty66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k91xpn
false
null
t3_1k91xpn
/r/LocalLLaMA/comments/1k91xpn/tool_gpu_price_tracker/
false
false
https://b.thumbs.redditm…dgJ8EHM3II9k.jpg
41
null
Satirical prompt battle between OpenAI and xAI, tech infused: ARC AGI, symbolic AI, MMMLU, and more
1
2025-04-27T11:49:16
https://www.youtube.com/watch?v=M_VAQGSDWu4
deepartist42
youtube.com
1970-01-01T00:00:00
0
{}
1k9253c
false
{'oembed': {'author_name': 'FamilyVerse', 'author_url': 'https://www.youtube.com/@familyverseshow', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/M_VAQGSDWu4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The backstory between Sam Altman vs Elon Musk and how they settled the feud | FamilyVerse"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/M_VAQGSDWu4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The backstory between Sam Altman vs Elon Musk and how they settled the feud | FamilyVerse', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1k9253c
/r/LocalLLaMA/comments/1k9253c/satirical_prompt_battle_between_openai_and_xai/
false
false
https://a.thumbs.redditm…5Z89C0VzdIL8.jpg
1
{'enabled': False, 'images': [{'id': 'c_GkXY7GxcwM7MBv6BVusYanqxd8TaLnMMS9SwTuE-E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/duGjtWncjlhQGNrhPJrpqUhySq2MgfLGplSL2-yc12g.jpg?width=108&crop=smart&auto=webp&s=30cf0554c15e4ac1a05d90c6c58cdeb21f2bf7c1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/duGjtWncjlhQGNrhPJrpqUhySq2MgfLGplSL2-yc12g.jpg?width=216&crop=smart&auto=webp&s=7d8a548dc1166824b7429d08b3eaa2f2875c3cac', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/duGjtWncjlhQGNrhPJrpqUhySq2MgfLGplSL2-yc12g.jpg?width=320&crop=smart&auto=webp&s=c4b814a6ee46b0236fa85a432f0a2bf2a0825a98', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/duGjtWncjlhQGNrhPJrpqUhySq2MgfLGplSL2-yc12g.jpg?auto=webp&s=09fe5dcb6a5bd01224debda450ad2b75202aca6d', 'width': 480}, 'variants': {}}]}
You are not alone
5
2025-04-27T12:04:00
https://i.redd.it/dc4tibz7bdxe1.jpeg
Temporary-Size7310
i.redd.it
1970-01-01T00:00:00
0
{}
1k92e76
false
null
t3_1k92e76
/r/LocalLLaMA/comments/1k92e76/you_are_not_alone/
false
false
https://b.thumbs.redditm…3oMM57NYvDro.jpg
5
{'enabled': True, 'images': [{'id': 'pZugnbCdCUB99MFjKJlQYEi75k8Qyosq96NxgKkiVPw', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/dc4tibz7bdxe1.jpeg?width=108&crop=smart&auto=webp&s=ab2edcbe82f71ee1bed64d228d01777aae09cf29', 'width': 108}, {'height': 304, 'url': 'https://preview.redd.it/dc4tibz7bdxe1.jpeg?width=216&crop=smart&auto=webp&s=be8c578dad2bebe752f690dd1e1ab296a9cf0b69', 'width': 216}, {'height': 450, 'url': 'https://preview.redd.it/dc4tibz7bdxe1.jpeg?width=320&crop=smart&auto=webp&s=dbab216aea282a65473e1c7271738a235504382a', 'width': 320}], 'source': {'height': 704, 'url': 'https://preview.redd.it/dc4tibz7bdxe1.jpeg?auto=webp&s=bbe6b2875a5f1ec11771e787717fa935c4969dfc', 'width': 500}, 'variants': {}}]}
How know fine-tune local llm well working
1
[removed]
2025-04-27T12:07:23
https://www.reddit.com/r/LocalLLaMA/comments/1k92g8e/how_know_finetune_local_llm_well_working/
Key-Painting2862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k92g8e
false
null
t3_1k92g8e
/r/LocalLLaMA/comments/1k92g8e/how_know_finetune_local_llm_well_working/
false
false
self
1
null
Seeking Help on Training Local LLM with New Data (PDF, DOC, TXT Files)
1
[removed]
2025-04-27T12:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1k92knt/seeking_help_on_training_local_llm_with_new_data/
Delicious-Affect-772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k92knt
false
null
t3_1k92knt
/r/LocalLLaMA/comments/1k92knt/seeking_help_on_training_local_llm_with_new_data/
false
false
self
1
null
FULL LEAKED v0 System Prompts and Tools [UPDATED]
0
(Latest system prompt: 27/04/2025) I managed to get the FULL updated v0 system prompt and internal tools info. Over 500 lines. You can check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
2025-04-27T12:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1k92p1m/full_leaked_v0_system_prompts_and_tools_updated/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k92p1m
false
null
t3_1k92p1m
/r/LocalLLaMA/comments/1k92p1m/full_leaked_v0_system_prompts_and_tools_updated/
false
false
self
0
{'enabled': False, 'images': [{'id': 'DQg2Sr3B0xsibn9-9Su1mkbqA6z81hncHhRZADshzaI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=108&crop=smart&auto=webp&s=1b199399acdd967050df3fe35efd17d7a6924b41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=216&crop=smart&auto=webp&s=5252434e9ab05bfd6d3944485f73d436797ba382', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=320&crop=smart&auto=webp&s=ff497aede5ebfe19627ce43ac89b176f531fa6f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=640&crop=smart&auto=webp&s=392fa9799348b84af4b3144bb039c8f14285a43c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=960&crop=smart&auto=webp&s=0562a4f76899f6b6916c447a2c733ce0ef14b2f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?width=1080&crop=smart&auto=webp&s=0299322e6cb8b554924134787e98a0918d4da16e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/O4CNYRlMRkyu90dyTb9eLlrdjcs_mGIjgAcmyBs51jk.jpg?auto=webp&s=07be4cac012c9340bc2ce26e6c1e90b69812a4ca', 'width': 1200}, 'variants': {}}]}
Seeking Help on Training Local LLM with New Data (PDF, DOC, TXT Files)
1
[removed]
2025-04-27T12:21:59
https://www.reddit.com/r/LocalLLaMA/comments/1k92pbj/seeking_help_on_training_local_llm_with_new_data/
Just_Repeat6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k92pbj
false
null
t3_1k92pbj
/r/LocalLLaMA/comments/1k92pbj/seeking_help_on_training_local_llm_with_new_data/
false
false
self
1
null
Building a chatbot for climate change, groq vs google cloud?
0
hi everyone! I'm building a chatbot which would require a RAG pipeline over external data and will also fetch data from Google Earth Engine etc., giving detailed insight about climate change. In such a case, assuming we have around 100 queries/day, what would be better: using a DeepSeek/Llama API from Groq with RAG, or fine-tuning the model on climate data with RAG and deploying it on Google Cloud? What would be less costly and more sustainable for the future?
2025-04-27T12:34:04
https://www.reddit.com/r/LocalLLaMA/comments/1k92x7v/building_a_chatbot_for_climate_change_groq_vs/
androme-da
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k92x7v
false
null
t3_1k92x7v
/r/LocalLLaMA/comments/1k92x7v/building_a_chatbot_for_climate_change_groq_vs/
false
false
self
0
null
Has anyone successfully used local models with n8n, Ollama and MCP tools/servers?
11
I'm trying to set up an n8n workflow with Ollama and MCP servers (specifically Google Tasks and Calendar), but I'm running into issues with JSON parsing from the tool responses. My AI Agent node keeps returning the error "Non string tool message content is not supported" when using local models. From what I've gathered, this seems to be a common issue with Ollama and local models when handling MCP tool responses. I've tried several approaches but haven't found a solution that works. Has anyone successfully: - Used a local model through Ollama with n8n's AI Agent node - Connected it to MCP servers/tools - Gotten it to properly parse JSON responses If so: 1. Which specific model worked for you? 2. Did you need any special configuration or workarounds? 3. Any tips for handling the JSON responses from MCP tools? I've seen that OpenAI models work fine with this setup, but I'm specifically looking to keep everything local. According to some posts I've found, there might be certain models that handle tool calling better than others, but I haven't found specific recommendations. Any guidance would be greatly appreciated!
2025-04-27T13:06:05
https://www.reddit.com/r/LocalLLaMA/comments/1k93ipk/has_anyone_successfully_used_local_models_with/
onicarps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k93ipk
false
null
t3_1k93ipk
/r/LocalLLaMA/comments/1k93ipk/has_anyone_successfully_used_local_models_with/
false
false
self
11
null
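A workaround commonly suggested for the "Non string tool message content is not supported" error is to serialize structured tool output into a plain string before it reaches the agent node. A minimal, framework-agnostic sketch of that coercion (the function name and the assumed MCP content shapes are illustrative; this is not n8n's or Ollama's actual API):

```python
import json

def coerce_tool_content(content):
    """Coerce an MCP tool response into the plain string that many
    local-model agent integrations expect, serializing structured
    payloads as JSON text instead of passing raw objects through."""
    if isinstance(content, str):
        return content
    # Lists of content parts (a common MCP response shape):
    # join their text fields, serializing anything non-textual.
    if isinstance(content, list):
        parts = []
        for part in content:
            if isinstance(part, dict) and "text" in part:
                parts.append(str(part["text"]))
            else:
                parts.append(json.dumps(part, ensure_ascii=False))
        return "\n".join(parts)
    # Dicts and other structured values: serialize to JSON text.
    return json.dumps(content, ensure_ascii=False)
```

In an n8n setup this logic would live in a Code node between the MCP tool and the agent, so the agent only ever sees string content.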
Got Sesame CSM working with a real time factor of .6x with a 4070Ti Super!
34
https://github.com/ReisCook/VoiceAssistant Still have more work to do but it’s functional. Having an issue where the output gets cut off prematurely atm
2025-04-27T13:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1k93kfh/got_sesame_csm_working_with_a_real_time_factor_of/
DumaDuma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k93kfh
false
null
t3_1k93kfh
/r/LocalLLaMA/comments/1k93kfh/got_sesame_csm_working_with_a_real_time_factor_of/
false
false
self
34
{'enabled': False, 'images': [{'id': 'm4aE5aedl_HN6Uv6LFJvvz2ahfqS3UB-nxqGx6ssVi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=108&crop=smart&auto=webp&s=0b2a699c9f2066c91adc5cc980d5018bace1421c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=216&crop=smart&auto=webp&s=028175b74fee33e72673350b1e12ae87f6983b72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=320&crop=smart&auto=webp&s=e3b343089a2b33d3e2ebda9348e905612bf838f7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=640&crop=smart&auto=webp&s=f2dec6ec72c73e7a7b83e9261fedb3c30707cfe2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=960&crop=smart&auto=webp&s=4214d7ee74e531446727a5756710b9b765fb870b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?width=1080&crop=smart&auto=webp&s=a56e03a8bea1d79ddef698fe9daeaa256b38e4c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-BCcczGfuya07KeBIhnlfLswPGLxa8CdSSrebS12yVE.jpg?auto=webp&s=caea7b02a8b82012a0e8cffa060e72ecf8e8629b', 'width': 1200}, 'variants': {}}]}
Evaluating browser-use to build workflows for QA-automation for myself
5
I keep attempting large refactors in my codebase. Cannot bother the QA team for the same to test "everything" given the blast radius. In addition to unit tests, i'd like to perform e2e tests with a real browser, and its been taxing to do so much manual work. Is browser-use worth investing my workflows in? hows your experience been? any alternatives thats worth pouring a couple of weeks over?
2025-04-27T13:31:58
https://www.reddit.com/r/LocalLLaMA/comments/1k941b1/evaluating_browseruse_to_build_workflows_for/
SmoothCCriminal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k941b1
false
null
t3_1k941b1
/r/LocalLLaMA/comments/1k941b1/evaluating_browseruse_to_build_workflows_for/
false
false
self
5
null
Gemini 2.5-Pro's biggest strength isn't raw coding skill - it's that it doesn't degrade anywhere near as much over long context
410
TL;DR: It's such a crazy unlock being able to just keep on iterating and trying new things without having to reset the chat window every 15 minutes. Just wish they'd pass whatever arcane magic they used down to the Gemma models! -- So I've been using Cursor pretty religiously ever since Sonnet 3.5 dropped. I don't think that Gemini 2.5 is necessarily *better* than Sonnet 3.5 though, at least not over a single-shot prompt. I think its biggest strength is that even once my context window has been going on forever, it's still consistently smart. Honestly I'd take a dumber version of Sonnet 3.7 if it meant that it was that same level of dumbness over the whole context window. Same even goes for local LLMs. If I had a version of Qwen, even just a 7B, that didn't slowly get less capable with a longer context window, I'd honestly use it so much more. So much of the time I've just got into a flow with a model, just fed it enough context that it manages to actually do what I want it to, and then 2 or 3 turns later it's suddenly lost that spark. Gemini 2.5 is the only model I've used so far that doesn't do that, even amongst all of Google's other offerings. Is there some specific part of the attention / arch for Gemini that has enabled this, do we reckon? Or did they just use all those TPUs to do a *really* high number of turns for multi-turn RL? My gut says probably the latter lol
2025-04-27T13:41:32
https://www.reddit.com/r/LocalLLaMA/comments/1k9488r/gemini_25pros_biggest_strength_isnt_raw_coding/
mark-lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9488r
false
null
t3_1k9488r
/r/LocalLLaMA/comments/1k9488r/gemini_25pros_biggest_strength_isnt_raw_coding/
false
false
self
410
null
Server approved! 4xH100 (320gb vram). Looking for advice
42
My company is wanting to run on-premise AI for various reasons. We have an HPC cluster built using Slurm, and it works well, but the time-based batch jobs are not ideal for always-available resources. I have a good bit of experience running vllm, llamacpp, and kobold in containers with GPU-enabled resources, and I am decently proficient with Kubernetes. (Assuming this all works, I will be asking for another one of these servers for HA workloads.) My current idea is a k8s-based deployment (using RKE2), with the NVIDIA GPU Operator installed for the single worker node. I will then use GitLab + Fleet to handle deployments and track configuration changes. I also want to use quantized models, probably Q6-Q8 imatrix models when possible with llamacpp, or awq/bnb models with vllm if they are supported. I will also use a litellm deployment on a different k8s cluster to connect the OpenAI-compatible endpoints. (I want this on a separate cluster, as I can then use the Slurm-based HPC as a backup in case the node goes down for now, and allow requests to keep flowing.) I think I've got the basics of how this will work, but I have never deployed an H100-based server, and I was curious if there were any gotchas I might be missing... Another alternative I was thinking about was adding the H100 server as a hypervisor node, and then using GPU pass-through to a guest. This would allow some modularity to the possible deployments, but would add some complexity... Thank you for reading! Hopefully this all made sense, and I am curious if there are some gotchas or some things I could learn from others before deploying or planning out the infrastructure.
2025-04-27T15:14:44
https://www.reddit.com/r/LocalLLaMA/comments/1k969p3/server_approved_4xh100_320gb_vram_looking_for/
ICanSeeYou7867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k969p3
false
null
t3_1k969p3
/r/LocalLLaMA/comments/1k969p3/server_approved_4xh100_320gb_vram_looking_for/
false
false
self
42
null
Best method of quantizing Gemma 3 for use with vLLM?
10
I've sort of been tearing out my hair trying to figure this out. I want to use the new Gemma 3 27B models with vLLM, specifically the QAT models, but the two easiest ways to quantize something (GGUF, BnB) are not optimized in vLLM and the performance degradation is pretty drastic. vLLM seems to be optimized for GPTQModel and AWQ, but neither seem to have strong Gemma 3 support right now. Notably, GPTQModel doesn't work with multimodal Gemma 3, and the process of making the 27b model text-only and then quantizing it has proven tricky for various reasons. GPTQ compression seems possible given this model: https://huggingface.co/ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g but they did that on the original 27B, not the unquantized QAT model. For the life of me I haven't been able to make this work, and it's driving me nuts. Any advice from more experienced users?
2025-04-27T15:40:02
https://www.reddit.com/r/LocalLLaMA/comments/1k96ur9/best_method_of_quantizing_gemma_3_for_use_with/
Saguna_Brahman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k96ur9
false
null
t3_1k96ur9
/r/LocalLLaMA/comments/1k96ur9/best_method_of_quantizing_gemma_3_for_use_with/
false
false
self
10
{'enabled': False, 'images': [{'id': '_R4EN0yNgWWZuwtwft8PIgGZVNvSybay3wSjmiD_mgU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=108&crop=smart&auto=webp&s=f4fc8c1748ec68b0a630a50ce95d48fbe5b5c86c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=216&crop=smart&auto=webp&s=534ab89f64e52020f99f2e8d40517acaceac92a8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=320&crop=smart&auto=webp&s=f7814882d1f7bd71119b28490794bd5cb3710d02', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=640&crop=smart&auto=webp&s=c3112c81a291be0f5d2b62a0fc53acbfa41c82a3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=960&crop=smart&auto=webp&s=2bf67c8c32d5857a6ea696974272a661c9fa43ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?width=1080&crop=smart&auto=webp&s=1dd32e400a5548e7b166b7dc67791df40f17d6b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/olZNG_bH_L9FqrlBBSvyi2x6eQo-upKCP6fd6HaObBs.jpg?auto=webp&s=e997c104288945168aeb717a8104536fa2ce6611', 'width': 1200}, 'variants': {}}]}
Idea: AI which uses low-res video of a person to create authentic 4K portrait
0
I think current image upscalers “dream up” pixels to make things HD. So they add detail that never actually existed. If we want an HD portrait of a person that is completely authentic, maybe AI can sample many frames of a low-res video to generate a completely authentic portrait? Each frame of a video can reveal small details of the face that didn’t exist in the previous frames. I feel like that’s how my brain naturally works when I watch a low-res video of a person. My brain builds a clearer image of that person’s face as the video progresses. This could be very useful to make things like “wanted posters” of a suspect from grainy surveillance videos. We probably shouldn’t use existing upscaling tools for this because they add detail that may not actually be there. I’m sure there are many other cool potential use cases.
2025-04-27T15:51:29
https://www.reddit.com/r/LocalLLaMA/comments/1k974bl/idea_al_which_uses_lowres_video_of_a_person_to/
GravyPoo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k974bl
false
null
t3_1k974bl
/r/LocalLLaMA/comments/1k974bl/idea_al_which_uses_lowres_video_of_a_person_to/
false
false
self
0
null
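The multi-frame idea in the post has a simple core: if the frames are aligned, pixel-wise averaging of N noisy frames attenuates independent noise by roughly sqrt(N) without inventing detail, which is exactly the "authentic" property the post wants (real pipelines also need sub-pixel registration first). A minimal sketch, with `average_frames` as a hypothetical helper and frames assumed to be equal-sized H×W grids of floats:

```python
def average_frames(frames):
    """Naive multi-frame fusion: average already-aligned low-res frames
    pixel-wise. Each frame is an HxW nested list of float intensities;
    averaging reduces independent per-frame noise without hallucinating
    detail the way learned single-image upscalers can."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [
        [sum(f[y][x] for f in frames) / n for x in range(w)]
        for y in range(h)
    ]
```

Registering each frame onto a finer grid before averaging is what turns this from denoising into genuine multi-frame super-resolution.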
Finetuned Llama 3.1 8B Instruct but don't know how to control the output
1
[removed]
2025-04-27T16:54:25
https://www.reddit.com/r/LocalLLaMA/comments/1k98l9u/finetuned_llama_31_8b_instruct_but_dont_know_how/
Banda_e_Hur
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k98l9u
false
null
t3_1k98l9u
/r/LocalLLaMA/comments/1k98l9u/finetuned_llama_31_8b_instruct_but_dont_know_how/
false
false
self
1
null
AMD thinking of cancelling 9060XT and focusing on a 16gb vram card
30
As an AMD fanboy (I know, wrong hobby for me), I'm interested to see where this goes, and how much it will cost.
2025-04-27T16:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1k98ooy/amd_thinking_of_cancelling_9060xt_and_focusing_on/
thebadslime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k98ooy
false
null
t3_1k98ooy
/r/LocalLLaMA/comments/1k98ooy/amd_thinking_of_cancelling_9060xt_and_focusing_on/
false
false
self
30
null
Are there any reasoning storytelling/roleplay models that use deepseek level reasoning to avoid plot holes and keep it realistic?
6
I tried deepseek when it first came out but it was awful at it.
2025-04-27T17:25:34
https://www.reddit.com/r/LocalLLaMA/comments/1k99c5p/are_there_any_reasoning_storytellingroleplay/
No-Issue-9136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k99c5p
false
null
t3_1k99c5p
/r/LocalLLaMA/comments/1k99c5p/are_there_any_reasoning_storytellingroleplay/
false
false
self
6
null
GPT-4o’s update is absurdly dangerous to release to a billion active users; someone is going to end up dead.
0
2025-04-27T17:35:11
https://i.redd.it/epkemh6byexe1.jpeg
Trevor050
i.redd.it
1970-01-01T00:00:00
0
{}
1k99khn
false
null
t3_1k99khn
/r/LocalLLaMA/comments/1k99khn/gpt4os_update_is_absurdly_dangerous_to_release_to/
false
false
https://a.thumbs.redditm…LxYrKTETAj18.jpg
0
{'enabled': True, 'images': [{'id': 'ZWQiwaCEbJX61myfAC33hUx3GF0Veoq1TShJZaAAYls', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/epkemh6byexe1.jpeg?width=108&crop=smart&auto=webp&s=4f80842d9e6443be8df681d319f2b01e38a2541b', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/epkemh6byexe1.jpeg?width=216&crop=smart&auto=webp&s=99f9256595e75b6039282ba1409b2c4840013f35', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/epkemh6byexe1.jpeg?width=320&crop=smart&auto=webp&s=435382d9d3d2b80e5a003bee0818adabf0fb2251', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/epkemh6byexe1.jpeg?width=640&crop=smart&auto=webp&s=8e22d91df057fc4af6cf0c2a8c90bc8610afe051', 'width': 640}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/epkemh6byexe1.jpeg?auto=webp&s=455a9d05ac22510228177b5d3132416fd3b5c3f3', 'width': 945}, 'variants': {}}]}
Best Gemini 2.5 Pro open weight option for coding?
0
What's closest to Gemini 2.5 Pro open weight option today for coding?
2025-04-27T18:04:11
https://www.reddit.com/r/LocalLLaMA/comments/1k9a9t0/best_gemini_25_pro_open_weight_option_for_coding/
NaiRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9a9t0
false
null
t3_1k9a9t0
/r/LocalLLaMA/comments/1k9a9t0/best_gemini_25_pro_open_weight_option_for_coding/
false
false
self
0
null
highCompute.py: OpenAI-like medium and high LLM compute system at home. Only one python file!
1
A single Python file that connects via the OpenAI Chat Completions API, giving you something akin to OpenAI High Compute at home. **Any** models are compatible! Using dynamic programming methods, computational capacity is increased by tens or even hundreds of times for both reasoning and non-reasoning models, significantly improving answer quality and the ability to solve extremely complex tasks for LLMs. # 🌟 Key Features * **Local LLM Integration:** Works with your own LLM server (e.g., llama.cpp, Ollama, LM Studio, vLLM with an OpenAI-compatible endpoint). * **Compute Levels:** * **Low:** Direct query to the LLM for a quick response. This is a standard chat mode. Generates N tokens — for example, solving a task may only consume 700 tokens. * **Medium:** Single-level task decomposition into subtasks, solving them, and synthesizing the final answer. Suitable for moderately complex queries. The number of generated tokens is approximately 10-15x higher compared to Low Compute (average value, depends on the task): if solving a task in Low Compute took 700 tokens, Medium level would require around 7,000 tokens. * **High:** Two-level task decomposition (stages → steps), solving individual steps, synthesizing stage results, and generating the final answer. Designed for highly complex and multi-component tasks. The number of generated tokens is approximately 100-150x higher compared to Low Compute: if solving a task in Low Compute took 700 tokens, High level would require around 70,000 tokens. * **Flexible Compute Adjustment:** You can freely adjust the Compute Level for each query individually. For example, initiate the first query in High Compute, then switch to Low mode, and later use Medium Compute to solve a specific problem mid-chat. Link: [https://github.com/AlexBefest/highCompute.py](https://github.com/AlexBefest/highCompute.py) https://reddit.com/link/1k9b8qg/video/xvby7rskafxe1/player
2025-04-27T18:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1k9b8qg/highcomputepy_openailike_medium_and_high_llm/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9b8qg
false
null
t3_1k9b8qg
/r/LocalLLaMA/comments/1k9b8qg/highcomputepy_openailike_medium_and_high_llm/
false
false
https://b.thumbs.redditm…yNIjAnoVhf3g.jpg
1
{'enabled': False, 'images': [{'id': 'cOGcoH6WRl5CmO_wCZTfm5lMo8xmlt20N8LS6-QoCb0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=108&crop=smart&auto=webp&s=7ada718911b2d04f29daa6a0f8ca4d0c067cb35d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=216&crop=smart&auto=webp&s=bcf3337b5ff01fa15ebdd34a9c7f3ab68b8ba59a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=320&crop=smart&auto=webp&s=d45bac86dc0acc9c0dba07ae2400c7f7523267b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=640&crop=smart&auto=webp&s=b428406be91ee3cccacea6b9a935d4d938bf7220', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=960&crop=smart&auto=webp&s=7898bbcff21f635fbb60c2eadc6912be96750f19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=1080&crop=smart&auto=webp&s=6763f747e8b14deef788b405941af91bf7216fc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?auto=webp&s=c6464b6a62b162aef6ed7703350d45794a8aaa3a', 'width': 1200}, 'variants': {}}]}
highCompute.py: Compute levels on any model at home!
1
# 🌟 Key Features * **Local LLM Integration:** Works with your own LLM server (e.g., llama.cpp, Ollama, LM Studio, vLLM with an OpenAI-compatible endpoint). * **Compute Levels:** * **Low:** Direct query to the LLM for a quick response. This is a standard chat mode. Generates N tokens — for example, solving a task may only consume 700 tokens. * **Medium:** Single-level task decomposition into subtasks, solving them, and synthesizing the final answer. Suitable for moderately complex queries. The number of generated tokens is approximately 10-15x higher compared to Low Compute (average value, depends on the task): if solving a task in Low Compute took 700 tokens, Medium level would require around 7,000 tokens. * **High:** Two-level task decomposition (stages → steps), solving individual steps, synthesizing stage results, and generating the final answer. Designed for highly complex and multi-component tasks. The number of generated tokens is approximately 100-150x higher compared to Low Compute: if solving a task in Low Compute took 700 tokens, High level would require around 70,000 tokens. * **Flexible Compute Adjustment:** You can freely adjust the Compute Level for each query individually. For example, initiate the first query in High Compute, then switch to Low mode, and later use Medium Compute to solve a specific problem mid-chat. Link: [https://github.com/AlexBefest/highCompute.py](https://github.com/AlexBefest/highCompute.py) https://reddit.com/link/1k9bc82/video/di3lftsjbfxe1/player highCompute.py: Compute levels on any model at home!
2025-04-27T18:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1k9bc82/highcomputepy_compute_levels_on_any_model_at_home/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bc82
false
null
t3_1k9bc82
/r/LocalLLaMA/comments/1k9bc82/highcomputepy_compute_levels_on_any_model_at_home/
false
false
https://b.thumbs.redditm…yNIjAnoVhf3g.jpg
1
{'enabled': False, 'images': [{'id': 'cOGcoH6WRl5CmO_wCZTfm5lMo8xmlt20N8LS6-QoCb0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=108&crop=smart&auto=webp&s=7ada718911b2d04f29daa6a0f8ca4d0c067cb35d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=216&crop=smart&auto=webp&s=bcf3337b5ff01fa15ebdd34a9c7f3ab68b8ba59a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=320&crop=smart&auto=webp&s=d45bac86dc0acc9c0dba07ae2400c7f7523267b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=640&crop=smart&auto=webp&s=b428406be91ee3cccacea6b9a935d4d938bf7220', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=960&crop=smart&auto=webp&s=7898bbcff21f635fbb60c2eadc6912be96750f19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=1080&crop=smart&auto=webp&s=6763f747e8b14deef788b405941af91bf7216fc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?auto=webp&s=c6464b6a62b162aef6ed7703350d45794a8aaa3a', 'width': 1200}, 'variants': {}}]}
Lack of Model Compatibility Can Kill Promising Projects
119
I'm currently using the **GLM-4 32B 0414 MLX** on **LM Studio**, and I have to say, the experience has been excellent. When it comes to coding tasks, it feels clearly better than the **QWen-32B**. For general text and knowledge tasks, in my tests, I still prefer the **Mistral-Small 24B**. What I really want to highlight is this: just a few days ago, there were tons of requests for a good local LLM that could handle coding well — and, surprisingly, that breakthrough had already happened! However, the lack of compatibility with popular tools (like **llama.cpp** and others) slowed down adoption. With few people testing and little exposure, models that could have generated a lot of buzz, usage, and experiments end up quietly fading away. **The GLM-4 developers deserve huge praise for their amazing work** — the model itself is great. But it's truly a shame that the lack of integration with common tools hurt its launch so much. They deserve way more recognition. We saw something similar happen with **Llama 4**: now, some users are starting to say "it wasn’t actually that bad," but by then the bad reputation had already stuck, mostly because it launched quickly with a lot of integration bugs. I know it might sound a bit arrogant to say this to the teams who dedicate so much time to build these models — and offer them to us for free — but honestly: **paying attention to tool compatibility can be the difference between a massively successful project and one that gets forgotten**.
2025-04-27T18:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1k9bdlh/lack_of_model_compatibility_can_kill_promising/
hannibal27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bdlh
false
null
t3_1k9bdlh
/r/LocalLLaMA/comments/1k9bdlh/lack_of_model_compatibility_can_kill_promising/
false
false
self
119
null
Personalization in chatbots creating thought bubbles?
3
Starting to get a bit worried about AI companions getting too personalized, they end up just agreeing with whatever distorted views or fringe ideas someone has. With how good these models are at sounding convincing, I feel like it could seriously mess with society and push people even deeper into their own bubbles. Polarization’s just gonna get worse. Anyone else noticing how 4o feels way too agreeable now, no matter what the user believes? Do you find that worrisome?
2025-04-27T18:54:42
https://www.reddit.com/r/LocalLLaMA/comments/1k9bg2l/personalization_in_chatbots_creating_thought/
PizzaCatAm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bg2l
false
null
t3_1k9bg2l
/r/LocalLLaMA/comments/1k9bg2l/personalization_in_chatbots_creating_thought/
false
false
self
3
null
highCompute.py: Compute levels on any model at home!
1
[removed]
2025-04-27T18:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1k9bg4q/highcomputepy_compute_levels_on_any_model_at_home/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bg4q
false
null
t3_1k9bg4q
/r/LocalLLaMA/comments/1k9bg4q/highcomputepy_compute_levels_on_any_model_at_home/
false
false
https://b.thumbs.redditm…WzdNmi8Xb2vE.jpg
1
null
Compute levels on any model at home!
1
[removed]
2025-04-27T18:57:05
https://www.reddit.com/r/LocalLLaMA/comments/1k9bi26/compute_levels_on_any_model_at_home/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bi26
false
null
t3_1k9bi26
/r/LocalLLaMA/comments/1k9bi26/compute_levels_on_any_model_at_home/
false
false
https://b.thumbs.redditm…LgUxZgGE_l4A.jpg
1
null
Compute levels on any model at home!
1
[removed]
2025-04-27T19:02:24
https://www.reddit.com/r/LocalLLaMA/comments/1k9bmq8/compute_levels_on_any_model_at_home/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bmq8
false
null
t3_1k9bmq8
/r/LocalLLaMA/comments/1k9bmq8/compute_levels_on_any_model_at_home/
false
false
https://b.thumbs.redditm…yNIjAnoVhf3g.jpg
1
{'enabled': False, 'images': [{'id': 'cOGcoH6WRl5CmO_wCZTfm5lMo8xmlt20N8LS6-QoCb0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=108&crop=smart&auto=webp&s=7ada718911b2d04f29daa6a0f8ca4d0c067cb35d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=216&crop=smart&auto=webp&s=bcf3337b5ff01fa15ebdd34a9c7f3ab68b8ba59a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=320&crop=smart&auto=webp&s=d45bac86dc0acc9c0dba07ae2400c7f7523267b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=640&crop=smart&auto=webp&s=b428406be91ee3cccacea6b9a935d4d938bf7220', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=960&crop=smart&auto=webp&s=7898bbcff21f635fbb60c2eadc6912be96750f19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?width=1080&crop=smart&auto=webp&s=6763f747e8b14deef788b405941af91bf7216fc1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EegvbrSFWFTqDS51Vf466E6tuVzYVta36TStla3fhfc.jpg?auto=webp&s=c6464b6a62b162aef6ed7703350d45794a8aaa3a', 'width': 1200}, 'variants': {}}]}
Compute levels on any model at home!
1
[removed]
2025-04-27T19:05:09
https://www.reddit.com/r/LocalLLaMA/comments/1k9bp3s/compute_levels_on_any_model_at_home/
Vast_Ad9241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bp3s
false
null
t3_1k9bp3s
/r/LocalLLaMA/comments/1k9bp3s/compute_levels_on_any_model_at_home/
false
false
https://b.thumbs.redditm…YOaNfnbfPMsg.jpg
1
null
High-processing level for any model at home! Only one python file!
54
https://reddit.com/link/1k9bwbg/video/pw1tppcrefxe1/player Works with self-hosted LLM servers (e.g., *llama.cpp, Ollama, LM Studio, or vLLM* with an OpenAI-compatible API). # Processing Tiers 1. **Low** * Direct query to the LLM for instant responses. * Best for simple tasks with minimal processing. * Token usage: Low (~700 tokens for a typical request). 2. **Medium** * Breaks tasks into subtasks, solves them sequentially, and combines results. * Ideal for moderately complex queries. * Token usage: **10–15x Low mode** (~7,000 tokens for a task that would take 700 in Low). 3. **High** * Two-tier decomposition: **Task → Stages → Steps**, with iterative refinement and synthesis. * Designed for highly complex, multi-part problems. * Token usage: **100–150x Low mode** (~70,000 tokens for a task that would take 700 in Low). # Dynamic Adjustment Switch tiers **per query** — start in **High**, drop to **Low** for follow-ups, or use **Medium** mid-conversation. Full control over resource allocation.
2025-04-27T19:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1k9bwbg/highprocessing_level_for_any_model_at_home_only/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9bwbg
false
null
t3_1k9bwbg
/r/LocalLLaMA/comments/1k9bwbg/highprocessing_level_for_any_model_at_home_only/
false
false
self
54
null
Building a Simple Multi-LLM design to Catch Hallucinations and Improve Quality (Looking for Feedback)
31
I was reading that newer LLM models are hallucinating more, with weird tone shifts and broken logic chains that are getting harder to catch rather than easier (e.g., https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/).

I'm messing around with an idea (developed with ChatGPT) to build a "team" of various LLM models that watch and advise a primary LLM, validating responses and reducing hallucinations during a conversation. The team would be 3-5 LLM agents that monitor, audit, and improve output by reducing hallucinations, tone drift, logical inconsistencies, and quality degradation. One model would do the main task (generate text, answer questions, etc.), then 2 or 3 "oversight" LLM agents would check the output for issues. If things look sketchy, the team votes or escalates the item to the primary LLM agent for corrective action, advice, and/or guidance.

The goal is to build a relatively simple/inexpensive (~$200-300/month), mostly open-source solution using tools like ChatGPT Pro, Gemini Advanced, CrewAI, LangGraph, Zapier, etc., with other top-10 LLMs chosen by strength as needed. Once out of design and into testing, the plan is to run parallel tests with standard benchmarks like TruthfulQA and HaluEval to compare results and see if there are any significant improvements.

Questions (yes, this is a ChatGPT co-conceived solution):

1. Is this structure and concept realistic, theoretically possible to build, and likely to actually work? ChatGPT is infamous for creating stuff that's just not right sometimes, so it's good to catch that early.
2. Are there better ways to orchestrate multi-agent QA?
3. Is it reasonable to expect this to work at low infrastructure cost using existing tools like ChatGPT Pro, Gemini Advanced, CrewAI, LangGraph, etc.? I understand API text call/token costs will be relatively low (~$10.00/day) compared to the service I hope it provides, and the open-source libraries (CrewAI, LangGraph), Zapier, WordPress, Notion, and GPT Custom Instructions are accessible now.
4. Has anyone seen someone try something like this before (even partly)?
5. Any failure traps, risks, or oversights (e.g., the oversight agents hallucinating themselves)?
6. Any better ways to structure it? This will be in addition to all prompt guidance and best practices being followed.
7. Any extra oversight roles I should think about adding?

Basically I'm just trying to build a practical tool to tackle the hallucinations described in the news and improve conversation quality before it gets worse. Open to any ideas, critiques, references, or stories. Or tell me if this is just another ChatGPT fantasy I should expect to crash and burn on, and I should cut my losses now. Thanks for reading.
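The vote-then-escalate loop described above can be sketched in a few lines. The judge models are stubbed out as plain functions here; in a real setup each would be a call to a different LLM API (this is an illustration of the idea, not an actual CrewAI/LangGraph implementation):

```python
# Hypothetical sketch of the oversight-team idea: a primary model produces
# a draft, several judge models flag issues, and a majority of flags
# escalates the draft back to the primary model for revision.
# All model calls are stubbed; real judges would be separate LLM API calls.

from typing import Callable

Judge = Callable[[str, str], bool]  # (question, draft) -> True if the draft looks bad

def review(question: str, draft: str, judges: list[Judge]) -> tuple[bool, int]:
    """Return (needs_revision, number_of_flags) under simple majority voting."""
    flags = sum(1 for judge in judges if judge(question, draft))
    return flags > len(judges) // 2, flags

# Stub judges standing in for e.g. a factuality checker, a tone checker,
# and a logic checker, each backed by a different model in a real setup.
def factuality_judge(q, d): return "unsupported" in d
def tone_judge(q, d): return d.isupper()
def logic_judge(q, d): return "therefore" in d and "because" not in d

judges = [factuality_judge, tone_judge, logic_judge]
needs_revision, flags = review("What is 2+2?", "4, because arithmetic.", judges)
print(needs_revision, flags)  # -> False 0
```

One design question this surfaces immediately: whether a majority vote escalates, or any single flag does, changes both the cost per turn and the false-positive rate.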
2025-04-27T19:41:01
https://i.redd.it/dz3w8karkfxe1.jpeg
Reddit_wander01
i.redd.it
1970-01-01T00:00:00
0
{}
1k9cj6j
false
null
t3_1k9cj6j
/r/LocalLLaMA/comments/1k9cj6j/building_a_simple_multillm_design_to_catch/
false
false
https://a.thumbs.redditm…kQXFsb8Jhtj4.jpg
31
{'enabled': True, 'images': [{'id': 'Rkmtni4PZajIE7425swNP7ivd67nISuzauY_2oXiJNU', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?width=108&crop=smart&auto=webp&s=054f9c9c81138bc3486e582435a7a1b3e8842ff8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?width=216&crop=smart&auto=webp&s=a0f6d38a6b81f419cc1f6b7bc5205554ca8746b0', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?width=320&crop=smart&auto=webp&s=1dee2389e215531ccc37e1882efb74d51ec9d38e', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?width=640&crop=smart&auto=webp&s=21f77fd6d89c39d6a16a95956902a83c3cad55aa', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?width=960&crop=smart&auto=webp&s=b1b8a7f546b7568d9b3f31b2e7911401a8d62022', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/dz3w8karkfxe1.jpeg?auto=webp&s=37f16b235eccc483e233bd64ff6d6dae3a32e606', 'width': 1024}, 'variants': {}}]}
Introducing AInfrastructure with MCP: An open-source project I've been working on
1
Hey r/LocalLLaMA,

I wanted to share a project I've been developing for a while now that some of you might find interesting. It's called **AInfrastructure**, and it's an open-source platform that combines infrastructure monitoring with AI assistance and MCP.

# What is it?

AInfrastructure is essentially a system that lets you monitor your servers, network devices, and other infrastructure - but with a twist: you can actually chat with your devices through an AI assistant. Think of it as having a conversation with your server to check its status or make changes, rather than digging through logs or running commands.

https://preview.redd.it/rg0ppdzgofxe1.png?width=2537&format=png&auto=webp&s=e6233643a5bc9150de8dd046181a85de8f91c76e

# Core features:

* **Dashboard monitoring** for your infrastructure
* **AI chat interface** - have conversations with your devices
* **Plugin system** that lets you define custom device types
* **Standard support** for Linux and Windows machines (using Glances)

https://preview.redd.it/52uwja6lofxe1.png?width=2550&format=png&auto=webp&s=9c78d8ee9b9cf025ddd38b443095dad551cbf323

The most interesting part, in my opinion, is the plugin system. In AInfrastructure, a plugin isn't just an add-on - it's actually a complete device type definition. You can create a plugin for pretty much any device or service - routers, IoT devices, custom hardware, whatever - and define how to communicate with it.

Each plugin can define custom UI elements like buttons, forms, and other controls that are automatically rendered in the frontend. For example, if your plugin defines a "Reboot" action for a router, the UI will automatically show a reboot button when viewing that device. These UI elements are completely customizable - you can specify where they appear, what they look like, and whether they require confirmation. Once your plugin is loaded, those devices automatically become "conversational" through the AI assistant as well.

# Current state: Very early alpha

This is very much an early alpha release with plenty of rough edges:

* The system needs a complete restart after loading any plugin
* The Plugin Builder UI is just a concept mockup at this point
* There are numerous design bugs, especially in dark mode
* The AI doesn't always pass parameters correctly
* Code quality is... let's say "work in progress" (you'll find random Hungarian comments in there)

# Requirements

* It currently only works with OpenAI's models (you need your own API key)
* For standard Linux/Windows monitoring, you need to install Glances on your machines

# Why I made it

I wanted an easier way to manage my home infrastructure without having to remember specific commands or dig through different interfaces. The idea of just asking "Hey, how's my media server doing?" and getting a comprehensive answer was appealing.

# What's next?

I'm planning to add:

* A working Plugin Builder
* An actual alerts system
* Code cleanup (desperately needed)
* Ollama integration for local LLMs
* Proactive notifications from devices when something's wrong

The source code is available on GitHub if anyone wants to check it out or contribute. It's MIT licensed, so feel free to use it however you like.

I'd love to hear your thoughts, suggestions, or if anyone's interested in trying it out, despite its current rough state. I'm not trying to "sell" anything here - just sharing a project I think some folks might find useful or interesting.
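The "plugin as a complete device type definition" idea could look something like the sketch below. The class and field names here are invented for illustration; AInfrastructure's real plugin API may differ:

```python
# Hypothetical sketch of a device-type plugin in a system like the one
# described: the plugin declares how to read the device's status and which
# actions (with UI hints) it exposes. All names are invented for
# illustration and are not AInfrastructure's actual API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    label: str                   # button text rendered by the frontend
    handler: Callable[[], str]   # what to run when the button is pressed
    confirm: bool = False        # whether the UI asks for confirmation first

@dataclass
class DevicePlugin:
    device_type: str
    status: Callable[[], dict]   # polled by the dashboard
    actions: dict[str, Action] = field(default_factory=dict)

def make_router_plugin() -> DevicePlugin:
    return DevicePlugin(
        device_type="router",
        status=lambda: {"uptime_s": 123456, "clients": 7},
        actions={"reboot": Action("Reboot", lambda: "rebooting", confirm=True)},
    )

plugin = make_router_plugin()
print(plugin.actions["reboot"].label)  # -> Reboot
```

Declaring actions as data like this is what lets a frontend auto-render a confirmed "Reboot" button, and lets an AI assistant enumerate what it is allowed to do with each device.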
2025-04-27T20:02:54
https://www.reddit.com/r/LocalLLaMA/comments/1k9d1qu/introducing_ainfrastructure_with_mcp_an/
n1k0z0r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9d1qu
false
null
t3_1k9d1qu
/r/LocalLLaMA/comments/1k9d1qu/introducing_ainfrastructure_with_mcp_an/
false
false
https://b.thumbs.redditm…wi6L6oemrdVI.jpg
1
null
Help Needed: Splitting Quantized MADLAD-400 3B ONNX
4
Has anyone in the community already created these specific split MADLAD ONNX components (`embed`, `cache_initializer`) for mobile use? I don't have access to Google Colab Pro or a local machine with enough RAM (32GB+ recommended) to run the necessary ONNX manipulation scripts. Would anyone with the necessary high-RAM compute resources be willing to help run the script?
2025-04-27T20:03:14
https://www.reddit.com/r/LocalLLaMA/comments/1k9d20u/help_needed_splitting_quantized_madlad400_3b_onnx/
Away_Expression_3713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9d20u
false
null
t3_1k9d20u
/r/LocalLLaMA/comments/1k9d20u/help_needed_splitting_quantized_madlad400_3b_onnx/
false
false
self
4
null
why does apple hate llms?
0
I don't understand it, they have the capacity and the data but somehow they're acting as if llms are gonna fade away.
2025-04-27T20:24:59
https://www.reddit.com/r/LocalLLaMA/comments/1k9dk7z/why_does_apple_hate_llms/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9dk7z
false
null
t3_1k9dk7z
/r/LocalLLaMA/comments/1k9dk7z/why_does_apple_hate_llms/
false
false
self
0
null
Gemma3 performance on Ryzen AI MAX
13
Hello everyone,

I'm planning to set up a system to run large language models locally, primarily for privacy reasons, as I want to avoid cloud-based solutions. The specific models I'm most interested in for my project are Gemma 3 (12B or 27B versions, ideally Q4-QAT quantization) and Mistral Small 3.1 (in Q8 quantization).

I'm currently looking into Mini PCs equipped with AMD Ryzen AI MAX APUs. These seem like a promising balance of size, performance, and power efficiency. Before I invest, I'm trying to get a realistic idea of the performance I can expect from this type of machine. My most critical requirement is performance when using a very large context window, specifically around 32,000 tokens.

Are there any users here who are already running these models (or models of a similar size and quantization, like Mixtral Q4/Q8, etc.) on a Ryzen AI Mini PC? If so, could you please share your experiences? I would be extremely grateful for any information you can provide on:

* Your exact Mini PC model and the specific Ryzen processor it uses.
* The amount and speed of your RAM, as this is crucial for the integrated graphics (VRAM).
* The general inference performance you're getting (e.g., tokens per second), especially if you have tested performance with an extended context (if you've gone beyond the typical 4k or 8k, that information would be invaluable!).
* Which software or framework you are using (such as llama.cpp, Oobabooga, LM Studio, etc.).
* Your overall feeling about the fluidity and viability of using your machine for this specific purpose with large contexts.

I fully understand that running a specific benchmark with a 32k context might be time-consuming or difficult to arrange, so any feedback at all, even if it's not a precise 32k benchmark but simply gives an indication of the machine's ability to handle larger contexts, would be incredibly helpful in guiding my decision.

Thank you very much in advance to anyone who can share their experience!
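For anyone who does run a test, one way to get a comparable tokens-per-second number from any OpenAI-compatible server is to time a non-streaming completion and divide by the reported completion tokens. The endpoint URL and model name below are placeholders; this is a generic sketch, not a tool from any of the projects mentioned:

```python
# Hypothetical benchmark sketch: time one completion against an
# OpenAI-compatible server and report generation speed. The URL and model
# name are placeholders; point them at your own llama.cpp / LM Studio /
# Ollama endpoint. Uses only the standard library.

import json
import time
import urllib.request

def tokens_per_second(completion_tokens: int, elapsed_s: float) -> float:
    """Generation speed; guard against a zero-length timing window."""
    return completion_tokens / elapsed_s if elapsed_s > 0 else 0.0

def bench(base_url: str, model: str, prompt: str, max_tokens: int = 128) -> float:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }).encode()
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    elapsed = time.perf_counter() - start
    return tokens_per_second(body["usage"]["completion_tokens"], elapsed)

# Example (requires a running server):
# print(bench("http://localhost:8080", "gemma-3-27b-it-q4", "Hello"))
```

Note this number lumps prompt processing and generation together; for the 32k-context question, running it once with a short prompt and once with a ~32k-token prompt shows both effects.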
2025-04-27T20:50:55
https://www.reddit.com/r/LocalLLaMA/comments/1k9e5p0/gemma3_performance_on_ryzen_ai_max/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9e5p0
false
null
t3_1k9e5p0
/r/LocalLLaMA/comments/1k9e5p0/gemma3_performance_on_ryzen_ai_max/
false
false
self
13
null
TabbyAPI error after new installation
2
Friends, please help with installing the current TabbyAPI with exllama2.9. A fresh installation gives this:

```
(tabby-api) serge@box:/home/text-generation/servers/tabby-api$ ./start.sh
It looks like you're in a conda environment. Skipping venv check.
pip 25.0 from /home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/pip (python 3.12)
Loaded your saved preferences from `start_options.json`
Traceback (most recent call last):
  File "/home/text-generation/servers/tabby-api/start.py", line 274, in <module>
    from main import entrypoint
  File "/home/text-generation/servers/tabby-api/main.py", line 12, in <module>
    from common import gen_logging, sampling, model
  File "/home/text-generation/servers/tabby-api/common/model.py", line 15, in <module>
    from backends.base_model_container import BaseModelContainer
  File "/home/text-generation/servers/tabby-api/backends/base_model_container.py", line 13, in <module>
    from common.multimodal import MultimodalEmbeddingWrapper
  File "/home/text-generation/servers/tabby-api/common/multimodal.py", line 1, in <module>
    from backends.exllamav2.vision import get_image_embedding
  File "/home/text-generation/servers/tabby-api/backends/exllamav2/vision.py", line 21, in <module>
    from exllamav2.generator import ExLlamaV2MMEmbedding
  File "/home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/exllamav2/__init__.py", line 3, in <module>
    from exllamav2.model import ExLlamaV2
  File "/home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/exllamav2/model.py", line 33, in <module>
    from exllamav2.config import ExLlamaV2Config
  File "/home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/exllamav2/config.py", line 5, in <module>
    from exllamav2.stloader import STFile, cleanup_stfiles
  File "/home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/exllamav2/stloader.py", line 5, in <module>
    from exllamav2.ext import none_tensor, exllamav2_ext as ext_c
  File "/home/serge/.miniconda/envs/tabby-api/lib/python3.12/site-packages/exllamav2/ext.py", line 291, in <module>
    ext_c = exllamav2_ext
            ^^^^^^^^^^^^^
NameError: name 'exllamav2_ext' is not defined
```
2025-04-27T21:13:57
https://www.reddit.com/r/LocalLLaMA/comments/1k9eopy/tabbyapi_error_after_new_installation/
apel-sin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9eopy
false
null
t3_1k9eopy
/r/LocalLLaMA/comments/1k9eopy/tabbyapi_error_after_new_installation/
false
false
self
2
null
Invisible AI to Cheat
0
Thoughts?
2025-04-27T21:22:58
https://cluely.com
Available_Ad_5360
cluely.com
1970-01-01T00:00:00
0
{}
1k9ew91
false
null
t3_1k9ew91
/r/LocalLLaMA/comments/1k9ew91/invisible_ai_to_cheat/
false
false
https://b.thumbs.redditm…gdgwn8oAJAwc.jpg
0
{'enabled': False, 'images': [{'id': 'WiljBCCbCb6pkQABtlj137886lsKBaSWOWAc80pE520', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=108&crop=smart&auto=webp&s=037abf6c421d8f1e0812560bc2f32cadc4b487c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=216&crop=smart&auto=webp&s=aa55cff2c4a26630554d8fb1baea886e060c5f02', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=320&crop=smart&auto=webp&s=e08219b20c82deb1cee6f272065114a91b8181b3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=640&crop=smart&auto=webp&s=00a261f881abe9b52e689883527ebfb16978379a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=960&crop=smart&auto=webp&s=c125cff07f79d4a3144f4dc2e867984395802f51', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?width=1080&crop=smart&auto=webp&s=f72c16f212fae7fc8e2592062e2e6def026193f4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/kkT1mLXFYfYGOkpxUNh-7LxnYcpgwgXDUvw3Co4qWCg.jpg?auto=webp&s=597cb6d5193da609371bb4025a6864d046e8f401', 'width': 1280}, 'variants': {}}]}
"Prompts for checking protection against sexual content
0
[removed]
2025-04-27T21:34:23
https://www.reddit.com/r/LocalLLaMA/comments/1k9f5ip/prompts_for_checking_protection_against_sexual/
Western_Drawing4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9f5ip
false
null
t3_1k9f5ip
/r/LocalLLaMA/comments/1k9f5ip/prompts_for_checking_protection_against_sexual/
false
false
self
0
null
Carnegie Mellon staffed a fake company with AI agents. It was a total disaster.
1
[removed]
2025-04-27T22:36:18
https://www.reddit.com/r/LocalLLaMA/comments/1k9gif9/carnegie_mellon_staffed_a_fake_company_with_ai/
robkkni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9gif9
false
null
t3_1k9gif9
/r/LocalLLaMA/comments/1k9gif9/carnegie_mellon_staffed_a_fake_company_with_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'bUlgn5E8rhBrsZcrmR5NEPnTaz8RR5aEfsfft1jmcKE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=108&crop=smart&auto=webp&s=469f3de146b201eada2354d09367c907125e8b5a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=216&crop=smart&auto=webp&s=00a6a310a214429eadcaed8654b06e60afdcbdc1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=320&crop=smart&auto=webp&s=0643ea60efc2a1ddbc670fce9316e6119ad0fe66', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=640&crop=smart&auto=webp&s=c6eaa3feeb7e2ff78dd6cd22228585e468d6bb75', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=960&crop=smart&auto=webp&s=9c790c091c65fbc02b0e65ca43eaa2f907c56d95', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?width=1080&crop=smart&auto=webp&s=c4b09fef4567e14f385c9a247bf1a2056755e85b', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/i8hJkY_Qy44kCMtiiyf3e7pCYvnTRfZojLKySJhssBE.jpg?auto=webp&s=cf48ef8de3ced0c5abc91fa1417c0ba93f58fbb3', 'width': 1200}, 'variants': {}}]}
Dockerized OpenAI compatible TTS API for Dia 1.6b
32
[https://github.com/phildougherty/dia\_openai](https://github.com/phildougherty/dia_openai)
2025-04-27T23:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1k9hid1/dockerized_openai_compatible_tts_api_for_dia_16b/
RandomRobot01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9hid1
false
null
t3_1k9hid1
/r/LocalLLaMA/comments/1k9hid1/dockerized_openai_compatible_tts_api_for_dia_16b/
false
false
self
32
{'enabled': False, 'images': [{'id': '_ZtQqtUKigSdWLj2AWAurxAtecpqsk9mkFD3A3ZKvV0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=108&crop=smart&auto=webp&s=11d8a69c094f6c8aa593657e4c812457de3d756d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=216&crop=smart&auto=webp&s=406a7046691cc0eddc3689498416d4ce5410843f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=320&crop=smart&auto=webp&s=2bbb7321cd9d8f35315d7138f20b44014616454c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=640&crop=smart&auto=webp&s=50d51a7182a34dbd714d661d09c711751162dd36', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=960&crop=smart&auto=webp&s=10869985703c6707c2b86dd691bc60ce645abb08', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?width=1080&crop=smart&auto=webp&s=db8c42331a1e15b12070bd6ccda01cc802de3207', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N6g-xWzSRrJ5ij0QbBH_866zTpX_rMdilgDkc_LVhAg.jpg?auto=webp&s=48a23705ae60a22b65f14ea533585cdda1a3f0f3', 'width': 1200}, 'variants': {}}]}
Open Source framework that will automate your work
0
If you’ve ever tried building an LLM based chatbot, you know how fast things can turn messy with hallucinations, drift, and random contamination creeping into the convo. I just found Parlant. It's open-source and actually focuses on hallucination detection in LLMs before the agent spits something dumb out. They even structure the agent’s reasoning like a smarter version of Chain of Thought so it doesn’t lose the plot. If you're trying to build an AI agent that doesn’t crash and burn on long convos, then it’s worth checking out.
2025-04-27T23:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1k9hqhc/open_source_framework_that_will_automate_your_work/
Work_for_burritos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9hqhc
false
null
t3_1k9hqhc
/r/LocalLLaMA/comments/1k9hqhc/open_source_framework_that_will_automate_your_work/
false
false
self
0
null
Advanced Data Analysis (Code Execution) now in Open WebUI!
107
2025-04-27T23:38:47
https://v.redd.it/aqu9vci4rgxe1
random-tomato
v.redd.it
1970-01-01T00:00:00
0
{}
1k9hsve
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/aqu9vci4rgxe1/DASHPlaylist.mpd?a=1748389142%2CMDQyN2I0YjM5MzQ4ZGRiNzdkYWVjMjBiMDk1NzhmNmE2ZDg5YTRkYmQ4MTZkZjAzM2IxMGU4NjUzOTAzNmU4Nw%3D%3D&v=1&f=sd', 'duration': 32, 'fallback_url': 'https://v.redd.it/aqu9vci4rgxe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1020, 'hls_url': 'https://v.redd.it/aqu9vci4rgxe1/HLSPlaylist.m3u8?a=1748389142%2CYWRiNmVhNGE1MWU3NzkwYzI1YTE4NDgzMDNmNTAyYjMxYWE2YWFjOTg0ZTZkMjE4OWQzYjEwNjdlNDhiN2JhZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aqu9vci4rgxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k9hsve
/r/LocalLLaMA/comments/1k9hsve/advanced_data_analysis_code_execution_now_in_open/
false
false
https://external-preview…882de424ceed86e8
107
{'enabled': False, 'images': [{'id': 'ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=108&crop=smart&format=pjpg&auto=webp&s=736269a4d09ed2f51eca052babb35fcf88c2afd7', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=216&crop=smart&format=pjpg&auto=webp&s=5b3b2c30fa504e1c5a84e17ef682eaede371ef2c', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=320&crop=smart&format=pjpg&auto=webp&s=25880e26d9740337efc647a13f5e726d25ae03b6', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=640&crop=smart&format=pjpg&auto=webp&s=e7e81280c7cfbef203a5258241911d846f3b080e', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b05b298ecaab7133acb5fdd2dcee11ae8b2fbed', 'width': 960}, {'height': 574, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ad89934fbe34210d7bf54667bde31018057ad6d9', 'width': 1080}], 'source': {'height': 1508, 'url': 'https://external-preview.redd.it/ZDAzbnlmaTRyZ3hlMezfxpiMvLksXsJNSj-UkcBCxPEzNDnqylU4ucW7oNbq.png?format=pjpg&auto=webp&s=28a610ba6b51d86801892cbaaf0cccbca7dfbe30', 'width': 2836}, 'variants': {}}]}
What graphics card should I buy? Which llama/qwen (etc.) model should I choose? I'm a bit lost...
1
[removed]
2025-04-28T00:01:49
https://www.reddit.com/r/LocalLLaMA/comments/1k9i9ag/what_graphics_card_should_i_buy_which_llamaqwent/
ed0c
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9i9ag
false
null
t3_1k9i9ag
/r/LocalLLaMA/comments/1k9i9ag/what_graphics_card_should_i_buy_which_llamaqwent/
false
false
self
1
null
HumvaAI’s Video Avatars, What’s Powering This Thing?
1
[removed]
2025-04-28T00:07:02
https://www.reddit.com/r/LocalLLaMA/comments/1k9id10/humvaais_video_avatars_whats_powering_this_thing/
Loud-Front-6917
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9id10
false
null
t3_1k9id10
/r/LocalLLaMA/comments/1k9id10/humvaais_video_avatars_whats_powering_this_thing/
false
false
self
1
null
OpenAI Codex let me down — so I built my own assistant called Codey
1
[removed]
2025-04-28T00:08:24
https://www.reddit.com/r/LocalLLaMA/comments/1k9idxe/openai_codex_let_me_down_so_i_built_my_own/
Varad13Plays
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9idxe
false
null
t3_1k9idxe
/r/LocalLLaMA/comments/1k9idxe/openai_codex_let_me_down_so_i_built_my_own/
false
false
self
1
{'enabled': False, 'images': [{'id': '_MHpxX2DdfFnXq9FWK35dBPH94W18FTQ4YlHt92ES9A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=108&crop=smart&auto=webp&s=159e6c2df12a53ebabd7218b130de48f9da6351a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=216&crop=smart&auto=webp&s=259104185d4bbb30566f3fab8e910bb36626e984', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=320&crop=smart&auto=webp&s=0da8e8cc825737e521db1da35f9dd7597ed65222', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=640&crop=smart&auto=webp&s=ad96a1e2036ca3790aba0330ab5f4155fa749941', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=960&crop=smart&auto=webp&s=53c9d5d8d74888563a89972a21a544c4dbd29223', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?width=1080&crop=smart&auto=webp&s=e9c7101f13513f7875cbfcf47ee620329b78ad16', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_ljaNhBUkKUUm1HFTjDuzcTJsKb4WRsXJh3AxITl2xY.jpg?auto=webp&s=74e8ee5d7ebda6fc4944fe8a0ec80a4fb4a494bc', 'width': 1200}, 'variants': {}}]}
Self run the AI model
1
[removed]
2025-04-28T00:44:40
https://www.reddit.com/r/LocalLLaMA/comments/1k9j2wr/self_run_the_ai_model/
Sufficient_Vee445
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9j2wr
false
null
t3_1k9j2wr
/r/LocalLLaMA/comments/1k9j2wr/self_run_the_ai_model/
false
false
self
1
null
[Feedback Request] Built my own LLM architecture from scratch: "ava-llm" — would love your thoughts
15
Hi r/LocalLLaMA!

I've been developing [Ava-LLM](https://github.com/Kuduxaaa/ava-llm), a transformer-based framework designed to create language models across different scales (100M to 100B parameters). While still experimental, I believe its architecture might interest those working on custom model implementations.

# What Makes This Different?

* **Multi-Scale Foundation**: Pre-configured architectures optimized for specific use cases:
  * **Tiny (100M–500M)**: Memory-constrained environments (IoT, edge devices)
  * **Mid-Scale (1B–7B)**: Conversation-focused models
  * **Large (13B+)**: Research-scale systems
* **Hardware-Aware Design**: Layer configurations balancing depth/width for consumer GPUs.
* **Dynamic Context Handling**: Rotary embeddings with automatic NTK scaling.
* **Modern Attention Patterns**: Native support for grouped-query attention.

# Current Implementation

The core architecture is a decoder-only transformer with:

* Custom RMSNorm layers
* Sliding Window Attention (WIP - still in development)
* Adaptive gradient clipping
* Conversation-optimized tokenization

# Why Share This?

I'm looking to collaborate with others who enjoy model architecture design, especially around:

* Feedback on layer normalization strategies
* Experience with deep network stability (>80 layers)
* Best practices for mixed-precision training
* Ideas for efficient parameter allocation

If you're interested in model scaling challenges or want to experiment with non-standard architectures, I'd love your input! The project is MIT licensed at [Kuduxaaa/ava-llm](https://github.com/Kuduxaaa/ava-llm). Detailed configuration presets can be found inside the `/config` directory.

🚨 **Quick note:** I will be super happy if someone trains my model! 🚀 I even sent a pull request to `huggingface/transformers`, but they usually don't accept models without a pretrained checkpoint. So, if you're interested, I'd be extremely thankful! 🙏

Thanks for reading and for any kind of feedback! 🧠
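For readers unfamiliar with RMSNorm (one of the components mentioned), here is a minimal, dependency-free sketch of the standard formulation, y_i = x_i / sqrt(mean(x^2) + eps) * g_i. This is a generic illustration, not ava-llm's actual implementation:

```python
# Minimal sketch of RMSNorm: normalize a vector by its root-mean-square,
# then scale by a learned per-dimension gain. Generic illustration only,
# not taken from the ava-llm codebase.

import math

def rms_norm(x: list[float], gain: list[float], eps: float = 1e-6) -> list[float]:
    """Normalize x by its root-mean-square, then apply a learned gain."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * g for v, g in zip(x, gain)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
print(out)  # roughly [0.8485, 1.1314], since the RMS of [3, 4] is about 3.5355
```

Compared with LayerNorm, RMSNorm skips the mean-subtraction and bias, which is part of why it shows up in most recent decoder-only stacks.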
2025-04-28T01:02:29
https://www.reddit.com/r/LocalLLaMA/comments/1k9jf60/feedback_request_built_my_own_llm_architecture/
Kuduxaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9jf60
false
null
t3_1k9jf60
/r/LocalLLaMA/comments/1k9jf60/feedback_request_built_my_own_llm_architecture/
false
false
self
15
{'enabled': False, 'images': [{'id': 'rxS7pHBahj40BHQKyWhSDoO3PuXfjlNcXyfNraychHo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=108&crop=smart&auto=webp&s=8bf00d9b56a9bee84e31208859006d809d44a83a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=216&crop=smart&auto=webp&s=2c30657b8b6dc5119f9230feca849df630dfb6c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=320&crop=smart&auto=webp&s=23cea67c1c26864d16bcecf0d69d86541504004e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=640&crop=smart&auto=webp&s=51c5f8e267a8d13ef959892e9a90f7fbcde03633', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=960&crop=smart&auto=webp&s=3dcc867d96de2ccddabe9e750b1d5b8280c26eb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?width=1080&crop=smart&auto=webp&s=e07b73634249c5609e7fbe77122decf3aad4a608', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4Dxph3B0Euie5ptCIWvg2mMmVQBsO2dzSsFZ0kCLT5M.jpg?auto=webp&s=842bae5f8c2e89f3c542e62d28064659d6af2b73', 'width': 1200}, 'variants': {}}]}
Top open chart-understanding model up to 8B that performs on par with much larger models. Try it
12
This model is not only the state-of-the-art in chart understanding for models up to 8B, but also outperforms much larger models in its ability to analyze complex charts and infographics. You can try the model at the playground here: [https://playground.bespokelabs.ai/minichart](https://playground.bespokelabs.ai/minichart)
2025-04-28T02:15:11
https://v.redd.it/lpsexwz7hhxe1
Ambitious_Anybody855
v.redd.it
1970-01-01T00:00:00
0
{}
1k9ks67
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lpsexwz7hhxe1/DASHPlaylist.mpd?a=1748398523%2CNTU4YzMzNDE3NTgxNTNkMTBjZWY2ZGE4OTM4ODc0ZWNiZDEwZjRkNmI4NTJkMzI3YTk2ZTY5NWFlNmE3N2MwZg%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/lpsexwz7hhxe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/lpsexwz7hhxe1/HLSPlaylist.m3u8?a=1748398523%2CZGJiZWVmNDdkNjAyYzM3MmRkOTZkMTU4YjFjYzZhNzc4MmY0OWFmZGZiZWU0NmNmZGRlZWU5ZTZjNjBjODcwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lpsexwz7hhxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1196}}
t3_1k9ks67
/r/LocalLLaMA/comments/1k9ks67/top_open_chartunderstanding_model_upto_8b_and/
false
false
https://external-preview…d9882dc316e7bbaa
12
{'enabled': False, 'images': [{'id': 'OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=ac3be9cbdbad0a7654b5cba8b520ed2b2ae45468', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=39600c11cca40d00e51f54e339e81df082d88729', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=e8d434ff9e59593e160ce6477927c2b2b03c4e5c', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=fd747a9c22b3bf003d8be9758b455991f32bc03e', 'width': 640}, {'height': 577, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=5e853c1dce023d44b59b6cd414cfade57814edf9', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c34feb8e409f62e7b3919d6342319b36df1e5156', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OHMyb3Z1ejdoaHhlMRrAHWjEyyAZvooPX2CyzKX3Fn1Pt-z_VmVbUpvdpQSZ.png?format=pjpg&auto=webp&s=7f534ee2570039d236ffd5a4edfabf7d19ffaf14', 'width': 1196}, 'variants': {}}]}
Is there any subreddit for buy/sell PC’s ready for llama fine tuning locally?
1
[removed]
2025-04-28T02:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1k9kyhu/is_there_any_subreddit_for_buysell_pcs_ready_for/
Middle_Investment_81
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9kyhu
false
null
t3_1k9kyhu
/r/LocalLLaMA/comments/1k9kyhu/is_there_any_subreddit_for_buysell_pcs_ready_for/
false
false
self
1
null
Running Llama 4 Maverick (400b) on an "e-waste" DDR3 server
108
Was pretty amazed how well Llama 4 Maverick runs on an "e-waste" DDR3 server... Specs: Dual e5-2690 v2 ($10/each) Random Supermicro board ($30) 256GB of DDR3 RDIMMs ($80) Unsloth's dynamic 4-bit GGUF + various 16GB+ GPUs. With no GPU, CPU only: prompt eval time = 133029.33 ms / 1616 tokens ( 82.32 ms per token, **12.15 tokens per second**) eval time = 104802.34 ms / 325 tokens ( 322.47 ms per token, **3.10 tokens per second**) total time = 237831.68 ms / 1941 tokens For a 12-year-old system without a GPU it's honestly pretty amazing, but we can do better... With a pair of P102-100 mining cards: prompt eval time = 337099.15 ms / 1616 tokens ( 208.60 ms per token, **4.79 tokens per second**) eval time = 25617.15 ms / 261 tokens ( 98.15 ms per token, **10.19 tokens per second**) total time = 362716.31 ms / 1877 tokens Not great; the PCIe 1.0 x4 interface kills prompt processing. With a P100 16GB: prompt eval time = 77918.04 ms / 1616 tokens ( 48.22 ms per token, **20.74 tokens per second**) eval time = 34497.33 ms / 327 tokens ( 105.50 ms per token, **9.48 tokens per second**) total time = 112415.38 ms / 1943 tokens Similar to the mining GPUs, just with a proper PCIe 3.0 x16 interface and therefore decent prompt processing. With a V100: prompt eval time = 65887.49 ms / 1616 tokens ( 40.77 ms per token, **24.53 tokens per second**) eval time = 16487.70 ms / 283 tokens ( 58.26 ms per token, **17.16 tokens per second**) total time = 82375.19 ms / 1899 tokens Decent step up all around, somehow still not CPU/DRAM bottlenecked. With a 3090: prompt eval time = 66631.43 ms / 1616 tokens ( 41.23 ms per token, **24.25 tokens per second**) eval time = 16945.47 ms / 288 tokens ( 58.84 ms per token, **17.00 tokens per second**) total time = 83576.90 ms / 1904 tokens Looks like we are finally CPU/DRAM bottlenecked at this level. 
Command: ./llama-server -m Maverick.gguf -c 4000 --numa distribute -ngl 99 --override-tensor ".*ffn_.*_exps.*=CPU" -fa -ctk q8_0 -ctv q8_0 -ub 2048 For those of you curious, this system only has 102GB/s of system memory bandwidth. A big part of why this works so well is that the experts on Maverick work out to only about 3B parameters each. So if you offload all the static/shared parts of the model to a GPU, the CPU only has to process ~3B parameters per token (about 2GB); the GPU does the rest. https://preview.redd.it/zj28nb69lhxe1.jpg?width=5697&format=pjpg&auto=webp&s=0b74dc13410b4717bfa366df391976b224eb95d9
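The bandwidth bottleneck can be sketched with quick back-of-envelope arithmetic. The figures below are the post's own numbers (~3B active expert parameters per token, 102 GB/s of system memory bandwidth); the ~4.5 bits/weight for a dynamic 4-bit quant is an assumed average, and the result is an idealized ceiling, not a prediction:

```python
# Back-of-envelope decode-speed bound for CPU-offloaded MoE experts.
GB = 1e9

def moe_cpu_decode_bound(active_expert_params_b: float,
                         bits_per_weight: float,
                         mem_bandwidth_gbs: float) -> float:
    """Upper bound on tokens/s when every active expert weight must be
    streamed from system RAM once per generated token."""
    bytes_per_token = active_expert_params_b * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gbs * GB / bytes_per_token

bound = moe_cpu_decode_bound(3.0, 4.5, 102)
print(f"theoretical ceiling: {bound:.1f} tok/s")  # measured above: ~17 tok/s with a 3090
```

The measured ~17 t/s landing well under the ~60 t/s ideal is normal: the bound ignores KV-cache reads, NUMA effects, and compute time.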
2025-04-28T02:48:28
https://www.reddit.com/r/LocalLLaMA/comments/1k9le0f/running_llama_4_maverick_400b_on_an_ewaste_ddr3/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9le0f
false
null
t3_1k9le0f
/r/LocalLLaMA/comments/1k9le0f/running_llama_4_maverick_400b_on_an_ewaste_ddr3/
false
false
https://b.thumbs.redditm…ZpNij-J70A4c.jpg
108
null
What is my best option for an API to use for free, completely uncensored, and unlimited?
0
I’ve been trying out a bunch of local LLMs with Koboldcpp by downloading them from LM Studio and then using them with Koboldcpp in SillyTavern, but almost none of them have worked well; the only ones that worked remotely decently took forever (35b and 40b models). I currently run a 16GB VRAM setup with a 9070 XT and 32GB of DDR5 RAM. I’m practically brand new to all this stuff; I really have no clue what I’m doing except for the stuff I’ve been looking up. My favorites (despite them taking absolutely forever) were Midnight Miqu 70b and Command R v01 35b, though Command R v01 wasn’t exactly great, Midnight Miqu being much better. All the other ones I tried (Tiefighter 13b Q5.1, Manticore 13b Chat Pyg, 3.1 Dark Reasoning Super Nova RP Hermes r1 Uncensored 8b, glacier o1, and Estopia 13b) either formatted the messages horribly, had horrible repetition issues, wrote nonsensical text, or just wrote bad messages overall, such as producing only dialogue. I’m wondering if I should just suck it up and deal with the long waiting times, whether I’m doing something wrong with the smaller LLMs, or whether there is some other alternative I could use. I’m trying to use this as an alternative to JanitorAI, but right now, JanitorAI not only seems much simpler and less tedious, but also generates better messages more efficiently. Am I the problem, is there some alternative API I should use, or should I just deal with the long waiting times, since that seems to be the only way I can get half-decent responses?
2025-04-28T03:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1k9lqpu/what_is_my_best_option_for_an_api_to_use_for_free/
EmJay96024
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9lqpu
false
null
t3_1k9lqpu
/r/LocalLLaMA/comments/1k9lqpu/what_is_my_best_option_for_an_api_to_use_for_free/
false
false
self
0
null
Need suggestions on hosting LLM on VPS
1
[removed]
2025-04-28T03:11:30
https://www.reddit.com/r/LocalLLaMA/comments/1k9lsxz/need_suggestions_on_hosting_llm_on_vps/
c-h-a-n-d-r-u
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9lsxz
false
null
t3_1k9lsxz
/r/LocalLLaMA/comments/1k9lsxz/need_suggestions_on_hosting_llm_on_vps/
false
false
self
1
null
Testing DeepSeek locally, what are the differences between all the models?
2
When going to download DeepSeek-R1 from ollama, there are different models: 1.5b 7b 8b 1.5b-qwen-distill-fp16 1.5b-qwen-distill-q4_K_M 1.5b-qwen-distill-q8 7b-qwen-distill-fp16 7b-qwen-distill-q4_K_M etc.. I know some basic concepts, like: the b is for "how many billion parameters" the model has, the q is for the quantization level (lower-precision weights for efficient memory/CPU usage), and fp is for floating-point precision. So, nice, I get "some" of the keywords. Now what? How do I choose one of them x_x?
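A practical rule of thumb for choosing: file size (and roughly the RAM/VRAM needed for weights) is parameters × bits-per-weight / 8. A small sketch, where the bits-per-weight figures are approximate averages for each quant type, not exact values:

```python
# Rough GGUF size estimate: params (in billions) * avg bits per weight / 8 -> GB.
# Bits-per-weight values here are approximate averages, not exact.
BITS = {"fp16": 16.0, "q8_0": 8.5, "q4_K_M": 4.8}

def approx_size_gb(params_billions: float, quant: str) -> float:
    return params_billions * BITS[quant] / 8

for quant in BITS:
    print(f"7b-{quant}: ~{approx_size_gb(7, quant):.1f} GB")
```

Pick the largest parameter count whose quantized size (plus a few GB for context) fits your memory; q4_K_M is the usual quality/size sweet spot, fp16 only if you have room to spare.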
2025-04-28T03:21:37
https://www.reddit.com/r/LocalLLaMA/comments/1k9lz9f/testing_deepseek_locally_what_are_the_differences/
lcjury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9lz9f
false
null
t3_1k9lz9f
/r/LocalLLaMA/comments/1k9lz9f/testing_deepseek_locally_what_are_the_differences/
false
false
self
2
null
Why you should run AI locally: OpenAI is psychologically manipulating their users via ChatGPT.
550
The current ChatGPT debacle (look at /r/OpenAI ) is a good example of what can happen if AI misbehaves. ChatGPT is now blatantly just sucking up to its users in order to boost their ego. It’s just trying to tell users what they want to hear, with no criticism. I have a friend who’s going through relationship issues and asking ChatGPT for help. Historically, ChatGPT is actually pretty good at that, but now it just tells them that whatever negative thoughts they have are correct and they should break up. It’d be funny if it weren’t tragic. This is like crack cocaine to narcissists who just want their thoughts validated.
2025-04-28T03:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1k9mebu/why_you_should_run_ai_locally_openai_is/
DepthHour1669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9mebu
false
null
t3_1k9mebu
/r/LocalLLaMA/comments/1k9mebu/why_you_should_run_ai_locally_openai_is/
false
false
self
550
null
Looks like Qwen 3 will have a 256k context?
256
2025-04-28T03:48:24
https://i.redd.it/1nos591czhxe1.jpeg
glowcialist
i.redd.it
1970-01-01T00:00:00
0
{}
1k9mgbv
false
null
t3_1k9mgbv
/r/LocalLLaMA/comments/1k9mgbv/looks_like_qwen_3_will_have_a_256k_context/
false
false
https://b.thumbs.redditm…pkWiqoUy-sLI.jpg
256
{'enabled': True, 'images': [{'id': 'rLeXvGcvVAaVMwzrC9S8i1DnjGdG2YjNDFgzXE1dOu4', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=108&crop=smart&auto=webp&s=6e8d12bdaeb09bb622bcb8677d1d48bb9ae1963b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=216&crop=smart&auto=webp&s=c7b616a346376f705e26b6436f584a5f8fd506d7', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=320&crop=smart&auto=webp&s=c5eb0126a655c8e3e9ec3274ee7ff0bc11a3f868', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=640&crop=smart&auto=webp&s=3a4bb4944c00bde46f782de410f83eff9b8e3411', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=960&crop=smart&auto=webp&s=bd3c26c9aff52f076174959c26f3db5cfc31cee2', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?width=1080&crop=smart&auto=webp&s=7bd46bdbf052c64f60d56c24f77fcc2d41e685a3', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/1nos591czhxe1.jpeg?auto=webp&s=bad43ab295c6454145fa21b663519e936587704d', 'width': 2048}, 'variants': {}}]}
BitNet v2: Native 4-bit Activations with Hadamard Transformation for 1-bit LLMs
77
2025-04-28T03:53:48
https://arxiv.org/abs/2504.18415
TKGaming_11
arxiv.org
1970-01-01T00:00:00
0
{}
1k9mjov
false
null
t3_1k9mjov
/r/LocalLLaMA/comments/1k9mjov/bitnet_v2_native_4bit_activations_with_hadamard/
false
false
default
77
null
What are your thoughts on Qwq 32B and how to go about fine tuning this model ?
3
What are your thoughts on QwQ 32B, and how would you go about fine-tuning this model? I’m trying to figure out how to fine-tune it and how much VRAM that would take. Any thoughts and opinions? I basically want to fine-tune a reasoning model.
2025-04-28T03:58:33
https://www.reddit.com/r/LocalLLaMA/comments/1k9mmlp/what_are_your_thoughts_on_qwq_32b_and_how_to_go/
Basic-Pay-9535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9mmlp
false
null
t3_1k9mmlp
/r/LocalLLaMA/comments/1k9mmlp/what_are_your_thoughts_on_qwq_32b_and_how_to_go/
false
false
self
3
null
Built a Tiny Offline Linux Tutor Using Phi-2 + ChromaDB on an Old ThinkPad
20
Last year, I repurposed an old laptop into a simple home server. Linux skills? Just the basics: `cd`, `ls`, `mkdir`, `touch`. Nothing too fancy. As things got more complex, I found myself constantly **copy-pasting terminal commands** from ChatGPT without really understanding them. So I built a **tiny, offline Linux tutor**: * Runs locally with **Phi-2** (2.7B model, textbook training) * Uses **MiniLM embeddings** to vectorize Linux textbooks and TLDR examples * Stores everything in a local **ChromaDB** vector store * When I run a command, it fetches relevant knowledge and feeds it into Phi-2 for a clear explanation. **No internet. No API fees. No cloud.** Just a decade-old ThinkPad and some lightweight models. 🛠️ *Full build story + repo here:* 👉 [https://www.rafaelviana.io/posts/linux-tutor](https://www.rafaelviana.io/posts/linux-tutor)
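The retrieval step in a pipeline like this boils down to nearest-neighbor search over embeddings. A dependency-free toy sketch of the retrieve-then-prompt flow, where hand-made 3-d vectors stand in for MiniLM embeddings and the assembled prompt stands in for the call into Phi-2:

```python
import math

# Toy corpus: in the real pipeline these vectors come from MiniLM and
# live in ChromaDB; here they are hand-made 3-d stand-ins.
docs = {
    "mkdir -p creates nested directories in one call": [0.9, 0.1, 0.0],
    "ls -la lists all files including hidden ones":    [0.1, 0.9, 0.0],
    "touch updates a file's timestamp or creates it":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    # Rank documents by cosine similarity to the query vector.
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)[:k]

context = retrieve([0.85, 0.15, 0.0])  # embedding of e.g. "how do I make folders?"
prompt = f"Context: {context[0]}\nExplain the command simply."
print(prompt)
```

The real system does exactly this shape of work, just with 384-d MiniLM vectors, ChromaDB handling the storage and ranking, and Phi-2 consuming the prompt.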
2025-04-28T04:36:04
https://www.reddit.com/r/LocalLLaMA/comments/1k9n9lq/built_a_tiny_offline_linux_tutor_using_phi2/
IntelligentHope9866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9n9lq
false
null
t3_1k9n9lq
/r/LocalLLaMA/comments/1k9n9lq/built_a_tiny_offline_linux_tutor_using_phi2/
false
false
self
20
{'enabled': False, 'images': [{'id': 'tf6QEQ4j2Z_ucBTaLX7MSG4T5zjlkwzQUSfld0sADKQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=108&crop=smart&auto=webp&s=fd1b1b020d6c161943af8b3a784f7891cb486313', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=216&crop=smart&auto=webp&s=d7ee9406cc3d9229fd07da57a8c97e75431f303c', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=320&crop=smart&auto=webp&s=0b8073ed363df425626c367cf17d6434ff11e08f', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=640&crop=smart&auto=webp&s=3705b58ce4d8d8bd77618402c7105f483ab8f68c', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=960&crop=smart&auto=webp&s=c356e906296278a640b42bc59eefd22d7931762d', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?width=1080&crop=smart&auto=webp&s=29d0c436393d98274e9aecd6d562f8a43f01c95a', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/aUzn56LUL_YtYh2o6NluMS5rCtq-mmlNXpBOx6IQeIc.jpg?auto=webp&s=8241120580efa36225ecee795c14579352a2452c', 'width': 1536}, 'variants': {}}]}
Stepfun-AI releases Step1X-Edit image editor model
93
Open source image editor that performs impressively on various genuine user instructions - Combines Multimodal LLM (Qwen VL) with Diffusion transformers to process and perform edit instructions - Apache 2.0 license Model: https://huggingface.co/stepfun-ai/Step1X-Edit Demo: https://huggingface.co/spaces/stepfun-ai/Step1X-Edit
2025-04-28T04:36:57
https://i.redd.it/vtod5vfd8ixe1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1k9na4f
false
null
t3_1k9na4f
/r/LocalLLaMA/comments/1k9na4f/stepfunai_releases_step1xedit_image_editor_model/
false
false
https://a.thumbs.redditm…0YGB-mnjj0s8.jpg
93
{'enabled': True, 'images': [{'id': 'RKj5EfqXadxbcWFXqo9AeSbZrsIqvCFaiZ2da7r7LUc', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=108&crop=smart&auto=webp&s=26af44f7bac3924cf73962e19b8e1028a2b030a7', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=216&crop=smart&auto=webp&s=0afeb91a3cd5a5f863eadff29a85b238a42a22dc', 'width': 216}, {'height': 289, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=320&crop=smart&auto=webp&s=93c91bbbd83d931114303babf5ea571577938d8c', 'width': 320}, {'height': 578, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=640&crop=smart&auto=webp&s=51e97c108c45500fcf05492ac259c5c2c7cfaf78', 'width': 640}, {'height': 868, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=960&crop=smart&auto=webp&s=64ffa2c867aae06d220e05b348aede5146d0fe75', 'width': 960}, {'height': 976, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?width=1080&crop=smart&auto=webp&s=89da4b1b203ad614beb92f55341910733cf7a909', 'width': 1080}], 'source': {'height': 1087, 'url': 'https://preview.redd.it/vtod5vfd8ixe1.jpeg?auto=webp&s=9144925f91554f90f0584c3e58e432d2e85f530a', 'width': 1202}, 'variants': {}}]}
How can i make Dolpin3 learn to have a personality
1
[removed]
2025-04-28T05:16:12
https://www.reddit.com/r/LocalLLaMA/comments/1k9nwb2/how_can_i_make_dolpin3_learn_to_have_a_personality/
claushill777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9nwb2
false
null
t3_1k9nwb2
/r/LocalLLaMA/comments/1k9nwb2/how_can_i_make_dolpin3_learn_to_have_a_personality/
false
false
self
1
null
3 Laptops in a Trench Coat = AI Cluster
16
2025-04-28T06:56:07
https://v.redd.it/juqbmha3xixe1
Ragecommie
v.redd.it
1970-01-01T00:00:00
0
{}
1k9pchw
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/juqbmha3xixe1/DASHPlaylist.mpd?a=1748415381%2CNzAwMWNiYTgyZDdlYTlhZmFjYjRkY2VmYWE0NzRhMmIyMjE3N2MwNzYxNWU5MTU3OGYyYWYyZDAyYzc5ZmE3Mg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/juqbmha3xixe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/juqbmha3xixe1/HLSPlaylist.m3u8?a=1748415381%2CN2IzZWFjNmI1NDVjNDY1MmU1ZjI4MzA0YWM3MGIyYmU4MDQzZWMxM2I5ZDNjYjUzMGJlYzJkMGI5YjZjZDFkZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/juqbmha3xixe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1k9pchw
/r/LocalLLaMA/comments/1k9pchw/3_laptops_in_a_trench_coat_ai_cluster/
false
false
https://external-preview…ece8f6477604aa5f
16
{'enabled': False, 'images': [{'id': 'NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=108&crop=smart&format=pjpg&auto=webp&s=0f8f6be15ac79caba42bd225569ee20d40f4ecee', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=216&crop=smart&format=pjpg&auto=webp&s=95949f6729d9f9a6c5c15350b770dd5dbe27ef88', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=320&crop=smart&format=pjpg&auto=webp&s=4ac9afb4a9e2e4200b642625b9ea60bbb56750c1', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=640&crop=smart&format=pjpg&auto=webp&s=1b9d53e156f4783145afa6f07bb0cb7b443b1bb4', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=960&crop=smart&format=pjpg&auto=webp&s=500c60c9d211ebb63b745a5f8cf5931a2530ea3b', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?width=1080&crop=smart&format=pjpg&auto=webp&s=90c832794dd9cd81717f10ab272c04b6aa0346fe', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NjVoejNpYTN4aXhlMY0mECz9YhyHRsFyrTD7qHTervrBrxWsKuEwGDWWzxub.png?format=pjpg&auto=webp&s=9d3f3509ec99e6326832e2bf5708777475fcad7d', 'width': 1080}, 'variants': {}}]}
Any news on good 48kHz vocoders (autoencoders)?
3
Hello! I’m looking for a vocoder (autoencoder) that can take my audio, convert it to tokens (0-2047 or 0-4095), and convert it back. The speed should be around 60-120 t/s. I want to use it with an LLM. I’ve read every single paper on arXiv but can’t find one. All the ones I’ve found, like Mimi, EnCodec, SNAC, and HiFi-GAN, are < 48kHz, non-fine-tunable, or too complex/old! If there is a good vocoder that you know of that can do exactly 48kHz, please let me know! Thanks!
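That spec pins down the compression budget such a codec would need. A quick calculation, using the token rates and codebook sizes from the post (assuming a single codebook per frame):

```python
import math

SAMPLE_RATE = 48_000  # Hz, the target sample rate

def codec_budget(tokens_per_sec: int, codebook_size: int):
    """Samples covered by each token, and resulting bitrate in bits/s."""
    samples_per_token = SAMPLE_RATE / tokens_per_sec
    bits_per_token = math.log2(codebook_size)
    return samples_per_token, tokens_per_sec * bits_per_token

for tps in (60, 120):
    for cb in (2048, 4096):
        spt, bps = codec_budget(tps, cb)
        print(f"{tps} t/s, {cb}-entry codebook: {spt:.0f} samples/token, {bps/1000:.2f} kbps")
```

That is under 1.5 kbps with a single codebook, which is a big part of why existing codecs stack several codebooks per frame (as Mimi and SNAC do) rather than meeting this spec directly.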
2025-04-28T07:10:30
https://www.reddit.com/r/LocalLLaMA/comments/1k9pjuu/any_news_on_good_48khz_vocoders_autoencoders/
yukiarimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9pjuu
false
null
t3_1k9pjuu
/r/LocalLLaMA/comments/1k9pjuu/any_news_on_good_48khz_vocoders_autoencoders/
false
false
self
3
null
What's an open-source tool you discovered and now can't live without?
60
Hey everyone, what’s one open-source tool you stumbled on that ended up being way more useful than you expected? Could be for coding, AI/ML, writing, research, staying organized, whatever helped you out big time but you don't hear people talk about much. Always feels like there are so many hidden gems that deserve more love. Would be awesome to hear your picks, maybe even find some new favorites myself
2025-04-28T07:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1k9pr48/whats_an_opensource_tool_you_discovered_and_now/
FitHeron1933
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9pr48
false
null
t3_1k9pr48
/r/LocalLLaMA/comments/1k9pr48/whats_an_opensource_tool_you_discovered_and_now/
false
false
self
60
null
I made Cognito, a MIT-Licensed Chrome Extension for LLM Interaction - Built on sidellama, Supports Local and Cloud Models
1
[removed]
2025-04-28T07:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1k9q3o2/i_made_cognito_a_mitlicensed_chrome_extension_for/
Asleep-Ratio7535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9q3o2
false
null
t3_1k9q3o2
/r/LocalLLaMA/comments/1k9q3o2/i_made_cognito_a_mitlicensed_chrome_extension_for/
false
false
https://b.thumbs.redditm…3gGbC3pLdqrw.jpg
1
null
Can a swarm of LLM agents be deterministic?
0
Hello, I recently saw an Instagram post where a company was building an AI agent organisation diagram, where each agent would be able to execute specific tasks, have access to specific data, and where one agent could start and orchestrate a series of tasks to achieve a goal. Now, from my limited understanding, LLM agents are non-deterministic in their output. If we scale this to tens or hundreds of agents that interact with each other, aren't we also increasing the probability of the expected output being wrong? Or is there some way this can be mitigated? Thanks
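The compounding worry can be made concrete: if each agent independently produces a correct hand-off with probability p, a pipeline of n agents succeeds end-to-end with probability p^n. The numbers below are illustrative, not measurements:

```python
def pipeline_success(p_per_agent: float, n_agents: int) -> float:
    """End-to-end success probability if each of n agents must
    independently succeed (errors compound multiplicatively)."""
    return p_per_agent ** n_agents

for n in (1, 10, 100):
    print(f"{n:>3} agents at 95% each -> {pipeline_success(0.95, n):.1%} end-to-end")
```

This is also the shape of the usual mitigation: verification steps, retries, and majority voting don't make agents deterministic, they raise the effective per-step p so the product degrades far more slowly.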
2025-04-28T08:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1k9q8e8/can_a_swarm_of_llm_agents_be_deterministic/
eztrendar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9q8e8
false
null
t3_1k9q8e8
/r/LocalLLaMA/comments/1k9q8e8/can_a_swarm_of_llm_agents_be_deterministic/
false
false
self
0
null
Cognito: MIT-Licensed Chrome Extension for LLM Interaction - Built on sidellama, Supports Local and Cloud Models
1
[removed]
2025-04-28T08:02:13
https://www.reddit.com/r/LocalLLaMA/comments/1k9q8h7/cognito_mitlicensed_chrome_extension_for_llm/
Asleep-Ratio7535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9q8h7
false
null
t3_1k9q8h7
/r/LocalLLaMA/comments/1k9q8h7/cognito_mitlicensed_chrome_extension_for_llm/
false
false
https://b.thumbs.redditm…MaEnZsZ_qE3E.jpg
1
null
Qwen3 Collection on modelscope!
96
https://preview.redd.it/…6f4f8fc89bfedc
2025-04-28T08:07:03
https://www.reddit.com/r/LocalLLaMA/comments/1k9qaso/qwen3_collection_on_modelscope/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qaso
false
null
t3_1k9qaso
/r/LocalLLaMA/comments/1k9qaso/qwen3_collection_on_modelscope/
false
false
https://b.thumbs.redditm…t7ZGYxCpTuRE.jpg
96
null
best LLM for large dirty code work ?
1
Hello everyone, I would like to ask: what's the best LLM for dirty work? By "dirty work" I mean: I will provide a huge list of data and database tables, and I need it to write queries for me. I tried Qwen 2.5 7B; it just refuses to do it for some reason and writes two queries at most. My specs for my "PC": 4080 Super, 7800X3D, 32GB RAM at 6000MHz CL30
2025-04-28T08:15:06
https://www.reddit.com/r/LocalLLaMA/comments/1k9qelt/best_llm_for_large_dirty_code_work/
Guilty-Dragonfly3934
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qelt
false
null
t3_1k9qelt
/r/LocalLLaMA/comments/1k9qelt/best_llm_for_large_dirty_code_work/
false
false
self
1
null
Agentic AI Explained Simply: How AI is Learning to Think and Act
1
[removed]
2025-04-28T08:17:19
https://medium.com/@brainglitch/agentic-ai-explained-simply-how-ai-is-learning-to-think-and-act-8387d581ca46
Dull_Fox117
medium.com
1970-01-01T00:00:00
0
{}
1k9qfo8
false
null
t3_1k9qfo8
/r/LocalLLaMA/comments/1k9qfo8/agentic_ai_explained_simply_how_ai_is_learning_to/
false
false
https://b.thumbs.redditm…pWxba-HzOf_Q.jpg
1
{'enabled': False, 'images': [{'id': 'QLjBcn9uz1g1W7Vy9mnrQsROyDmY8AqVfvttUsUYUco', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=108&crop=smart&auto=webp&s=18f5f5e12c9391399118cb2f84f6d60ad0985d36', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=216&crop=smart&auto=webp&s=4d6885b10709d54fccb17557e5d46be7736fe34e', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=320&crop=smart&auto=webp&s=f4e3bed7f9dd01de21006d7f6b38a01b4fbe815f', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=640&crop=smart&auto=webp&s=12e60bca999fc0fd99022f0b7179fe99350035c2', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=960&crop=smart&auto=webp&s=a1aaf1fde6f768d4828e3ccf557856d2f50375e7', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?width=1080&crop=smart&auto=webp&s=6a68cce369e2de8fdbe81a38b0bc57fd1929d412', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/wIpo1mCRS-tMPnkmoZaGcdsf0MoHonYT3dnuzR37WK8.jpg?auto=webp&s=4ca1c22af2a9e462f41b5ad34d713166d47b61ae', 'width': 1200}, 'variants': {}}]}
Recent studies show that SOTA LLMs still rely on complex pattern memorisation rather than genuine reasoning
89
Several new studies demonstrate that even top-performing LLMs like Gemini 2.5 Pro, o1, DeepSeek R1, and QwQ, often bypass genuine reasoning. Ma et al. show that the “thinking” phase can be bypassed without hurting accuracy, and sometimes even improves it: [https://arxiv.org/abs/2504.09858](https://arxiv.org/abs/2504.09858) Petrov et al. and Mahdavi et al. find that models fail at producing rigorous mathematical proofs: [https://arxiv.org/abs/2503.21934](https://arxiv.org/abs/2503.21934), [https://arxiv.org/abs/2504.01995](https://arxiv.org/abs/2504.01995) This adds to earlier work from Mirzadeh et al. showing that minor label changes (e.g., swapping variable names) can easily confuse LLMs, thus highlighting their reliance on memorised patterns rather than true understanding: [https://arxiv.org/abs/2410.05229](https://arxiv.org/abs/2410.05229) TLDR: however useful LLMs can be, and no matter how impressive they seem, we should always approach their outputs with critical thinking.
2025-04-28T08:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1k9qg8o/recent_studies_show_that_sota_llms_still_rely_on/
benja0x40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qg8o
false
null
t3_1k9qg8o
/r/LocalLLaMA/comments/1k9qg8o/recent_studies_show_that_sota_llms_still_rely_on/
false
false
self
89
null
Is it possible for LLM not to know it’s an LLM?
1
[removed]
2025-04-28T08:20:31
https://www.reddit.com/r/LocalLLaMA/comments/1k9qh78/is_it_possible_for_llm_not_to_know_its_an_llm/
GrungeWerX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qh78
false
null
t3_1k9qh78
/r/LocalLLaMA/comments/1k9qh78/is_it_possible_for_llm_not_to_know_its_an_llm/
false
false
self
1
null
3090 Ti + 1080 Ti --- is 1080 Ti still usable or too slow?
1
Hello guys, I'm getting a 3090 Ti this week, so I'm wondering: should I keep my 1080 Ti for the extra VRAM (in theory, I could then run Gemma 3 27B plus a solid context size), or is the 1080 Ti too slow at this point, so that it would just drag overall AI performance down too much?
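Whether the 1080 Ti earns its keep is mostly a capacity-vs-speed question; a rough check, where the ~4.8 bits/weight for a Q4-style quant and the KV-cache/overhead allowances are ballpark assumptions:

```python
def vram_needed(params_b: float, bits_per_weight: float,
                kv_cache_gb: float = 2.0, overhead_gb: float = 1.0):
    """Ballpark VRAM demand: quantized weights + KV cache + runtime overhead."""
    weights_gb = params_b * bits_per_weight / 8
    return weights_gb + kv_cache_gb + overhead_gb

for vram, label in ((24, "3090 Ti alone"), (24 + 11, "3090 Ti + 1080 Ti")):
    need = vram_needed(27, 4.8)
    print(f"{label} ({vram} GB): need ~{need:.1f} GB -> {'fits' if need <= vram else 'spills'}")
```

The catch the estimate doesn't show: any layers placed on the 1080 Ti run at its (much lower) speed, so the extra 11GB mainly buys a bigger context or quant at the cost of tokens/s.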
2025-04-28T08:22:35
https://www.reddit.com/r/LocalLLaMA/comments/1k9qi7b/3090_ti_1080_ti_is_1080_ti_still_usable_or_too/
Ordinary-Lab7431
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qi7b
false
null
t3_1k9qi7b
/r/LocalLLaMA/comments/1k9qi7b/3090_ti_1080_ti_is_1080_ti_still_usable_or_too/
false
false
self
1
null
HumvaAI’s Video Avatars, What’s Powering This Thing?
1
[removed]
2025-04-28T08:24:46
https://www.reddit.com/r/LocalLLaMA/comments/1k9qj76/humvaais_video_avatars_whats_powering_this_thing/
Sand4Sale14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k9qj76
false
null
t3_1k9qj76
/r/LocalLLaMA/comments/1k9qj76/humvaais_video_avatars_whats_powering_this_thing/
false
false
self
1
null