title stringlengths 1–300 | score int64 0–8.54k | selftext stringlengths 0–40k | created timestamp[ns] 2023-04-01 04:30:41 – 2025-06-30 03:16:29 ⌀ | url stringlengths 0–878 | author stringlengths 3–20 | domain stringlengths 0–82 | edited timestamp[ns] 1970-01-01 00:00:00 – 2025-06-26 17:30:18 | gilded int64 0–2 | gildings stringclasses 7 values | id stringlengths 7–7 | locked bool 2 classes | media stringlengths 646–1.8k ⌀ | name stringlengths 10–10 | permalink stringlengths 33–82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4–213 | ups int64 0–8.54k | preview stringlengths 301–5.01k ⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Deepseek shows insane amounts of loyalty to CCP
| 1 | 2025-04-14T18:53:29 |
DoodieDooper
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz6r70
| false | null |
t3_1jz6r70
|
/r/LocalLLaMA/comments/1jz6r70/deepseek_shows_insane_amounts_of_loyalty_to_ccp/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'W1uhxkpo-3kqoqkskFMNqlLUP-aXAbeehvNQuS68Bqo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ilpj8cu7kuue1.png?width=108&crop=smart&auto=webp&s=f65656c165732ea7df4befdb9ad41673596b5de3', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ilpj8cu7kuue1.png?width=216&crop=smart&auto=webp&s=ae1ac760ed4b6f41e45513f304db106b89bf17a6', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ilpj8cu7kuue1.png?width=320&crop=smart&auto=webp&s=eb72e96efd794c04b926882a3bbbd5c5d97ceb6d', 'width': 320}], 'source': {'height': 959, 'url': 'https://preview.redd.it/ilpj8cu7kuue1.png?auto=webp&s=2977a208a5c2ebe4db9a00f2f065ba3827bcce39', 'width': 479}, 'variants': {}}]}
|
|||
OpenAI released a new Prompting Cookbook with GPT 4.1
| 286 | 2025-04-14T18:54:15 |
https://cookbook.openai.com/examples/gpt4-1_prompting_guide
|
Recoil42
|
cookbook.openai.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz6rwj
| false | null |
t3_1jz6rwj
|
/r/LocalLLaMA/comments/1jz6rwj/openai_released_a_new_prompting_cookbook_with_gpt/
| false | false | 286 |
{'enabled': False, 'images': [{'id': '1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=108&crop=smart&auto=webp&s=c0eb0bc1ecde6d397395231021e239a581fb8a86', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=216&crop=smart&auto=webp&s=54a5a235cdd54fcc021bb2f33d94ed65dbd93b7e', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=320&crop=smart&auto=webp&s=2522b9502417696305bdf30a64543e8558771200', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=640&crop=smart&auto=webp&s=a79822bf5ec27d84d21f68af9b0b6792aee1dada', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=960&crop=smart&auto=webp&s=73af1e8820b79f21b91f0dad47623779a6a727af', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?width=1080&crop=smart&auto=webp&s=7f38912b825372bb3ac711736de0c2cc7f9b1a32', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/khZZDaErUPszuYMOAnI0g6ZacxmX2AdST6xS5QZoW9g.jpg?auto=webp&s=69f2b0ac1543eef2739fe63f8603ebbc4cbe0631', 'width': 1200}, 'variants': {}}]}
|
||
New OpenAI models, cool. What about Quasar and Optimus?
| 0 |
If these were the OpenAI models, which was which?
| 2025-04-14T19:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz7296/new_openai_models_cool_what_about_quasar_and/
|
Echo9Zulu-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz7296
| false | null |
t3_1jz7296
|
/r/LocalLLaMA/comments/1jz7296/new_openai_models_cool_what_about_quasar_and/
| false | false |
self
| 0 | null |
Gemma Tool calling or separate small decision model
| 2 |
I'm retrieving context from several sources based on the user query. Gemma 3 doesn't support tool calling natively with ollama, so I'm using Gemma's 1B model to decide which context sources to feed to the larger model. So far I've gotten pretty good results, but it's still slower and less accurate than I would like it to be.
If I were to find a way to add tool calling to the 12b model I'm using, how would speed and accuracy compare to using a separate decision model?
Appreciate the help!
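A minimal sketch of the routing pattern described above, using the `ollama` Python client (the model name, source labels, and prompt format are illustrative assumptions, not the poster's actual setup):

    import ollama  # pip install ollama

    SOURCES = ["docs", "tickets", "web"]  # hypothetical context sources

    def pick_sources(query: str) -> list[str]:
        # Ask the small model to route; constrain the answer so parsing stays trivial.
        prompt = (
            f"Question: {query}\n"
            f"Which of these context sources are needed: {', '.join(SOURCES)}?\n"
            "Answer with a comma-separated list only."
        )
        reply = ollama.chat(model="gemma3:1b",
                            messages=[{"role": "user", "content": prompt}])
        answer = reply["message"]["content"].lower()
        # Fall back to all sources if the small model's answer doesn't parse.
        return [s for s in SOURCES if s in answer] or SOURCES

The chosen sources then get stuffed into the larger model's prompt; native tool calling in the 12B model would save this extra round trip, but adds its own latency per call.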
| 2025-04-14T19:10:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz771y/gemma_tool_calling_or_separate_small_decision/
|
MiyamotoMusashi7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz771y
| false | null |
t3_1jz771y
|
/r/LocalLLaMA/comments/1jz771y/gemma_tool_calling_or_separate_small_decision/
| false | false |
self
| 2 | null |
Music Cover Voice Cloning: what’s the Current State?
| 7 |
Hey guys! Just writing here to see if anyone has some info about voice cloning for cover music. Last time I checked I was still using RVC v2, and I remember it needed at least 10 to 30-40 minutes of dataset audio, and then training, before it was ready to use.
I was wondering if there have been any updates since then, maybe new models that sound more natural, are easier to train, or just better overall? I’ve been out for a while and would love to catch up if anyone’s got news. Thanks a lot!
| 2025-04-14T19:23:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz7in7/music_cover_voice_cloning_whats_the_current_state/
|
Eydahn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz7in7
| false | null |
t3_1jz7in7
|
/r/LocalLLaMA/comments/1jz7in7/music_cover_voice_cloning_whats_the_current_state/
| false | false |
self
| 7 | null |
Experimenting with A2A by porting an existing agent to use it
| 7 |
Looking at the official A2A [OSS repo](https://github.com/google/A2A) provided by Google, and trying to make sense of it.
So far I think the design makes sense. Definitely helpful to see the existing samples in the repo.
In case someone is interested, I have provided a summary of my experience from porting over one of my own sample agents [here](https://www.teachmecoolstuff.com/viewarticle/experimenting-with-a2a).
| 2025-04-14T19:26:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz7kpo/experimenting_with_a2a_by_porting_an_existing/
|
funJS
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz7kpo
| false | null |
t3_1jz7kpo
|
/r/LocalLLaMA/comments/1jz7kpo/experimenting_with_a2a_by_porting_an_existing/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'dx7XYxAab8pAM5iQlO2WL0Gw9UtbIqMs5VD7yZwkFP8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=108&crop=smart&auto=webp&s=ce010277f81dfdae6d6ba0de975388c594ed4704', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=216&crop=smart&auto=webp&s=76dc379ee38d30aa30e37b7a6e860a87ab94d517', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=320&crop=smart&auto=webp&s=3c99b65a75c00320736cee3238988ea947812890', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=640&crop=smart&auto=webp&s=c998f9a442c0a109ab51bc184098b755d516c160', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=960&crop=smart&auto=webp&s=da985df437540ccf98537afe6114d4f9311dbd90', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?width=1080&crop=smart&auto=webp&s=7653d0cd736e77a41d1fe89dd9a3be2a7b81984f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Lq4RsBfoQU8f8kV6kdBaVdwUvuXqpSMNODlElLRep0Q.jpg?auto=webp&s=91e364ba4bd828658cb723219dbf9ed85fe73ed9', 'width': 1200}, 'variants': {}}]}
|
Should assistants use git flow?
| 2 |
I'm currently using Claude Code, but also used cursor/windsurf.
Most of the time, using these assistants feels like working with a junior dev you are mentoring: you iterate by reviewing their work.
I very often end up undoing some of the assistant's code, or refactoring it to merge another feature I'm implementing at the same time.
If we think of an assistant as a coworker, then we should work in different branches and use whatever git flow you prefer to deal with the changes. Ideally the assistant creates PRs instead of changing your files directly.
Is anyone using assistants this way? Is there a wrapper over the current assistants to make them git aware?
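Not aware of an existing wrapper, but the mechanics are small enough to sketch; a minimal version in Python (the branch naming, and using the GitHub CLI `gh` for the PR, are my assumptions):

    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    def assistant_session(task: str) -> None:
        branch = f"assistant/{task}"
        run("git", "switch", "-c", branch)   # isolate the assistant's work
        # ... let the assistant edit files here ...
        run("git", "add", "-A")
        run("git", "commit", "-m", f"assistant: {task}")
        run("git", "push", "-u", "origin", branch)
        run("gh", "pr", "create", "--fill")  # review the work as a normal PR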
| 2025-04-14T19:26:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz7kqj/should_assistants_use_git_flow/
|
itzco1993
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz7kqj
| false | null |
t3_1jz7kqj
|
/r/LocalLLaMA/comments/1jz7kqj/should_assistants_use_git_flow/
| false | false |
self
| 2 | null |
Llama 4 underperforms on coding benchmark
| 1 |
[removed]
| 2025-04-14T19:31:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz7pq9/llama_4_underperform_on_coding_benchmark/
|
StableStack
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz7pq9
| false | null |
t3_1jz7pq9
|
/r/LocalLLaMA/comments/1jz7pq9/llama_4_underperform_on_coding_benchmark/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
I benchmarked 7 OCR solutions on a complex academic document (with images, tables, footnotes...)
| 170 |
I ran a **comparison of 7 different OCR solutions** using the [Mistral 7B paper](https://arxiv.org/pdf/2310.06825) as a reference document (pdf), which I found complex enough to properly stress-test these tools. It's the same paper used in the team's Jupyter notebook, but whatever. **The document includes footnotes, tables, figures, math, page numbers**,... making it a solid candidate to test how well these tools handle real-world complexity.
**Results (Ranked):**
1. **MistralAPI [cloud]** → **BEST**
2. **Marker + Gemini** (--use_llm flag) **[cloud]** → **VERY GOOD**
3. **Marker / Docling [local]** → **GOOD**
4. **PyMuPDF4LLM [local]** → **OKAY**
5. **Gemini 2.5 Pro [cloud]** → **BEST\* (...but doesn't extract images)**
6. **Markitdown (without AzureAI) [local]** → **POOR\* (doesn't extract images)**
**OCR images to compare:**
[OCR comparison for: Mistral, Marker+Gemini, Marker, Docling, PyMuPDF4LLM, Gemini 2.5 Pro, and Markitdown](https://preview.redd.it/g0ihgjgpruue1.png?width=5738&format=png&auto=webp&s=94537b4d1073286c7570d8739c512bb43f4fd8aa)
**Links to tools:**
* [MistralOCR](https://mistral.ai/news/mistral-ocr)
* [Marker](https://github.com/VikParuchuri/marker)
* [Gemini 2.5 Pro](https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#gemini-2-5-pro)
* [Docling](https://github.com/docling-project/docling)
* [Markitdown](https://github.com/microsoft/markitdown)
* [PyMuPDF4LLM](https://pymupdf.readthedocs.io/en/latest/pymupdf4llm/)
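For anyone who wants to reproduce a single data point, the local tools are mostly one-liners; e.g. a minimal PyMuPDF4LLM conversion (the filename is a placeholder):

    import pymupdf4llm  # pip install pymupdf4llm

    # Convert the PDF to Markdown; this is the output the comparison grades.
    md_text = pymupdf4llm.to_markdown("mistral-7b-paper.pdf")
    with open("mistral-7b-paper.md", "w", encoding="utf-8") as f:
        f.write(md_text)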
| 2025-04-14T19:43:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz80f1/i_benchmarked_7_ocr_solutions_on_a_complex/
|
coconautico
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz80f1
| false | null |
t3_1jz80f1
|
/r/LocalLLaMA/comments/1jz80f1/i_benchmarked_7_ocr_solutions_on_a_complex/
| false | false | 170 | null |
|
Which video card for continued training (fine-tuning) and/or in-context learning (ICL)?
| 1 |
[removed]
| 2025-04-14T19:50:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz8650/which_video_card_for_continued_training/
|
Enfoldment
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz8650
| false | null |
t3_1jz8650
|
/r/LocalLLaMA/comments/1jz8650/which_video_card_for_continued_training/
| false | false |
self
| 1 | null |
Agentic QwQ-32B perfect bouncing balls
| 25 |
QwQ still full of surprises...
[https://github.com/ssakar/examples/tree/main/QwQ-32B](https://github.com/ssakar/examples/tree/main/QwQ-32B)
| 2025-04-14T20:07:57 |
https://youtube.com/watch?v=eBvKa4zaaCc&si=hEM-LF_p557bhgHz
|
Specific-Rub-7250
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz8m81
| false |
{'oembed': {'author_name': 'Serkan Sakar', 'author_url': 'https://www.youtube.com/@serkansakar94', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/eBvKa4zaaCc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Bouncing Balls with Agents QwQ 32B"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eBvKa4zaaCc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Bouncing Balls with Agents QwQ 32B', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
|
t3_1jz8m81
|
/r/LocalLLaMA/comments/1jz8m81/agentic_qwq32b_perfect_bouncing_balls/
| false | false | 25 |
{'enabled': False, 'images': [{'id': 'm30fPgPEO62QQJz0AXNLhvnSgOn2khxbv0TIgVTS-Dk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Py9phHWcxHcbI8kJ7_sE7LFb3Dt1cHJ_22Az93dDwZI.jpg?width=108&crop=smart&auto=webp&s=46540805b0efcbca5e80f46e97c55bc3eb496dfc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Py9phHWcxHcbI8kJ7_sE7LFb3Dt1cHJ_22Az93dDwZI.jpg?width=216&crop=smart&auto=webp&s=a5646e43f7f6fa63ebe2424c4778df1d5147a317', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Py9phHWcxHcbI8kJ7_sE7LFb3Dt1cHJ_22Az93dDwZI.jpg?width=320&crop=smart&auto=webp&s=4f273eef5ca122fd23b45012f802218030085b0b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Py9phHWcxHcbI8kJ7_sE7LFb3Dt1cHJ_22Az93dDwZI.jpg?auto=webp&s=12f6ba1c4e5f722151f0ef47b1681a1c981f3f54', 'width': 480}, 'variants': {}}]}
|
|
meshgen: AI Agents directly in Blender
| 12 |
This addon is intended to be kind of like a Blender copilot. Some more info:
* Uses [smolagents](https://github.com/huggingface/smolagents) with local models (`llama_cpp_python`, `ollama`) or remote APIs (`Hugging Face`, `Anthropic`, `OpenAI`)
* Supports a variety of tools similar to [blender-mcp](https://github.com/ahujasid/blender-mcp)
* Open source and running entirely within Blender
Right now, it works best when using a big model like Claude 3.7, and blocking out basic scenes using primitives.
There is an optional [LLaMA-Mesh](https://github.com/nv-tlabs/LLaMA-Mesh) integration for local mesh generation and understanding. The quality isn't great right now, but I think this more collaborative/iterative approach is really exciting, kind of like the Cursor treatment for Blender (as things improve in 3D)!
| 2025-04-14T20:11:51 |
https://github.com/huggingface/meshgen
|
individual_kex
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz8pte
| false | null |
t3_1jz8pte
|
/r/LocalLLaMA/comments/1jz8pte/meshgen_ai_agents_directly_in_blender/
| false | false |
default
| 12 | null |
OpenAI - Wen open source tho?
| 30 |
What do you think, will an open OpenAI model really see the light of day any time soon? Do we have any info on when that could be?
| 2025-04-14T20:12:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz8q7a/openai_wen_open_source_tho/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz8q7a
| false | null |
t3_1jz8q7a
|
/r/LocalLLaMA/comments/1jz8q7a/openai_wen_open_source_tho/
| false | false |
self
| 30 | null |
Optimus is gpt-4.1, but quasar is *not* gpt-4.1-mini or nano. So, where & what is quasar?
| 4 |
See pics for the evidence collected thus far. The hierarchical tree is generated from the model's slop profile (tendency to over-represent particular words/phrases). It isn't foolproof, but I think it's at least indicative that quasar-alpha and gpt-4.1-mini may be of a slightly different lineage or architecture.
The performance on benchmarks suggests gpt-4.1-mini is a smaller model.
Benchmarks: [https://eqbench.com/creative_writing.html](https://eqbench.com/creative_writing.html)
Sample writing:
[https://eqbench.com/results/creative-writing-v3/gpt-4.1-mini.html](https://eqbench.com/results/creative-writing-v3/gpt-4.1-mini.html)
[https://eqbench.com/results/creative-writing-v3/quasar-alpha.html](https://eqbench.com/results/creative-writing-v3/quasar-alpha.html)
What's your speculation?
| 2025-04-14T20:14:13 |
https://www.reddit.com/gallery/1jz8rwr
|
_sqrkl
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz8rwr
| false | null |
t3_1jz8rwr
|
/r/LocalLLaMA/comments/1jz8rwr/optimus_is_gpt41_but_quasar_is_not_gpt41mini_or/
| false | false | 4 | null |
|
Coding-Centric LLM Benchmark: Llama 4 Underwhelms
| 62 |
We wanted to see for ourselves what Llama 4's coding performance was like, and we were not impressed. Here is the benchmark methodology:
* We sourced 100 issues labeled "bug" from the Mastodon GitHub repository.
* For each issue, we collected the description and the associated pull request (PR) that solved it.
* For benchmarking, we fed models each bug description and 4 PRs to choose from as the answer, with one of them being the PR that solved the issue—no codebase context was included.
**Findings**:
First, we wanted to test against leading multimodal models and replicate Meta's findings. Meta found in its benchmark that Llama 4 was beating GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving comparable results to the new DeepSeek v3 on reasoning and coding.
We could not reproduce Meta’s findings on Llama outperforming GPT-4o, Gemini 2.0 Flash, and DeepSeek v3.1. On our benchmark, it came last in accuracy (69.5%), 6% less than the next best performing model (DeepSeek v3.1) and 18% behind the overall top-performing model (GPT-4o).
Second, we wanted to test against models designed for coding tasks: Alibaba Qwen2.5-Coder, OpenAI o3-mini, and Claude 3.5 Sonnet. Unsurprisingly, Llama 4 Maverick achieved only a 70% accuracy score. Alibaba’s Qwen2.5-Coder-32B topped our rankings, closely followed by OpenAI's o3-mini, both of which achieved around 90% accuracy.
Llama 3.3 70B Versatile even outperformed the latest Llama 4 models by a small yet noticeable margin (72% accuracy).
Are those findings surprising to you? Any benchmark methodology details that may be disadvantageous to Llama models?
We shared the full findings here [https://rootly.com/blog/llama-4-underperforms-a-benchmark-against-coding-centric-models](https://rootly.com/blog/llama-4-underperforms-a-benchmark-against-coding-centric-models)
And the dataset we used for the benchmark if you want to replicate or look closer at the dataset [https://github.com/Rootly-AI-Labs/GMCQ-benchmark](https://github.com/Rootly-AI-Labs/GMCQ-benchmark)
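For reference, the scoring described above reduces to a plain multiple-choice accuracy loop; a sketch under assumed field names (the model-call stub is not the actual Rootly harness):

    # Each item: a bug description, four candidate PRs, and the index of the real fix.
    def evaluate(items, ask_model) -> float:
        correct = 0
        for item in items:
            # ask_model returns the index (0-3) of the PR the model picks
            choice = ask_model(item["bug_description"], item["candidate_prs"])
            correct += int(choice == item["answer_index"])
        return correct / len(items)

    # Random guessing gives ~25%, so the 69.5%-90% spread above is well clear of chance.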
| 2025-04-14T20:29:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz95oz/codingcentric_llm_benchmark_llama_4_underwhelms/
|
jj_at_rootly
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz95oz
| false | null |
t3_1jz95oz
|
/r/LocalLLaMA/comments/1jz95oz/codingcentric_llm_benchmark_llama_4_underwhelms/
| false | false |
self
| 62 | null |
I'm on the waitlist for @perplexity_ai's new agentic browser, Comet:
| 0 | 2025-04-14T20:39:40 |
https://www.perplexity.ai/comet
|
I_aint_a_wallflower
|
perplexity.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz9eqq
| false | null |
t3_1jz9eqq
|
/r/LocalLLaMA/comments/1jz9eqq/im_on_the_waitlist_for_perplexity_ais_new_agentic/
| false | false |
default
| 0 | null |
|
Can I use RTX 3060 + RTX 3080 together?
| 0 |
Hello,
I have an RTX 3080 (10GB) now and would like to add a cheap RTX 3060 12GB for 22GB of VRAM in total. Is that possible?
| 2025-04-14T20:49:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz9n5l/can_i_use_rtx_3060_rtx_3080_together/
|
Adam1394
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz9n5l
| false | null |
t3_1jz9n5l
|
/r/LocalLLaMA/comments/1jz9n5l/can_i_use_rtx_3060_rtx_3080_together/
| false | false |
self
| 0 | null |
IBM Power8 CPU?
| 1 |
Howdy! I know someone selling some old servers from a local DC, and one is a dual-socket IBM Power8 with 4x P100s. My mouth was watering at 32 memory channels per CPU, but I'm not sure whether anything supports the POWER-series CPU architecture.
Has anyone gotten a Power-series CPU running effectively?
Note: I'm a Windows native and developer, but I love to tinker if that means I can get this beast running.
| 2025-04-14T21:18:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzabwi/ibm_power8_cpu/
|
An_Original_ID
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzabwi
| false | null |
t3_1jzabwi
|
/r/LocalLLaMA/comments/1jzabwi/ibm_power8_cpu/
| false | false |
self
| 1 | null |
Sesame csm-1b
| 0 |
Hey guys, I have been playing a little with this model, but generation takes some time for me on an RTX 3090: about 20 s of audio takes around 40-60 s.
I wanted to know if you guys have tried this model and managed to get a better result? I'm trying to get as close to realtime gen as possible.
| 2025-04-14T21:25:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzahgk/sesame_csm1b/
|
brocolongo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzahgk
| false | null |
t3_1jzahgk
|
/r/LocalLLaMA/comments/1jzahgk/sesame_csm1b/
| false | false |
self
| 0 | null |
gpt4.1 still (a little bit) behind gpt4o on Brazilian Legal Benchmark
| 1 |
[removed]
| 2025-04-14T21:25:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzahy7/gpt41_still_a_little_bit_behind_gpt4o_on/
|
celsowm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzahy7
| false | null |
t3_1jzahy7
|
/r/LocalLLaMA/comments/1jzahy7/gpt41_still_a_little_bit_behind_gpt4o_on/
| false | false | 1 | null |
|
Three reasoning workflows - Tri, Grug, Polyglot
| 29 |
Here's a small demo of the workflows in action:
[https://youtu.be/PZDU9MpVYP8](https://youtu.be/PZDU9MpVYP8)
(Very sorry for a YouTube link, there was no way to add a native Reddit video to an image post)
In general, all three are directed at enclosing or redirecting the activation space during inference so that it differs from the most typical examples seen during pre-training.
Code:
* [Tri](https://github.com/av/harbor/blob/main/boost/src/custom_modules/tri.py)
* [Grug](https://github.com/av/harbor/blob/main/boost/src/custom_modules/grug.py)
* [Polyglot](https://github.com/av/harbor/blob/main/boost/src/custom_modules/polyglot.py)
| 2025-04-14T21:57:43 |
https://www.reddit.com/gallery/1jzb7u7
|
Everlier
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzb7u7
| false | null |
t3_1jzb7u7
|
/r/LocalLLaMA/comments/1jzb7u7/three_reasoning_workflows_tri_grug_polyglot/
| false | false | 29 | null |
|
Finetune text to image models
| 1 |
[removed]
| 2025-04-14T22:03:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzbcgb/finetune_text_to_image_models/
|
Legal_Dragonfruit_84
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzbcgb
| false | null |
t3_1jzbcgb
|
/r/LocalLLaMA/comments/1jzbcgb/finetune_text_to_image_models/
| false | false |
self
| 1 | null |
What can be built on a $30k budget?
| 1 |
Hi all,
In doing some comparisons (and reading comments here) I'm kinda convinced that for homelab/hobby use it's actually more cost-effective to purchase hardware than to go with cloud GPUs. What I've been struggling with is which road to go down: cpu/ram or gpu/vram.
It seems that in order to do something like the full DeepSeek R1 at fp8 I'd basically have to go the cpu/ram route, since building something capable of fully loading the model into vram is *still* out of budget... Right now I avg. about 35 tok/s on inference and something like 9 tok/s on parsing (just 1x4090) with deepseek r1 32b 4bit.
I guess what I'm trying to figure out is: given the inference perf. I'm after, coupled with being able to load and run "large" models (maybe I actually don't need to run the 671b model and something in the 70b range is completely sufficient for good results?), and "good enough" parse tok/s (ideally faster than a maxed-out Mac Studio), what would the ideal hardware setup look like with a $30k budget?
Main use cases are really just inference/asking random things related to coding for the most part, but I also want to be able to swap models out as the need arises...
| 2025-04-14T22:12:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzbk53/what_can_be_built_on_a_30k_budget/
|
Andrew_sc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzbk53
| false | null |
t3_1jzbk53
|
/r/LocalLLaMA/comments/1jzbk53/what_can_be_built_on_a_30k_budget/
| false | false |
self
| 1 | null |
Finally finished my "budget" build
| 268 |
# Hardware
* 4x EVGA RTX 3090 FTW3 Ultra (24G-P5-3987-KR)
* AMD EPYC 7302P
* 16 Cores 32 Threads
* 3.0GHz Base 3.3GHz Boost
* AMD Socket SP3
* Asrock Rack ROMED6U-2L2T
* 2TB Samsung 980 Pro
* Memory: 6x 16gb DDR4 2933 MHz
* MLACOM Quad Station PRO LITE v.3 ([link](https://www.mlacom.si/komponente/ohisja/i_2849344_mlacom-quad-station-pro-lite-black-edition-v-3-ml-qspl-b-v3))
* GPU Risers cables
* 1x LINKUP - AVA5 PCIE 5.0 Riser Cable - Straight (v2) - 25cm ([link](https://www.amazon.com/dp/B0D5F8KBQR))
* 1/2x Okinos - PCI-E 4.0 Riser Cable - 200mm - Black ([link](https://www.amazon.com/dp/B0C22WJTMB))
* One of these actually died and was replaced by the above LINKUP cable. 200mm was a little short for the far GPU so if you decide to go with the Okinos risers make sure you swap one for a 300mm
* 2x Okinos - PCI-E 4.0 Riser Cable - 150mm - Black ([link](https://www.amazon.com/dp/B0CNNJHK93))
* They sent the white version instead.
* 2x Corsair RM1200x Shift Fully Modular ATX Power Supply (Renewed) ([link](https://www.amazon.com/dp/B0DD5TWT1L))
* 1x Dual PSU ATX Power Supply Motherboard Adapter Cable ([link](https://www.amazon.com/dp/B07543LNRH))
# Cost
* GPUs - $600/ea x 4 - $2400
* Motherboard + CPU + Memory (came with 64gb) + SSD from a used Ebay listing (plus some extra parts that I plan on selling off) - $950
* Case - $285
* Risers - LINKUP $85 + Okinos $144 - Total $229
* Power Supplies - $300
* Dual Power Supply Adapter Cable - $10
* Additional Memory (32gb) - $30
* Total - $4204
| 2025-04-14T22:53:27 |
C_Coffie
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzcgy1
| false | null |
t3_1jzcgy1
|
/r/LocalLLaMA/comments/1jzcgy1/finally_finished_my_budget_build/
| false | false | 268 |
{'enabled': True, 'images': [{'id': '1ACm1J0Z0eFLMgR6kwntNF9fsTWm8gLukqFxljJlSXw', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=108&crop=smart&auto=webp&s=be4a9e6e306490fedab60e6db6035c4411ddf804', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=216&crop=smart&auto=webp&s=c021724c0542a6c5f3a8a21f55f280d2848b70f2', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=320&crop=smart&auto=webp&s=64fef77760646f41920312b26c1996373d0da63a', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=640&crop=smart&auto=webp&s=c609ca13e495685e844a75eac7fefbca17e49819', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=960&crop=smart&auto=webp&s=efa06641c22f9a60bd7798a666c58444caca7216', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?width=1080&crop=smart&auto=webp&s=1abdf578b4293a333218b0c3324afc7e7923bf94', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/yes9qjnnqvue1.jpeg?auto=webp&s=6cc88d18630609b7a0e31d54cdd2105e5bb87c03', 'width': 4096}, 'variants': {}}]}
|
||
Hugging Face Optimum now supports ExecuTorch
| 7 |
You can now easily transform a Hugging Face model to [PyTorch/ExecuTorch](https://github.com/pytorch/executorch/) for running LLMs on mobile/embedded devices
[Optimum ExecuTorch](https://huggingface.co/docs/optimum-executorch/index) enables efficient deployment of transformer models using PyTorch’s ExecuTorch framework. It provides:
* 🔄 Easy conversion of Hugging Face models to ExecuTorch format
* ⚡ Optimized inference with hardware-specific optimizations
* 🤝 Seamless integration with Hugging Face Transformers
* Efficient deployment on various devices
# Install

    git clone https://github.com/huggingface/optimum-executorch.git
    cd optimum-executorch
    pip install .

# Exporting a Hugging Face model for ExecuTorch

    optimum-cli export executorch --model meta-llama/Llama-3.2-1B --recipe xnnpack --output_dir meta_llama3_2_1b_executorch

# Running the Model

    from optimum.executorch import ExecuTorchModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "meta-llama/Llama-3.2-1B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = ExecuTorchModelForCausalLM.from_pretrained(model_id)

[Optimum Code](https://github.com/huggingface/optimum-executorch)
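The snippet above stops at loading the model; for completeness, a hedged sketch of the generation step (method name and arguments as shown in the optimum-executorch README at the time of writing; treat the exact signature as an assumption):

    # Assumption: text_generation() per the optimum-executorch README.
    generated = model.text_generation(
        tokenizer=tokenizer,
        prompt="Simply put, the theory of relativity states that",
        max_seq_len=128,
    )
    print(generated)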
| 2025-04-14T22:54:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzchof/hugging_face_optimum_now_supports_executorch/
|
Vegetable_Sun_9225
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzchof
| false | null |
t3_1jzchof
|
/r/LocalLLaMA/comments/1jzchof/hugging_face_optimum_now_supports_executorch/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'GPN5rXvo2vDVcpMTXFs-Ppq_QWNXUWJo9Z8YKeAk7sM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=108&crop=smart&auto=webp&s=574ce12410fb6adb15ce5f813d4556f143292789', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=216&crop=smart&auto=webp&s=94638869c2135ad2279364b694585f19f1e82811', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=320&crop=smart&auto=webp&s=baafff77a55290697954b0540c0601db776f39db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=640&crop=smart&auto=webp&s=628f8c65eb29ab6f860de7875a6cc782294c7fd2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=960&crop=smart&auto=webp&s=07b0d93ea721a5d8cc4570db2598df1d42cef42c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?width=1080&crop=smart&auto=webp&s=2a83ff3c904c400e22c9e9fa83b4f6ea8d730016', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vkg7_yoEh_Hngu70DOJro54EsTAnDsGvFR84gYZOBT4.jpg?auto=webp&s=c6989db4312f79efb2df4eb98db4079e86e722e8', 'width': 1200}, 'variants': {}}]}
|
If I use Llama for my company internal chat am I cooked?
| 0 |
I noticed the Llama license is very confusing. They do not explicitly prohibit commercial use, but they drop hints here and there, like someone saying "maybe you could use my product, maybe you don't, who knows, watch out bro *wink*".
This results in claims that any commercial or non-open-source use = sued by Meta.
Others claim there is no issue whatsoever unless you're a Big Corp™ that poses direct threat to Meta.
Do you guys know who's right and if I'm cooked if I use it in my company (which certainly ain't at Big Corp™ level)?
| 2025-04-14T22:55:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzci5s/if_i_use_llama_for_my_company_internal_chat_am_i/
|
calashi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzci5s
| false | null |
t3_1jzci5s
|
/r/LocalLLaMA/comments/1jzci5s/if_i_use_llama_for_my_company_internal_chat_am_i/
| false | false |
self
| 0 | null |
Adding a second GPU or replace it?
| 3 |
So my current setup is an old gtx 1080.
I plan to buy a 3080 or 3090.
Should I add it and use both, or would the performance difference between the two be too large, meaning I should use only the newer one?
Thanks
| 2025-04-14T23:06:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzcr6e/adding_a_second_gpu_or_replace_it/
|
Dentifrice
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzcr6e
| false | null |
t3_1jzcr6e
|
/r/LocalLLaMA/comments/1jzcr6e/adding_a_second_gpu_or_replace_it/
| false | false |
self
| 3 | null |
Training for agentic capabilities will most likely be very fruitful
| 1 |
Models start off as pretrained predictors of language, and the purpose of the post-training phase is to elicit the innate skills the model learnt during pretraining and direct them toward a purpose (chatbots, agents, CoT reasoners).
I say elicit rather than learn because the model can be made to exhibit these skills with an astronomically smaller amount of training data than the pretraining phase ( see: [https://wandb.ai/byyoung3/ml-news/reports/S1-Achieving-Test-Time-Scaling-with-Just-1-000-Examples---VmlldzoxMTIxNjc3Nw](https://wandb.ai/byyoung3/ml-news/reports/S1-Achieving-Test-Time-Scaling-with-Just-1-000-Examples---VmlldzoxMTIxNjc3Nw) where CoT abilities were elicited with just 1000 examples).
Now I say that because something in the OpenAI prompting guide ([https://cookbook.openai.com/examples/gpt4-1_prompting_guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide)) caught my eye: apparently just by prompting the model to act as an agent, you can get it to be 20% better at SWE, which is kinda mad. This indicates to me a powerful innate ability to perform agentic, long-horizon tasks that is somewhat unveiled by prompting the model in this way.
Based on how it worked with CoT, prompting a model to change its behaviour is no substitute for actually RL-training the model to behave as you want (which makes sense theoretically as well). So if a good RL scheme is found for agentic abilities (probably not too hard, but definitely very compute-intensive), the evidence points to agentic capabilities being greatly enhanced, not just marginally.
| 2025-04-14T23:41:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzdigy/training_for_agentic_capabilities_will_most/
|
JohnnyLiverman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzdigy
| false | null |
t3_1jzdigy
|
/r/LocalLLaMA/comments/1jzdigy/training_for_agentic_capabilities_will_most/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '5KdweIdpkZNUpImFAI957DcI8sdfHZxzZc91lprSuBA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=108&crop=smart&auto=webp&s=976c80388c5cc130d858bcb78b0e344a46d232c4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=216&crop=smart&auto=webp&s=1fb375c3fb11dcab79db7220e2666b8b1d83ce88', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=320&crop=smart&auto=webp&s=001c2963240bad858778cf415949d89795a35210', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?width=640&crop=smart&auto=webp&s=261c9deb7e6c15da8d3149302d6c95caaa3a9257', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/5ktc8i_UgLpsS64k6QxM8BaTMR-mr5YeXwcbjTWWHHU.jpg?auto=webp&s=f56c07642fbcaa0cfd68f0d3d45e630518610b54', 'width': 900}, 'variants': {}}]}
|
DGX Spark(Ascent GX10) vs M3 Ultra 256GB
| 1 |
[removed]
| 2025-04-14T23:47:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzdn97/dgx_sparkascent_gx10_vs_m3_ultra_256gb/
|
CombinationEnough314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzdn97
| false | null |
t3_1jzdn97
|
/r/LocalLLaMA/comments/1jzdn97/dgx_sparkascent_gx10_vs_m3_ultra_256gb/
| false | false |
self
| 1 | null |
How many tok/s is enough?
| 7 |
Hi! I'm exploring different options for local LLM hosting and wanted to ask a few questions to the community:
1) How many tokens per second do you consider acceptable? How slow can a model be before you switch to a smaller model? Does this vary by use case?
2) What's your current go-to model (incl. quant)?
3) What hardware are you running this on? How much did the setup cost, and how many tok/sec do you get?
Interested in partial answers too if you don't want to answer all three questions.
Thanks!
| 2025-04-15T00:14:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jze7v5/how_many_toks_is_enough/
|
evil0sheep
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jze7v5
| false | null |
t3_1jze7v5
|
/r/LocalLLaMA/comments/1jze7v5/how_many_toks_is_enough/
| false | false |
self
| 7 | null |
The real cost of hosting an LLM
| 0 |
**Disclaimer before diving in**: I hope we missed something and that we're wrong about some of our assumptions and someone here can help us figure out ways to improve our approach. I've basically become a skeptic that private LLMs can be of much use for anything but basic tasks (which is fine for private usage and workflows and I totally get that), but I'm 100% willing to change my mind.
___
We've been building a B2B AI product and kept running into the "we need our sensitive data kept private, can we self-host the LLM?" question, especially from enterprise clients in regulated fields. So we went ahead and deployed a private LLM and integrated it with our product.
Sharing our findings because the reality was pretty eye-opening, especially regarding costs and performance trade-offs compared to commercial APIs.
**The TL;DR:** Going private for data control comes at a *massive* cost premium and significant performance hit compared to using major API providers (OpenAI, Anthropic, Google). This is kind of obvious, but the gap was stunning to me. We're still doing this for some of our clients, but it did leave us with more questions than answers about the economics, and I'm actually really eager to hear what others have found.
This is roughly the thought process and steps we went through:
1. **Our use case:** We needed specific features like function calling and support for multi-step agentic workflows. This immediately ruled out some smaller/simpler models that didn't have native tool calling support. It's also worth noting that because of the agentic nature of our product, the context is incredibly variable and can quickly grow if the AI is working on a complex task.
2. **The hardware cost:** We looked at models like Qwen-2.5 32B, QwQ 32B and Llama-3 70B.
* **Qwen-2.5 32B or QwQ 32B:** Needs something like an AWS g5.12xlarge (4x A10G) instance. Cost: **~$50k/year** (running 24/7).
* **Llama-3 70B:** Needs a beefier instance like p4d.24xlarge (8x A100). Cost: **~$287k/year** (running 24/7).
* (We didn't even bother pricing out larger models after seeing this).
* We're keeping our ears to the ground for new and upcoming open source models
3. **Performance gap:** Even paying ~$50k/year for the private QwQ model, benchmarks clearly show a huge difference between, say, Gemini 2.5 Pro and these models. This is pretty obvious, but beyond the benchmarks, from playing around with QwQ quite a bit on heavy-duty data analysis use cases, I can just say that it felt like driving a Prius vs a Model S Plaid.
4. **Concurrency is tricky:** Larger models (30B+) are generally more capable but much slower. Running multiple users concurrently can quickly create bottlenecks or require *even more* hardware, driving costs higher. Smaller models are faster but less capable. We don't have a ton of literal concurrent usage of the same model in the same org (we may have more than one user in an org using the AI at the same time, but it's rarely at the exact same minute). Even without concurrent usage, though, it feels much slower...
5. **Some ideas we've implemented or are considering:**
* Spinning instances up/down instead of 24/7 (models take a few mins to load).
* Smarter queuing and UI feedback to deal with the higher latency
* Aggressive prompt engineering (managing context window size, reducing chattiness like we found with QwQ). We've tried very hard to get QwQ to talk less, to no avail. And unfortunately it means that it uses up its own context very quickly, so we're exploring ways to reduce the context that we provide. But this comes at an accuracy hit.
* Hoping models get more efficient *fast*. Generally time is our friend here, but there's probably some limit to how good models can get on "small" compute instance.
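On the spin-up/down idea in the list above, the mechanics are simple with boto3; a sketch (the instance ID and region are placeholders, and you still pay the few minutes of model-load time after boot):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE_ID = "i-0123456789abcdef0"  # placeholder for the GPU instance

    def wake_llm() -> None:
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    def sleep_llm() -> None:
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])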
This is basically where I've landed for now: Private LLMs are incredibly expensive, much worse and much slower than hosted LLMs. The gap feels so wide to me that I've started laying this out very very clearly for our enterprise customers making sure they understand what they're paying for both in terms of performance and cost for the added privacy. If I were to make a big bet: all but the most extreme privacy-minded companies will go deep on a specific LLM provider and most SaaS providers will have to be able to support any LLM vs privately hosted LLMs. We've done a lot of work to remain LLM-agnostic and this has reinforced my conviction in our approach on this front.
Side note: I can't quite wrap my head around how much cash major LLM providers are burning every day. It feels to me like we're in the days when you could take an Uber to cross SF for $5. Or maybe the economies of scale work for them in a way that doesn't for someone outsourcing compute.
**Would love to know if there's something you've tried that has worked for you or something we may have not considered!**
| 2025-04-15T00:36:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzeo0l/the_real_cost_of_hosting_an_llm/
|
full_arc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzeo0l
| false | null |
t3_1jzeo0l
|
/r/LocalLLaMA/comments/1jzeo0l/the_real_cost_of_hosting_an_llm/
| false | false |
self
| 0 | null |
Added GPT-4.1, Gemini-2.5-Pro, DeepSeek-V3-0324 etc...
| 415 |
Due to resolution limitations, this demonstration only includes the top 16 scores from my KCORES LLM Arena. Of course, I also tested other models, but they didn't make it into this ranking.
The prompt used is as follows:
Write a Python program that shows 20 balls bouncing inside a spinning heptagon:
- All balls have the same radius.
- All balls have a number on it from 1 to 20.
- All balls drop from the heptagon center when starting.
- Colors are: #f8b862, #f6ad49, #f39800, #f08300, #ec6d51, #ee7948, #ed6d3d, #ec6800, #ec6800, #ee7800, #eb6238, #ea5506, #ea5506, #eb6101, #e49e61, #e45e32, #e17b34, #dd7a56, #db8449, #d66a35
- The balls should be affected by gravity and friction, and they must bounce off the rotating walls realistically. There should also be collisions between balls.
- The material of all the balls determines that their impact bounce height will not exceed the radius of the heptagon, but higher than ball radius.
- All balls rotate with friction, the numbers on the ball can be used to indicate the spin of the ball.
- The heptagon is spinning around its center, and the speed of spinning is 360 degrees per 5 seconds.
- The heptagon size should be large enough to contain all the balls.
- Do not use the pygame library; implement collision detection algorithms and collision response etc. by yourself. The following Python libraries are allowed: tkinter, math, numpy, dataclasses, typing, sys.
- All codes should be put in a single Python file.
| 2025-04-15T00:49:19 |
https://v.redd.it/4l29hha7bwue1
|
Dr_Karminski
|
/r/LocalLLaMA/comments/1jzexz7/added_gpt41_gemini25pro_deepseekv30324_etc/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzexz7
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4l29hha7bwue1/DASHPlaylist.mpd?a=1747399770%2CMDM1MDk0ZjNlZjZhODdmNTEzMjIzY2U4NDdjMmZlYjY5YjdhYmQ5MTA5ZTkwNWVjMDE3MjIzOWZmZWZhNTNmMw%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/4l29hha7bwue1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4l29hha7bwue1/HLSPlaylist.m3u8?a=1747399770%2CMTliZWU0N2Q2OTlmMjAxZDI1OTYzMThmZWNkZWJmOTk5YzkxNTI4OWRjYTE0NGE2NjM1ZmIzNjExN2QyODBiMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4l29hha7bwue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jzexz7
|
/r/LocalLLaMA/comments/1jzexz7/added_gpt41_gemini25pro_deepseekv30324_etc/
| false | false | 415 |
{'enabled': False, 'images': [{'id': 'MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=108&crop=smart&format=pjpg&auto=webp&s=39c003da229b8d9a2c0c36053f0c9f3f250f32c8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=216&crop=smart&format=pjpg&auto=webp&s=c18489ce236316a2e41b1b034fb0817bf41839bc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=320&crop=smart&format=pjpg&auto=webp&s=9979c584cbd98341da421523893020e6e9412df1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=640&crop=smart&format=pjpg&auto=webp&s=563229f3bdcb87fb1b51bff7bebaee96f80021c4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=960&crop=smart&format=pjpg&auto=webp&s=6c53eea0e7d3a9d8b1553a76322fca756b82dde2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=136d46bd96a38267c601cae935ea8dc789899406', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MzZxeWhkYTdid3VlMQu6kfIQ2qvjZubTK1d4oAMWlE_XwqBzgnAgfYZK7ysP.png?format=pjpg&auto=webp&s=8ef23a146c83b03f4cb2d46d639bf3dabc877eae', 'width': 1920}, 'variants': {}}]}
|
|
Mac Studio vs. NVIDIA GPUs, pound for pound comparison for training & inferencing
| 1 |
I am interested in either getting a Mac Studio with higher specs or building a GPU workstation with 2-3 GPUs (options are NVIDIA A6000, 6000 Ada, or similar >= 32GB VRAM GPUs). I often see the GPUs being benchmarked and compared to each other in charts, but where do Mac chips stack up in comparison? Are they not even in the same league as the options I listed above? If not, what would they be more comparable to in the NVIDIA GPU family?
I am aware that Mac Studios are a different paradigm with the unified memory and all, and to preempt the replies: I understand that more often than not the answer is "it depends". I am ultimately interested in training models for research purposes, finetuning >= 7b models, and inferencing with models with <= 100b parameters. What would the comparison look like for training and/or inferencing on a Mac vs. external NVIDIA GPUs?
| 2025-04-15T00:51:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzezim/mac_studio_vs_nvidia_gpus_pound_for_pound/
|
Strong-Net4501
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzezim
| false | null |
t3_1jzezim
|
/r/LocalLLaMA/comments/1jzezim/mac_studio_vs_nvidia_gpus_pound_for_pound/
| false | false |
self
| 1 | null |
Visual / Multimodal reasoning benchmarks
| 4 |
Hi,
I have a project where I am working with real-world images and asking questions with a multimodal input model to identify objects. Is there a relevant benchmark (and question set) I can refer to? The closest I found was MMMU, which has questions not quite about real-world imagery but more about OCR and relevant details from science and other fields. VQAv2 is another one, but it seems not to have been updated for a few years and no leaderboards exist for it. It feels more relevant, but there hasn't been much activity on it since 2017.
Any other I should look at that have active leaderboards?
Thank you.
| 2025-04-15T01:32:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzft1h/visual_multimodal_reasoning_benchmarks/
|
World_of_Reddit_21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzft1h
| false | null |
t3_1jzft1h
|
/r/LocalLLaMA/comments/1jzft1h/visual_multimodal_reasoning_benchmarks/
| false | false |
self
| 4 | null |
Try to offend /r/LocalLlama in one sentence
| 1 | 2025-04-15T01:46:12 |
ForsookComparison
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzg2mr
| false | null |
t3_1jzg2mr
|
/r/LocalLLaMA/comments/1jzg2mr/try_to_offend_rlocalllama_in_one_sentence/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'CSJfpjUCgQWzKOHO92kmtYoaXihcfgxBKcQKmciuNj0', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/j2yndddelwue1.jpeg?width=108&crop=smart&auto=webp&s=8b760f105d82ded019c3866ce8cb61b54cd0bfd4', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/j2yndddelwue1.jpeg?width=216&crop=smart&auto=webp&s=050bb19d7dd607e2b0247be4666ee75c51847f02', 'width': 216}, {'height': 319, 'url': 'https://preview.redd.it/j2yndddelwue1.jpeg?width=320&crop=smart&auto=webp&s=d68fef2091dda8229fc17a903eb0e93834103e86', 'width': 320}], 'source': {'height': 480, 'url': 'https://preview.redd.it/j2yndddelwue1.jpeg?auto=webp&s=9e4b89401a0da576ed414f591a83a294380d339d', 'width': 481}, 'variants': {}}]}
|
|||
Best STT Computer Control?
| 1 |
What's the best STT computer-control setup out there?
I can't be fucked to type into the computer all day.
We are at the point where I should be able to say "pull this open" and it opens the app. Are there any low-level systems that achieve this? If so, drop a repo.
If not, I will build it myself, but I'm looking for a better option first.
| 2025-04-15T01:47:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzg3p3/best_stt_computer_control/
|
Fun_Yam_6721
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzg3p3
| false | null |
t3_1jzg3p3
|
/r/LocalLLaMA/comments/1jzg3p3/best_stt_computer_control/
| false | false |
self
| 1 | null |
I built lazyollama: a terminal interface to manage your Ollama chats more easily (open source, Go)
| 1 |
[removed]
| 2025-04-15T01:51:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzg5yc/i_built_lazyollama_a_terminal_interface_to_manage/
|
DTostes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzg5yc
| false | null |
t3_1jzg5yc
|
/r/LocalLLaMA/comments/1jzg5yc/i_built_lazyollama_a_terminal_interface_to_manage/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ez4-fpaEJCfD046Ky7sXlS9wufww4G7WFGc4NzOcWfQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=108&crop=smart&auto=webp&s=beab07b803a485d55ba513ffdac1658f1c47e4b2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=216&crop=smart&auto=webp&s=59e9e177a3ce169e084275c5dbd2c4fc8042c323', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=320&crop=smart&auto=webp&s=98fa367dc7c781935679f7b5d9055754629a25aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=640&crop=smart&auto=webp&s=0e6d6730307d1e7aca4d8e58442bfb1249b1459c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=960&crop=smart&auto=webp&s=39ca1991de2c36959082176f90a2b85863dc44ee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?width=1080&crop=smart&auto=webp&s=843868f98b4840d027d92a372e7610d62281d771', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WSomngM6jpWDdz5lzNuyoIs6TcTgzm1IDhI6po954_o.jpg?auto=webp&s=27219ebb7a00a6879b502402077e6e44cb1239ad', 'width': 1200}, 'variants': {}}]}
|
Creative Writing Setup: MacBook Pro vs Mac Studio vs 4090/5090 Build
| 1 |
[removed]
| 2025-04-15T02:18:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzgpw9/creative_writing_setup_macbook_pro_vs_mac_studio/
|
Such_Librarian9515
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzgpw9
| false | null |
t3_1jzgpw9
|
/r/LocalLLaMA/comments/1jzgpw9/creative_writing_setup_macbook_pro_vs_mac_studio/
| false | false |
self
| 1 | null |
Creative Writing Setup: MacBook Pro vs Mac Studio vs 4090/5090 Build
| 1 |
[removed]
| 2025-04-15T02:22:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzgsry/creative_writing_setup_macbook_pro_vs_mac_studio/
|
Such_Librarian9515
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzgsry
| false | null |
t3_1jzgsry
|
/r/LocalLLaMA/comments/1jzgsry/creative_writing_setup_macbook_pro_vs_mac_studio/
| false | false |
self
| 1 | null |
Creative Writing Setup: MacBook Pro vs Mac Studio vs 4090/5090 Build
| 0 |
I've been researching for the last month and keep coming back to these three options. Could you guys suggest one (or a combination?) that would best fit my situation?
• M4 Max Macbook Pro 128 GB 2TB
• Mac Studio
• RTX 4090 or 5090 custom build
I already own all apple products, so that is a consideration, but definitely not a dealbreaker!
I mainly use my computer for creative writing (which is what this will primarily be used for). Prose and character depth are *extremely* important to me, so I've been eyeing the larger LLMs for consistency, quality and world building. (Am I right to assume the bigger models are better for that?)
I don't code, but I also do a bit of photo and video editing on the side (just for fun). I've scrimped and saved some money to finally upgrade (my poor 8-yr-old Dell is seriously dragging, even with Gemini).
Any advice would be greatly appreciated!
| 2025-04-15T02:24:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzgtuw/creative_writing_setup_macbook_pro_vs_mac_studio/
|
Accomplished_Tear436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzgtuw
| false | null |
t3_1jzgtuw
|
/r/LocalLLaMA/comments/1jzgtuw/creative_writing_setup_macbook_pro_vs_mac_studio/
| false | false |
self
| 0 | null |
Are there local AI platforms/tools that only load the model into VRAM and load all context into RAM?
| 0 |
I'm trying to understand concepts of local AI.
I understand RAM is slower than VRAM, but I have 128GB RAM and only 12GB VRAM. Since the platform (ollama and sometimes LM Studio in my case) is primarily working with the model itself in VRAM and would need to access session context far less in comparison to the actual model, wouldn't a good solution be to load only the context into RAM? That way I could run a larger model since the VRAM would only contain the model and would not fill up with use.
It's kind of cool knowing that I'm asking such a kindergarten-level question without knowing the answer. It's humbling!
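For what it's worth, llama.cpp exposes roughly this split already: weights fully offloaded to VRAM while the KV cache stays in system RAM. A hedged example invocation (flag names as I recall them; check `--help` on your build, and expect a real speed penalty, since attention over the RAM-resident cache runs on the CPU side):

    llama-server -m model.gguf --n-gpu-layers 99 --no-kv-offload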
| 2025-04-15T02:45:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzh8ju/are_there_local_ai_platformstools_that_only_load/
|
snowglowshow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzh8ju
| false | null |
t3_1jzh8ju
|
/r/LocalLLaMA/comments/1jzh8ju/are_there_local_ai_platformstools_that_only_load/
| false | false |
self
| 0 | null |
Llama 4 received so much hate but it actually performs better than newly released GPT 4.1 in my workflow.
| 1 |
[removed]
| 2025-04-15T02:49:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzhbau/llama_4_received_so_much_hate_but_it_actually/
|
dheetoo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhbau
| false | null |
t3_1jzhbau
|
/r/LocalLLaMA/comments/1jzhbau/llama_4_received_so_much_hate_but_it_actually/
| false | false |
self
| 1 | null |
Again, no Cogito is deployed, just like Athene
| 1 |
I was looking for Athene-V2 to be deployed on any inference provider, but it never was. Now I am looking for Cogito; it has been almost a week, and it looks like it will never be deployed either.
Is buying a GPU the ultimate solution?
| 2025-04-15T02:54:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzhesl/again_no_cogito_is_deployed_just_like_athene/
|
Emotional-Metal4879
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhesl
| false | null |
t3_1jzhesl
|
/r/LocalLLaMA/comments/1jzhesl/again_no_cogito_is_deployed_just_like_athene/
| false | false |
self
| 1 | null |
Reintroducing Chonkie 🦛✨ - The no-nonsense Chunking library
| 1 |
[removed]
| 2025-04-15T03:01:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzhjn3/reintroducing_chonkie_the_nononsense_chunking/
|
shreyash_chonkie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhjn3
| false | null |
t3_1jzhjn3
|
/r/LocalLLaMA/comments/1jzhjn3/reintroducing_chonkie_the_nononsense_chunking/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IHxvoIPLykP0bEgaI5_lPPVZwj2Oo8ShEVarsDr74ek', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=108&crop=smart&auto=webp&s=f238c7f6f14b9109d38641294c22dc7e15250022', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=216&crop=smart&auto=webp&s=58c5c5a82c797f06ef13ae5dfe55b04412ad5632', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=320&crop=smart&auto=webp&s=404f21cebcfd3c9bc554407fb53aab1ac7994517', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=640&crop=smart&auto=webp&s=369348326d5ce6a9dd17285a0ac2bc9334cffc36', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=960&crop=smart&auto=webp&s=9ae1daee89265e05c20e10ccaa2ecb28d6c518cb', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?auto=webp&s=50451b1c8e9298e8298a59cd642d12a059933bac', 'width': 960}, 'variants': {}}]}
|
Is there any comprehensive guide to best-practice LLM use?
| 1 |
I have a project involving a few hundred PDFs with tables, all formatted differently and with the same fields labeled inconsistently (think teacher vs. professor vs. instructor). I assume there are best practices for this sort of task, and/or models more optimized for it than a generic multimodal model, but I've been pretty basic in my LLM use thus far, so I'm not sure what resources or specialized tools are out there.
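One common first step, independent of which model you pick, is normalizing the inconsistent labels against a synonym map before or after extraction. A minimal sketch; the mapping and function name below are illustrative, not from any particular library:

```python
# Normalize inconsistent field labels to one canonical name per field.
FIELD_SYNONYMS = {
    "instructor": {"teacher", "professor", "instructor", "lecturer"},
}

def normalize_field(raw_label: str) -> str:
    label = raw_label.strip().lower()
    for canonical, synonyms in FIELD_SYNONYMS.items():
        if label in synonyms:
            return canonical
    return label  # fall through for labels with no mapping yet

print(normalize_field("  Professor "))  # -> "instructor"
```

Labels that fall through can then be handed to the LLM (or flagged for review), which keeps the expensive model calls for the genuinely ambiguous cases.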
| 2025-04-15T03:04:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzhll5/is_there_any_comprehensive_guide_to_bestpractice/
|
TrekkiMonstr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhll5
| false | null |
t3_1jzhll5
|
/r/LocalLLaMA/comments/1jzhll5/is_there_any_comprehensive_guide_to_bestpractice/
| false | false |
self
| 1 | null |
AudioX: Diffusion Transformer for Anything-to-Audio Generation
| 51 | 2025-04-15T03:23:25 |
https://zeyuet.github.io/AudioX/
|
MrHubbub88
|
zeyuet.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhyoh
| false | null |
t3_1jzhyoh
|
/r/LocalLLaMA/comments/1jzhyoh/audiox_diffusion_transformer_for_anythingtoaudio/
| false | false |
default
| 51 | null |
|
SurfSense - The Open Source Alternative to NotebookLM / Perplexity / Glean
| 1 |
[removed]
| 2025-04-15T03:25:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzhzyp/surfsense_the_open_source_alternative_to/
|
Uiqueblhats
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzhzyp
| false | null |
t3_1jzhzyp
|
/r/LocalLLaMA/comments/1jzhzyp/surfsense_the_open_source_alternative_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'm-OJc2lLsbJmZPsnIujhoVMjKExH5SdxCMwhLJRwQEs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=108&crop=smart&auto=webp&s=7a726719e1605fd08eca62c448ef556dff323421', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=216&crop=smart&auto=webp&s=27c4a4b776bd85a062189963bcf477a08c151963', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=320&crop=smart&auto=webp&s=61f94e240866b65c54d78090c3ef8cc644fd7931', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=640&crop=smart&auto=webp&s=55b35d7c6b6e6627e7040d87f9e4f45f3ed17703', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=960&crop=smart&auto=webp&s=399afd7bd4fdeb9ec679dcd9988f44dfdb2b709a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=1080&crop=smart&auto=webp&s=d2b00b93919c6b0e8a92be40e180e7e2c5d73e06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?auto=webp&s=19cf77dfee92456cd30b63a1e59f87d2c59f2480', 'width': 1200}, 'variants': {}}]}
|
Reintroducing Chonkie 🦛✨ - The no-nonsense Chunking library
| 1 |
[removed]
| 2025-04-15T03:33:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzi557/reintroducing_chonkie_the_nononsense_chunking/
|
ezioisbatman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzi557
| false | null |
t3_1jzi557
|
/r/LocalLLaMA/comments/1jzi557/reintroducing_chonkie_the_nononsense_chunking/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IHxvoIPLykP0bEgaI5_lPPVZwj2Oo8ShEVarsDr74ek', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=108&crop=smart&auto=webp&s=f238c7f6f14b9109d38641294c22dc7e15250022', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=216&crop=smart&auto=webp&s=58c5c5a82c797f06ef13ae5dfe55b04412ad5632', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=320&crop=smart&auto=webp&s=404f21cebcfd3c9bc554407fb53aab1ac7994517', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=640&crop=smart&auto=webp&s=369348326d5ce6a9dd17285a0ac2bc9334cffc36', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=960&crop=smart&auto=webp&s=9ae1daee89265e05c20e10ccaa2ecb28d6c518cb', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?auto=webp&s=50451b1c8e9298e8298a59cd642d12a059933bac', 'width': 960}, 'variants': {}}]}
|
OpenGVLab/InternVL3-78B · Hugging Face
| 28 | 2025-04-15T03:37:27 |
https://huggingface.co/OpenGVLab/InternVL3-78B
|
ninjasaid13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzi80v
| false | null |
t3_1jzi80v
|
/r/LocalLLaMA/comments/1jzi80v/opengvlabinternvl378b_hugging_face/
| false | false | 28 |
{'enabled': False, 'images': [{'id': 'HqPEolPC0n91f6rJsT_lCvYJT03xfZZkLfHaISW2GjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=108&crop=smart&auto=webp&s=9069af6aff625fb2fab8c9800797a08e864f9988', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=216&crop=smart&auto=webp&s=676c4c5dd2c31015a0e561b99190b98a13a25609', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=320&crop=smart&auto=webp&s=9dd7c86e46730b9e8cbcd7a9e9a63247597c927b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=640&crop=smart&auto=webp&s=177da9bc925cd9f6eb8cd4e88a6da2bc044fbdec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=960&crop=smart&auto=webp&s=0bdeac270dfa8efb4824b7f1332376f8148c8643', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?width=1080&crop=smart&auto=webp&s=2b6a25cd56d153030387e8acba26ce4a87792a33', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fsKU5nhMYkzvL-kCAfdiwOeU2WULn6GxtWJDHY7_FrI.jpg?auto=webp&s=fe17e6b77f4d4a4ad9aefe5fd921ab52922a64aa', 'width': 1200}, 'variants': {}}]}
|
||
Reintroducing Chonkie 🦛✨ - The no-nonsense Chunking library
| 1 |
[removed]
| 2025-04-15T03:42:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzib8a/reintroducing_chonkie_the_nononsense_chunking/
|
ezioisbatman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzib8a
| false | null |
t3_1jzib8a
|
/r/LocalLLaMA/comments/1jzib8a/reintroducing_chonkie_the_nononsense_chunking/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IHxvoIPLykP0bEgaI5_lPPVZwj2Oo8ShEVarsDr74ek', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=108&crop=smart&auto=webp&s=f238c7f6f14b9109d38641294c22dc7e15250022', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=216&crop=smart&auto=webp&s=58c5c5a82c797f06ef13ae5dfe55b04412ad5632', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=320&crop=smart&auto=webp&s=404f21cebcfd3c9bc554407fb53aab1ac7994517', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=640&crop=smart&auto=webp&s=369348326d5ce6a9dd17285a0ac2bc9334cffc36', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?width=960&crop=smart&auto=webp&s=9ae1daee89265e05c20e10ccaa2ecb28d6c518cb', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/AK0A1p60GKMjigusjTTHbBLIlbaTfT0qcoUS5n3QIAg.jpg?auto=webp&s=50451b1c8e9298e8298a59cd642d12a059933bac', 'width': 960}, 'variants': {}}]}
|
MCP, the easy way (a beginner's perspective)
| 0 |
So I was exploring MCP, and at first nothing got into my head. I just got the basic overview: you connect your APIs and resources to the chatbot for more context. Later there was a LinkedIn post mentioning https://openapitools.com. There, you give it the API schema and generate tools, download the MCP schema, give it to Claude, and boom, you have learnt MCP. Try it the easy way, and then maybe you can start building it yourself.
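If you do want to build one yourself afterwards, the official MCP Python SDK makes a toy server very short. A minimal sketch; the tool below is a made-up example, not something from the linked site:

```python
# A tiny MCP server using the official Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_weather(city: str) -> str:
    """Toy tool: a real server would call an actual weather API here."""
    return f"It is sunny in {city}."

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; point your MCP client at this script
```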
| 2025-04-15T03:43:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzibsc/mcp_the_easy_waybeginners_perspective/
|
Loose_Unit_7943
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzibsc
| false | null |
t3_1jzibsc
|
/r/LocalLLaMA/comments/1jzibsc/mcp_the_easy_waybeginners_perspective/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'GS7gaARFjq2wRkbkA5_VAVOm5Uf9GN5wzStP9biQS1w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?width=108&crop=smart&auto=webp&s=1ef20c0ce217af573be435da1dc5f48f3e5a6470', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?width=216&crop=smart&auto=webp&s=c9b456c0da6164799391dafdbbd5a3f965c670fc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?width=320&crop=smart&auto=webp&s=0dba5e7b234b965c6f502fcc29ee7fb91b2fe5d8', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?width=640&crop=smart&auto=webp&s=e4b60ab0986e288ffab50ff6fd81bb66a8320b5b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?width=960&crop=smart&auto=webp&s=bd27001c5c740ed0a5cbc815fbee2b5ff79df0a4', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/wYsXlx2Uvi46hd842RHSj9SZj9Q-26hSyog0_XtoPas.jpg?auto=webp&s=ecbc065e6d28e1567c7d215eb70545f69a6779aa', 'width': 1024}, 'variants': {}}]}
|
The Open Source Alternative to NotebookLM / Perplexity / Glean
| 47 |
For those of you who aren't familiar with **SurfSense**, it aims to be the open-source alternative to **NotebookLM**, **Perplexity**, or **Glean**.
In short, it's a highly customizable AI research agent connected to your personal external sources such as search engines (Tavily), Slack, Notion, YouTube, GitHub, and more coming soon.
I'll keep this short—here are a few highlights of SurfSense:
**Advanced RAG Techniques**
* Supports **150+ LLMs**
* Supports local **Ollama LLMs**
* Supports **6000+ Embedding Models**
* Works with all major rerankers (Pinecone, Cohere, Flashrank, etc.)
* Uses **Hierarchical Indices** (2-tiered RAG setup)
* Combines **Semantic + Full-Text Search** with **Reciprocal Rank Fusion** (Hybrid Search; a minimal sketch of RRF follows this list)
* Offers a **RAG-as-a-Service API Backend**
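Reciprocal Rank Fusion itself is a standard, tiny formula: each document scores the sum of 1/(k + rank) over every ranker that returned it. A minimal sketch with made-up document IDs (k = 60 is the constant from the original RRF paper):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse multiple ranked lists of doc IDs with RRF.

    rankings: list of lists, each ordered best-first.
    k: damping constant (60 comes from the original RRF paper).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a semantic ranking with a full-text (BM25) ranking.
semantic = ["doc3", "doc1", "doc2"]
fulltext = ["doc1", "doc4", "doc3"]
print(reciprocal_rank_fusion([semantic, fulltext]))  # doc1 and doc3 lead
```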
**External Sources**
* Search engines (Tavily)
* Slack
* Notion
* YouTube videos
* GitHub
* ...and more on the way
**Cross-Browser Extension**
The SurfSense extension lets you save any dynamic webpage you like. Its main use case is capturing pages that are protected behind authentication.
Check out SurfSense on GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense)
| 2025-04-15T03:45:15 |
https://github.com/MODSetter/SurfSense
|
Uiqueblhats
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzid3a
| false | null |
t3_1jzid3a
|
/r/LocalLLaMA/comments/1jzid3a/the_open_source_alternative_to_notebooklm/
| false | false | 47 |
{'enabled': False, 'images': [{'id': 'm-OJc2lLsbJmZPsnIujhoVMjKExH5SdxCMwhLJRwQEs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=108&crop=smart&auto=webp&s=7a726719e1605fd08eca62c448ef556dff323421', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=216&crop=smart&auto=webp&s=27c4a4b776bd85a062189963bcf477a08c151963', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=320&crop=smart&auto=webp&s=61f94e240866b65c54d78090c3ef8cc644fd7931', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=640&crop=smart&auto=webp&s=55b35d7c6b6e6627e7040d87f9e4f45f3ed17703', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=960&crop=smart&auto=webp&s=399afd7bd4fdeb9ec679dcd9988f44dfdb2b709a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?width=1080&crop=smart&auto=webp&s=d2b00b93919c6b0e8a92be40e180e7e2c5d73e06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U2awOOAulsxyu0ZXVxTAnoAYdz7yFp6U--ClBjCGi4Y.jpg?auto=webp&s=19cf77dfee92456cd30b63a1e59f87d2c59f2480', 'width': 1200}, 'variants': {}}]}
|
|
What is the best way to to use local LLM in an electron application?
| 2 |
How do I use a local LLM in an Electron application the same way msty.app does? That is, where you download the LLM of your choice and start using it right away after installation, eliminating the need for complex installations or command-line operations. As someone who has only worked with OpenAI APIs, I have little to no clue how to do this; a little help would be appreciated 🙌
| 2025-04-15T04:02:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jziof7/what_is_the_best_way_to_to_use_local_llm_in_an/
|
Daddyinthepaddy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jziof7
| false | null |
t3_1jziof7
|
/r/LocalLLaMA/comments/1jziof7/what_is_the_best_way_to_to_use_local_llm_in_an/
| false | false |
self
| 2 | null |
Introducing liquid autoregressors. An innovative architecture for building AGI/ASI [concept]
| 0 |
Hello community! You probably know how all AI models work. Text-only LLMs have a pre-defined vocabulary of tokens (text parts mapped to numbers), VLMs can magically encode images into vectors directly in latent space without tokens, and so on. But what if this could be simplified even further?
Introducing liquid autoregressive transformers. Here, to build a model, you would need to specify only two things: how many modalities you want (e.g., audio, visuals, and text) and how big the maximum shell of the model can be (10M liters = 10B parameters = 100 GB (uncompressed)). That's it. The main idea of this architecture is that, for example, for text, you take all your datasets in all languages and start the auto-tokenizer creation process, which automatically finds the best possible token splitting for all languages.
Then, suppose you want to add modalities, such as audio. In that case, you drop your audio dataset, and out of that distribution, it automatically creates the perfect line of best fit with a few additional tokens for out-of-distribution data. For images, it is the same. And yes, no raw vectors. All modalities are converted into text-like tokens. If there are not enough tokens per chunk of data (e.g., the bit rate is too high), then it will either losslessly compress or create a <chunk> to bundle big stuff together.
Fun fact: there is no NN inside. I mean, it's not pre-defined, and it can reshape itself. It is more comfortable for the data distribution to stay the same size. Also, even though it generates autoregressively, it can look around in all directions at any time (spoiler: yes, it even messages you first without prompting, because it can create a ripple that triggers reasoning inside even if no input is provided).
And yes, it doesn't require a super huge GPU, because it can reshape itself even when training is not done, to further improve untrained parts :)
What do you think?
| 2025-04-15T04:03:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzipf5/introducing_liquid_autoregressors_an_innovative/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzipf5
| false | null |
t3_1jzipf5
|
/r/LocalLLaMA/comments/1jzipf5/introducing_liquid_autoregressors_an_innovative/
| false | false |
self
| 0 | null |
New Moondream VLM Release (2025-04-14)
| 59 | 2025-04-15T04:57:09 |
https://moondream.ai/blog/moondream-2025-04-14-release
|
radiiquark
|
moondream.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzjm23
| false | null |
t3_1jzjm23
|
/r/LocalLLaMA/comments/1jzjm23/new_moondream_vlm_release_20250414/
| false | false |
default
| 59 | null |
|
We built this open-source Android operator at a hackathon
| 1 |
Hey folks! 👋
Saw the DroidRun post earlier (really cool stuff btw!) and wanted to share what me and my crew hacked together at Bitcamp this past weekend. We built the same thing, a full-blown smart agent that runs *on your phone* and can do stuff like book an Uber, follow someone on LinkedIn, send a message if you're running late — all through step-by-step local control.
In our case, the difference is we're **not** using any vision models or image processing. Instead, we built our own grid-based image tagging system that helps Gemini translate interface elements into unique grid codes at runtime. Then we simply convert them back to coordinates in the app. It's fast, doesn't rely on pixel detection, and works pretty reliably across apps.
We religiously studied and followed browser-use for the RAW prompt logic + function calls, glued them together with tons of caffeine, zero sleep, and questionable file structure 🥴
We *do* have a memory layer and agent state handling, so it’s not just one-off actions — it can plan and recover when it gets stuck. It's all kinda messy right now (code-wise), but it **works end to end** and we’d love for y’all to take a look and poke around the codebase.
Github: [https://github.com/invcble/ares\_ai](https://github.com/invcble/ares_ai)
Youtube Demo: [https://www.youtube.com/watch?v=awKfjunMDRg](https://www.youtube.com/watch?v=awKfjunMDRg)
DevPost: [https://devpost.com/software/ares-ai](https://devpost.com/software/ares-ai)
PS: We did not win the hackathon, so a Star to the repo would mean a lot **👉👈**
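To make the grid idea concrete, here is a hypothetical sketch of the decode step; the cell size, labeling scheme, and function name are illustrative, not the repo's actual code:

```python
# Hypothetical sketch: map a grid code back to a tap coordinate.
def cell_to_coords(code: str, screen_w: int, screen_h: int,
                   cols: int = 20, rows: int = 40):
    """Map a grid code like 'C7' (column letter, row number) to the
    pixel center of that cell on the device screen."""
    col = ord(code[0].upper()) - ord("A")
    row = int(code[1:]) - 1
    x = (col + 0.5) * screen_w / cols
    y = (row + 0.5) * screen_h / rows
    return int(x), int(y)

print(cell_to_coords("C7", 1080, 2400))  # tap point for cell C7
```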
| 2025-04-15T05:07:08 |
https://v.redd.it/o5gx43j9lxue1
|
invcble
|
/r/LocalLLaMA/comments/1jzjs2h/we_built_this_opensource_android_operator_at_a/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzjs2h
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/o5gx43j9lxue1/DASHPlaylist.mpd?a=1747415236%2CN2IzMWZhYTdjZWU3NWE0N2YzMjE4ZDU4YjM1YzliM2NhOWI2OWJjZWQwNzFlNGI5ZGFjNzdjYTQzMDRlNzI3MA%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/o5gx43j9lxue1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/o5gx43j9lxue1/HLSPlaylist.m3u8?a=1747415236%2CZGI0YmI2MDEyZjEwNGM5NWUyNGE3MWJjMjljNjdiZDMzYmIwY2ZjYWFiNmFkODM0Y2VlYmVmYjNmMmMyZWI1Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o5gx43j9lxue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jzjs2h
|
/r/LocalLLaMA/comments/1jzjs2h/we_built_this_opensource_android_operator_at_a/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=108&crop=smart&format=pjpg&auto=webp&s=0bcd8484c5b027a89af951836cc0a5cbd6f96fb7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=216&crop=smart&format=pjpg&auto=webp&s=5a0da759e001e2dcb9fede80232f6119f65c0434', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=320&crop=smart&format=pjpg&auto=webp&s=35cf7b495268771d4faf2f2e09fedc265ed8cefb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=640&crop=smart&format=pjpg&auto=webp&s=a2924cefde9a8f6c4fe0c7a46d3707be1d57f066', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=960&crop=smart&format=pjpg&auto=webp&s=7655eb04965174c1e66de45f9906d0843be9bf87', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=73a1984667401d5ae613224a1d74706a8e9ad780', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YTY0cjAzajlseHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?format=pjpg&auto=webp&s=bf7246f090c81954fa0c42cf542e2bdb2dcb83c8', 'width': 1920}, 'variants': {}}]}
|
|
Try my app: it runs Llama for us all. Improve your life
| 1 |
[removed]
| 2025-04-15T05:14:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzjw34/try_my_app_it_runs_llama_for_us_all_improve_your/
|
CandleNo3078
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzjw34
| false | null |
t3_1jzjw34
|
/r/LocalLLaMA/comments/1jzjw34/try_my_app_it_runs_llama_for_us_all_improve_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'lK0Mq23zx23iYKlgbc9E-1Do8Pj7rRxxaP-SrtQi3Ko', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=108&crop=smart&auto=webp&s=ef9459b3b3ab38310866381bbb4199a9b02adf6b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=216&crop=smart&auto=webp&s=ac69b6cee86bfdab6123d049d8d7acb6ca94e9a0', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=320&crop=smart&auto=webp&s=52f1157e2bdbe0858eb6a99d7649053f352592d5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=640&crop=smart&auto=webp&s=118bf4a4fd9ccc0022f74263af048b6378867956', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=960&crop=smart&auto=webp&s=26c44a65ffae836e8fd673d46c5fb66044937914', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?width=1080&crop=smart&auto=webp&s=3689d48718da6b8bfc6dd7efc4721ae1e46ed1c4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xeG-R0vplSgkIWD8sXBHOZdZKOpMxAnQ0pjoV4NNzvA.jpg?auto=webp&s=f62d3ab701ada069ebd35063ef685d95182bd38f', 'width': 1200}, 'variants': {}}]}
|
We built this open-source Android operator at a hackathon
| 1 |
The earlier post got removed, so reposting. Saw the DroidRun post earlier (really cool stuff btw!) and wanted to share what me and my crew hacked together at Bitcamp this past weekend. We built the same thing, a full-blown smart agent that runs *on your phone* and can do stuff like book an Uber, follow someone on LinkedIn, send a message if you're running late — all through step-by-step local control.
In our case, the difference is we're **not** using any vision models or image processing. Instead, we built our own grid-based image tagging system that helps Gemini translate interface elements into unique grid codes at runtime. Then we simply convert them back to coordinates in the app. It's fast, doesn't rely on pixel detection, and works pretty reliably across apps.
We religiously studied and followed browser-use for the RAW prompt logic + function calls, glued them together with tons of caffeine, zero sleep, and questionable file structure 🥴
We *do* have a memory layer and agent state handling, so it’s not just one-off actions — it can plan and recover when it gets stuck. It's all kinda messy right now (code-wise), but it **works end to end** and we’d love for y’all to take a look and poke around the codebase.
Github: [https://github.com/invcble/ares\_ai](https://github.com/invcble/ares_ai)
Youtube Demo: [https://www.youtube.com/watch?v=awKfjunMDRg](https://www.youtube.com/watch?v=awKfjunMDRg)
| 2025-04-15T05:17:53 |
https://v.redd.it/sp7d4idlnxue1
|
invcble
|
/r/LocalLLaMA/comments/1jzjy6u/we_built_this_opensource_android_operator_at_a/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzjy6u
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/sp7d4idlnxue1/DASHPlaylist.mpd?a=1747415885%2CM2FiYWE1NjE4M2U1MzcyYzg4OTQwNzlmZDg5MzU4YzI0N2E2ZGQ2NWZiZjA3ZTRjODhkMTdlMzM2MzIyZTljZA%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/sp7d4idlnxue1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/sp7d4idlnxue1/HLSPlaylist.m3u8?a=1747415885%2CYmJiNWE0N2FkYWM3YTZhMmVmZTc2ZTk3OWQ0OTYxMWNhYmI4YmY3ZGNjODlmMzFjNjdjOGZiMjE3MDcxNTI3OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sp7d4idlnxue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jzjy6u
|
/r/LocalLLaMA/comments/1jzjy6u/we_built_this_opensource_android_operator_at_a/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0902910c0707389eaa5915f1353becfda51d8b3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=216&crop=smart&format=pjpg&auto=webp&s=1fee543acf9b7e18e6746a5317be81b69dd3c83d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=320&crop=smart&format=pjpg&auto=webp&s=211f44c9ef712d385341c6ca1b711bc5a3d6371b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=640&crop=smart&format=pjpg&auto=webp&s=5fd2481f818b50ffce9fca603510c8be21333165', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=960&crop=smart&format=pjpg&auto=webp&s=b7d86ad97e17643809efd7091bd1131086befe1b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fcfa08fb329e3a05c7041299ba03469645f2c2f7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MDZqZGZoZGxueHVlMVfcHJtza9ifYEQoEEf3p4BvX4TlDDgbbY255J_Byo_Z.png?format=pjpg&auto=webp&s=862fc78080b77dbce2e262720e3561c457b5dd8a', 'width': 1920}, 'variants': {}}]}
|
|
Best LLM app for Speech-to-speech conversation?
| 8 |
Best LLM app for Speech-to-speech conversation?
I tried one of the well-known AI LLM apps recently, and it was far from good at handling a proper speech-to-speech conversation. It kept cutting my speech off in the middle and submitting it to the LLM to generate a response. I had used a Whisper model for both STT and TTS.
Which LLM software is best for speech-to-speech?
Preferably an app with a proper installer rather than those pip commands.
For whatever reason they don't always work for me. They are not the problem; I am just not tech-savvy enough to troubleshoot.
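(Editor's note: the usual culprit for mid-sentence cutoffs is endpointing, not the STT model itself. A minimal sketch of silence-based endpointing with the webrtcvad package; the 800 ms threshold is an arbitrary example:)

```python
# Only end the user's turn after ~800 ms of continuous silence, so
# mid-sentence pauses don't get cut short and sent to the LLM early.
import webrtcvad

vad = webrtcvad.Vad(2)            # aggressiveness 0-3
SAMPLE_RATE = 16000
FRAME_MS = 30                     # webrtcvad accepts 10/20/30 ms frames
SILENCE_LIMIT = 800 // FRAME_MS   # frames of silence before ending the turn

def utterance_finished(frames):
    """frames: iterable of 30 ms chunks of 16-bit mono PCM bytes."""
    silence_count = 0
    for frame in frames:
        if vad.is_speech(frame, SAMPLE_RATE):
            silence_count = 0
        else:
            silence_count += 1
            if silence_count >= SILENCE_LIMIT:
                return True  # long enough pause: submit the transcript
    return False
```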
| 2025-04-15T05:18:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzjy9w/best_llm_app_for_speechtospeech_conversation/
|
ExtremePresence3030
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzjy9w
| false | null |
t3_1jzjy9w
|
/r/LocalLLaMA/comments/1jzjy9w/best_llm_app_for_speechtospeech_conversation/
| false | false |
self
| 8 | null |
built this open-source Android operator at hackathon
| 1 |
My post keeps getting removed, so reposting. Saw the DroidRun post earlier (really cool stuff btw!) and wanted to share what me and my crew hacked together at Bitcamp this past weekend. We built the same thing, a full-blown smart agent that runs *on your phone* and can do stuff like book an Uber, follow someone on LinkedIn, send a message if you're running late — all through step-by-step local control.
In our case, the difference is we're **not** using any vision models or image processing. Instead, we built our own grid-based image tagging system that helps Gemini translate interface elements into unique grid codes at runtime. Then we simply convert them back to coordinates in the app. It's fast, doesn't rely on pixel detection, and works pretty reliably across apps.
We religiously studied and followed browser-use for the RAW prompt logic + function calls, glued them together with tons of caffeine, zero sleep, and questionable file structure.
We *do* have a memory layer and agent state handling, so it’s not just one-off actions — it can plan and recover when it gets stuck. It's all kinda messy right now (code-wise), but it **works end to end** and we’d love for y’all to take a look and poke around the codebase.
Github: [https://github.com/invcble/ares\_ai](https://github.com/invcble/ares_ai)
Youtube Demo: [https://www.youtube.com/watch?v=awKfjunMDRg](https://www.youtube.com/watch?v=awKfjunMDRg)
PS: We did not win the hackathon, so a Star to the repo would mean a lot.
| 2025-04-15T05:27:32 |
https://v.redd.it/ktoq6si5pxue1
|
invcble
|
/r/LocalLLaMA/comments/1jzk3jw/built_this_opensource_android_operator_at/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzk3jw
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ktoq6si5pxue1/DASHPlaylist.mpd?a=1747416459%2CMGI1MDhlOThiMTE1YmU1NmYwNTcxMjNmYzY2ZGRkNDYyYjEyNjJkY2FkMTg1MTJjNzk0MDQxMDM4MDA5MGMyOQ%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/ktoq6si5pxue1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ktoq6si5pxue1/HLSPlaylist.m3u8?a=1747416459%2COTRlYWY2M2IzNTFkNTFhMzc3ZDJmNzliMzIxNzcxMzUxMjYwNDU3N2Y0NTczZjEzYjg2M2U5ZWY5YjMyMGRkOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ktoq6si5pxue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jzk3jw
|
/r/LocalLLaMA/comments/1jzk3jw/built_this_opensource_android_operator_at/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=108&crop=smart&format=pjpg&auto=webp&s=f2490b7e4af5c7b39a0f0e2c2585e197f0b78e9f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=216&crop=smart&format=pjpg&auto=webp&s=d3728a942f901bac31fdb561af14436cabe92ea4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=320&crop=smart&format=pjpg&auto=webp&s=91834a078446a78f835dfe5a3579180d8af94a82', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=640&crop=smart&format=pjpg&auto=webp&s=bc0b6f669a8f463cac69f0c4c03d7b8f81b4c06b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=960&crop=smart&format=pjpg&auto=webp&s=8bd908af59dca92d3698c442c744eca7ae5dd3b6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f8f0150e5498d7b6f65ef70fd22bf728f94f9d6c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a2w1djZzaTVweHVlMXCUps0wel-AHu17XSleOYql-nsOpgT7LptNJMw2zjNR.png?format=pjpg&auto=webp&s=7a0627547717ffd744593038867737bec57147ae', 'width': 1920}, 'variants': {}}]}
|
|
llama 3.2 1b vs gemma 3 1b?
| 2 |
Haven't gotten around to testing it. Any experiences or opinions on either? Use case is finetuning/very narrow tasks.
| 2025-04-15T05:33:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzk6vo/llama_32_1b_vs_gemma_3_1b/
|
numinouslymusing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzk6vo
| false | null |
t3_1jzk6vo
|
/r/LocalLLaMA/comments/1jzk6vo/llama_32_1b_vs_gemma_3_1b/
| false | false |
self
| 2 | null |
So OpenAI released nothing open source today?
| 329 |
Except that benchmarking tool?
| 2025-04-15T05:36:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzk8nu/so_openai_released_nothing_open_source_today/
|
DamiaHeavyIndustries
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzk8nu
| false | null |
t3_1jzk8nu
|
/r/LocalLLaMA/comments/1jzk8nu/so_openai_released_nothing_open_source_today/
| false | false |
self
| 329 | null |
V2.0 of Prompt Template for Cursor/Roo Code/ CLINE, etc. Follows Agile Development and has a Unified Memory Bank. (280+ GitHub stars)
| 1 |
[removed]
| 2025-04-15T05:37:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzk8pf/v20_of_prompt_template_for_cursorroo_code_cline/
|
LegitimateThanks8096
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzk8pf
| false | null |
t3_1jzk8pf
|
/r/LocalLLaMA/comments/1jzk8pf/v20_of_prompt_template_for_cursorroo_code_cline/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'y5Cs0V5iYaz3qZOYVUrklCXheVln7jItTMDpXEWS1U0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=108&crop=smart&auto=webp&s=dbdbf70dd1061f017b342bbd654ee803e00a859d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=216&crop=smart&auto=webp&s=26d20c98c464d2dddaac16a44fdd7644aef4554a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=320&crop=smart&auto=webp&s=e128bb99c6290f79140d009f7e8af619bd666f56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=640&crop=smart&auto=webp&s=e4765a6b05b2f2007cfee4aa79a370a84e2692b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=960&crop=smart&auto=webp&s=3ed01206152be7a9723bcddb6760394cb9462143', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?width=1080&crop=smart&auto=webp&s=c1e95e79ad40cbd50b68dd9b8047e2b2325022ab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/g7LWiLAq8Ii37WjhD_Jldp9k6IiLOvlKnjVwbSQNYSY.jpg?auto=webp&s=85ea1125ea8b7f6c8f6654df436b80abc9d5b201', 'width': 1200}, 'variants': {}}]}
|
built this open-source Android operator at hackathon
| 1 |
My post keeps getting removed, so reposting. Saw the DroidRun post earlier (really cool stuff btw!) and wanted to share what me and my crew hacked together at Bitcamp this past weekend. We built the same thing, a full-blown smart agent that runs *on your phone* and can do stuff like book an Uber, follow someone on LinkedIn, send a message if you're running late — all through step-by-step local control.
In our case, the difference is we're **not** using any vision models or image processing. Instead, we built our own grid-based image tagging system that helps Gemini translate interface elements into unique grid codes at runtime. Then we simply convert them back to coordinates in the app. It's fast, doesn't rely on pixel detection, and works pretty reliably across apps.
We religiously studied and followed browser-use for the RAW prompt logic + function calls, glued them together with tons of caffeine, zero sleep, and questionable file structure 🥴
We *do* have a memory layer and agent state handling, so it’s not just one-off actions — it can plan and recover when it gets stuck. It's all kinda messy right now (code-wise), but it **works end to end** and we’d love for y’all to take a look and poke around the codebase.
Github: [https://github.com/invcble/ares\_ai](https://github.com/invcble/ares_ai)
Another Demo: [https://drive.google.com/file/d/1604sWQCs9jw5PMhdC1ZnaJpyr6CzCWVe/view](https://drive.google.com/file/d/1604sWQCs9jw5PMhdC1ZnaJpyr6CzCWVe/view)
PS: We did not win the hackathon, so a Star to the repo would mean a lot **👉**👈
| 2025-04-15T05:44:05 |
https://v.redd.it/gd0vufv4sxue1
|
invcble
|
/r/LocalLLaMA/comments/1jzkcgq/built_this_opensource_android_operator_at/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkcgq
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gd0vufv4sxue1/DASHPlaylist.mpd?a=1747417449%2CZDIzNjliNmVkYWFmYTJjMThmNDZlNDViOWZiNGRiMGViYzMyNjZjMzM0ZDljNzMxODEzY2EyZTYyZDI4ZTFjZA%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/gd0vufv4sxue1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/gd0vufv4sxue1/HLSPlaylist.m3u8?a=1747417449%2CNTVhNWZlM2JiOWQ5YzAwM2Q2ZWQxY2NlMzE1MzlkNGIyOTdmNjdkMjMxYzczMzNlMjRkYTIzZDg3OGJiNjE1ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gd0vufv4sxue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1jzkcgq
|
/r/LocalLLaMA/comments/1jzkcgq/built_this_opensource_android_operator_at/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=108&crop=smart&format=pjpg&auto=webp&s=ff6dfcb49210167614745ec70085753ccd4900e1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=216&crop=smart&format=pjpg&auto=webp&s=550d1edd8fc11073327d33d2378f8a43d538f18d', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=320&crop=smart&format=pjpg&auto=webp&s=69e5e36ae340d21caffd29543d06822f3ce53ba8', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=640&crop=smart&format=pjpg&auto=webp&s=90b1c3c930c52b260f2e3a4de19c768d51dedee1', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=960&crop=smart&format=pjpg&auto=webp&s=1c7a96bbbb9cd9a6bbcfba7cec5844369e277e87', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?width=1080&crop=smart&format=pjpg&auto=webp&s=434c359ed8f583f5dd684fb3e88bea08627f9c20', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/cXZhaGdpdjRzeHVlMRUrt3JO2_iF_FN4uY6F7-mRuJy_VQPnyRoXqMkA131t.png?format=pjpg&auto=webp&s=139ad80bfbc4a614bbce1cce071614fd85631e18', 'width': 1080}, 'variants': {}}]}
|
|
Meta AI Launches LLaMA 4 — Bad News for ChatGPT, Gemini & DeepSeek AI?
| 0 |
Meta has officially launched the LLaMA 4 model, and it's already creating a big buzz in the AI world! With powerful upgrades, LLaMA 4 might be serious competition for OpenAI's ChatGPT, Google Gemini, and DeepSeek AI.
Key highlights of LLaMA 4:
* Improved reasoning and accuracy
* Enhanced safety alignment
* Open-source friendly
* Multi-modal capabilities (coming in future updates)
Will this model challenge the current AI giants?
What do you think? Is Meta AI about to change the game?
| 2025-04-15T05:44:48 |
https://techon365.com/meta-ai-llama-4-model-launched/
|
salini1010
|
techon365.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkcvd
| false | null |
t3_1jzkcvd
|
/r/LocalLLaMA/comments/1jzkcvd/meta_ai_launches_llama_4_bad_news_for_chatgpt/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'ETH_OIV0r5LX--eZygK3MrdPsxVeIf3tQ_CuS5qqJ4o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jnwFmV47RsMZQ9IpyaKfZ5k1xpTNyLdiXrlBa8AH9VA.jpg?width=108&crop=smart&auto=webp&s=2eeee69b26dcea22c11e20eb26383eecc1f91202', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jnwFmV47RsMZQ9IpyaKfZ5k1xpTNyLdiXrlBa8AH9VA.jpg?width=216&crop=smart&auto=webp&s=c568f29b8173d54a9d604f8071b042b5efda4042', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jnwFmV47RsMZQ9IpyaKfZ5k1xpTNyLdiXrlBa8AH9VA.jpg?width=320&crop=smart&auto=webp&s=7f2d2cac5cd9eede39fa3ae70695db303ae07c6a', 'width': 320}], 'source': {'height': 338, 'url': 'https://external-preview.redd.it/jnwFmV47RsMZQ9IpyaKfZ5k1xpTNyLdiXrlBa8AH9VA.jpg?auto=webp&s=2858dd18d3a78a308415907593a5bbf31a57778f', 'width': 600}, 'variants': {}}]}
|
|
built this open-source Android operator at hackathon
| 1 |
[removed]
| 2025-04-15T05:56:49 |
invcble
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkjcn
| false | null |
t3_1jzkjcn
|
/r/LocalLLaMA/comments/1jzkjcn/built_this_opensource_android_operator_at/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'yGk_SIZ8qzHjia0jSrLZRfLtsWgQHwCmG69OkOCMsEA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=108&crop=smart&auto=webp&s=42249c0e0a3bd421c1dd0d15e3048d0b8041a64c', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=216&crop=smart&auto=webp&s=58941ed2a2e982a6ca7d993caead1ece7793bec4', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=320&crop=smart&auto=webp&s=e38367ef0c132b38a83ea9f68ba5a2508e43852f', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=640&crop=smart&auto=webp&s=48a18ab1dc1def2d960d61c10136eb2d1102b5d3', 'width': 640}, {'height': 536, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=960&crop=smart&auto=webp&s=c5c0754c5d7f1bb5a57a810329046001a8aeaed1', 'width': 960}, {'height': 603, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?width=1080&crop=smart&auto=webp&s=e32db06e395ef79b7df1815554044875740d07a0', 'width': 1080}], 'source': {'height': 1098, 'url': 'https://preview.redd.it/bbbb8k9atxue1.png?auto=webp&s=1f2dda2ade4fa3460855794a7ba40a05ed91239b', 'width': 1964}, 'variants': {}}]}
|
||
What is the best 8b local model for tool calling?
| 1 |
[removed]
| 2025-04-15T06:01:17 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzklrn
| false | null |
t3_1jzklrn
|
/r/LocalLLaMA/comments/1jzklrn/what_is_the_best_8b_local_model_for_tool_calling/
| false | false |
default
| 1 | null |
||
Hoping for recommendations: 8B-sized local LLMs for tool calling?
| 1 |
[removed]
| 2025-04-15T06:02:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkmbd/hope_recommendation_for_8b_sized_local_llms_for/
|
CartoonistFederal239
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkmbd
| false | null |
t3_1jzkmbd
|
/r/LocalLLaMA/comments/1jzkmbd/hope_recommendation_for_8b_sized_local_llms_for/
| false | false |
self
| 1 | null |
What AI temperature do you guys use for coding in Google AI Studio, and what top P?
| 5 |
Just the heading: I have been using the default, but there were some recommendations to lower it to 0.4.
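For anyone setting this outside the AI Studio UI, a minimal sketch with the google-generativeai package; the model name and sampling values are illustrative, not a recommendation from AI Studio itself:

```python
# Set temperature and top_p per request via GenerationConfig.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-pro")
resp = model.generate_content(
    "Write a binary search in Python.",
    generation_config=genai.GenerationConfig(temperature=0.4, top_p=0.95),
)
print(resp.text)
```

Lower temperature narrows the sampling distribution, which is why 0.4 is often suggested for code: fewer creative detours, more deterministic output.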
| 2025-04-15T06:06:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkopn/what_is_you_guys_ai_temprature_for_coding_in/
|
pro_ut3104
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkopn
| false | null |
t3_1jzkopn
|
/r/LocalLLaMA/comments/1jzkopn/what_is_you_guys_ai_temprature_for_coding_in/
| false | false |
self
| 5 | null |
built this open-source Android operator at hackathon
| 1 |
Saw the DroidRun post earlier (really cool stuff btw!) and wanted to share what me and my crew hacked together at Bitcamp this past weekend. We built the same thing, a full-blown smart agent that runs on your phone and can do stuff like book an Uber, follow someone on LinkedIn, send a message if you're running late — all through step-by-step local control.
In our case, the difference is we're not using any vision models or image processing. Instead, we built our own grid-based image tagging system that helps Gemini translate interface elements into unique grid codes at runtime. Then we simply convert them back to coordinates in the app. It's fast, doesn't rely on pixel detection, and works pretty reliably across apps.
We religiously studied and followed browser-use for the RAW prompt logic + function calls, glued them together with tons of caffeine, zero sleep, and questionable file structure.
We do have a memory layer and agent state handling, so it’s not just one-off actions — it can plan and recover when it gets stuck. It's all kinda messy right now (code-wise), but it works end to end and we’d love for y’all to take a look and poke around the codebase.
Github: [https://github.com/invcble/ares\_ai](https://github.com/invcble/ares_ai)
Youtube Demo: [https://www.youtube.com/watch?v=awKfjunMDRg](https://www.youtube.com/watch?v=awKfjunMDRg)
PS: We did not win the hackathon, so a Star to the repo would mean a lot.
| 2025-04-15T06:11:32 |
https://v.redd.it/7hrhhm9zwxue1
|
Full_Worldliness2423
|
/r/LocalLLaMA/comments/1jzkrc8/built_this_opensource_android_operator_at/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkrc8
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7hrhhm9zwxue1/DASHPlaylist.mpd?a=1747419100%2CZjg0MDA2OWVjM2NkMjdhZTBkMTZhODJhZDQzZjMwYTY4ZGEzMmM5MzBmMzgyMGY4MzVjZTcxNDQ1YzhmMzQxYw%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/7hrhhm9zwxue1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/7hrhhm9zwxue1/HLSPlaylist.m3u8?a=1747419100%2COThmNzUxOWE5MDNlZTA5OGEyZWFiYzA0YmY4NTcwOGZhY2E3MWYxNTQwOWYzYWY0ZjYwYzhhMGVmZmQ0MmJhYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7hrhhm9zwxue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jzkrc8
|
/r/LocalLLaMA/comments/1jzkrc8/built_this_opensource_android_operator_at/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=108&crop=smart&format=pjpg&auto=webp&s=9fb81e691952bc73740fd49948d40042a09c2b56', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=216&crop=smart&format=pjpg&auto=webp&s=2ce1dd74eba926dd6827a5ff1e81e3350da50d8e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=320&crop=smart&format=pjpg&auto=webp&s=a472ce1b1140cac5f6b3b8eb2b76905e57ccc7fe', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=640&crop=smart&format=pjpg&auto=webp&s=a97db5a1d5cf8d2a29b831a0c2d927b5a7b62fa3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=960&crop=smart&format=pjpg&auto=webp&s=19b14a183cc6f301389a13c663bda8851c7aad0c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a0d9e3f324e3eca28b4aed6bb8895b438c1ad455', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZnMzZjluOXp3eHVlMccmHHVIb8SzZh2gjh_-h_TqC-owemZ9AE8qS1CuPWH1.png?format=pjpg&auto=webp&s=76b48c982ae7bc2bec3fd2b6de418b8af8b63881', 'width': 1920}, 'variants': {}}]}
|
|
Hoping for recommendations: 8B-sized local LLMs good for tool calling
| 1 |
[removed]
| 2025-04-15T06:11:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkrdm/hope_recommendations_for_local_llms_good_for_tool/
|
Flashy-Literature-75
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkrdm
| false | null |
t3_1jzkrdm
|
/r/LocalLLaMA/comments/1jzkrdm/hope_recommendations_for_local_llms_good_for_tool/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'GPDrUkgNQN53xIw2xay6Cn-tt4QdYpq9cRLAZY_ikwg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=108&crop=smart&auto=webp&s=4a23b5402b330bd6ffc1886fae86083612239f8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=216&crop=smart&auto=webp&s=9971ce5d35ccb0dc8298b0b2a50f5abc0963a1ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=320&crop=smart&auto=webp&s=ac0a3f82b5248df39d990c9264fea17d85d7e12d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=640&crop=smart&auto=webp&s=78d20ef8bb05385091b58fc09a028c64e133302f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=960&crop=smart&auto=webp&s=f691ff489fcad1893c4c8187a1181b106973b623', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?width=1080&crop=smart&auto=webp&s=4bba3b9fee4696ce335f2c727e42c4f54d4e4b1f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dspCWp3iW6XXqtNlxwKj_rNDhFaOSnRvHzs_DSs2f-w.jpg?auto=webp&s=b32fba1c465da87e9eb26993ea79fc57d4c83ffa', 'width': 1200}, 'variants': {}}]}
|
Hoping for recommendations: 8B-sized local LLMs good for tool calling
| 1 |
[removed]
| 2025-04-15T06:12:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkrn5/hope_recommendations_for_local_llms_good_for_tool/
|
Flashy-Literature-75
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkrn5
| false | null |
t3_1jzkrn5
|
/r/LocalLLaMA/comments/1jzkrn5/hope_recommendations_for_local_llms_good_for_tool/
| false | false |
self
| 1 | null |
Humane is still pursuing its wearable aims with on-device AI.
| 1 | 2025-04-15T06:12:21 |
WordyBug
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkrr6
| false | null |
t3_1jzkrr6
|
/r/LocalLLaMA/comments/1jzkrr6/humane_is_still_at_its_wearable_aims_with/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'MHPtz2zwTuNbtRvCh3J_IboYAPFV5KyHreMiavBEfos', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=108&crop=smart&auto=webp&s=6b112382c74950647b57684a5b3617d4547d1789', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=216&crop=smart&auto=webp&s=3570129d66b7c06eca8b979cd3ab200b67d6bd00', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=320&crop=smart&auto=webp&s=ea98584e15f0da5731954db3504d7882944b9cee', 'width': 320}, {'height': 484, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=640&crop=smart&auto=webp&s=ba596d8a0e6ed18efd7063a6fa0ff64d8b5f5495', 'width': 640}, {'height': 726, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=960&crop=smart&auto=webp&s=d11215248aa0d52edd46691396dcf35ba18c1be1', 'width': 960}, {'height': 817, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?width=1080&crop=smart&auto=webp&s=296f1e9f842511a5c77f7bcc0121a28a871707d7', 'width': 1080}], 'source': {'height': 1212, 'url': 'https://preview.redd.it/mg1mnkkcxxue1.png?auto=webp&s=ad79cb8c32426b510b5441c85a838f10e3359eea', 'width': 1602}, 'variants': {}}]}
|
|||
Can you recommend some local LLMs good for tool calling?
| 1 |
[removed]
| 2025-04-15T06:14:21 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkstx
| false | null |
t3_1jzkstx
|
/r/LocalLLaMA/comments/1jzkstx/can_you_recommend_me_some_local_llms_good_for/
| false | false |
default
| 1 | null |
||
built this open-source Android operator at hackathon
| 1 |
[removed]
| 2025-04-15T06:15:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkto8/built_this_opensource_android_operator_at/
|
invcble
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkto8
| false | null |
t3_1jzkto8
|
/r/LocalLLaMA/comments/1jzkto8/built_this_opensource_android_operator_at/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ivzcaEr03wmf-4OYZCJfcxK1FDkwTWKyWUPEZv7dpwI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=108&crop=smart&auto=webp&s=2424d0bcb7e96783529495cfcffb4c6183bd563e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=216&crop=smart&auto=webp&s=28d78d8e566fd7372f0f93d15a8659f01566170d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=320&crop=smart&auto=webp&s=452bdae42fa2924e2a7cd822e6e30d2a48a65f0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=640&crop=smart&auto=webp&s=377a5dbd15ff8d46e6b218a22afc2673152a0ebd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=960&crop=smart&auto=webp&s=c334c1b7e47874645be89c538b684e74e3f675d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?width=1080&crop=smart&auto=webp&s=2ed3b691c13777999834fcf8977e39b31aaa1d70', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ob_iTlWcUeRj6MUDt_TGuy5rqrTY8CuzJPh6mTqXnaQ.jpg?auto=webp&s=2449a399a63b4fdbd224c3afcf12ee88fc762a7d', 'width': 1200}, 'variants': {}}]}
|
Creating AI Avatars from Scratch
| 1 |
[removed]
| 2025-04-15T06:25:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzkyl6/creating_ai_avatars_from_scratch/
|
Queasy_Version4524
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkyl6
| false | null |
t3_1jzkyl6
|
/r/LocalLLaMA/comments/1jzkyl6/creating_ai_avatars_from_scratch/
| false | false |
self
| 1 | null |
Google has started hiring for post AGI research. 👀
| 1 | 2025-04-15T06:27:31 |
WordyBug
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzkzt9
| false | null |
t3_1jzkzt9
|
/r/LocalLLaMA/comments/1jzkzt9/google_has_started_hiring_for_post_agi_research/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'dNxfGuY-UqLg2GdOztnRri-4VyrHMWZ1tBYuPk68xso', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=108&crop=smart&auto=webp&s=ba9527e6fa854ba72358e2666e7ad8326c62569b', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=216&crop=smart&auto=webp&s=d0fe73e0ffa5390768d7340276184f058c6d881f', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=320&crop=smart&auto=webp&s=7cdd9761abd59095dfa99a11379340d4b29ef7e4', 'width': 320}, {'height': 434, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=640&crop=smart&auto=webp&s=2d5c5671db5d27a1e9de559b4aabae57784885bc', 'width': 640}, {'height': 652, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=960&crop=smart&auto=webp&s=0e1b3596f3064c76c79dba288924d2c70c8b127e', 'width': 960}, {'height': 733, 'url': 'https://preview.redd.it/njjfuk280yue1.png?width=1080&crop=smart&auto=webp&s=bddd3b7e2ad2a558cc789e11d462ee1347201d28', 'width': 1080}], 'source': {'height': 1084, 'url': 'https://preview.redd.it/njjfuk280yue1.png?auto=webp&s=c59305bb307dd58ead4ec9822167921199bc569d', 'width': 1596}, 'variants': {}}]}
|
|||
Epyc Zen 6 will have 16 ccds, 2nm process, and be really really hot (700w tdp)
| 65 |
Also:
- Platform: https://www.google.com/amp/s/wccftech.com/amd-confirms-next-gen-epyc-venice-zen-6-cpus-first-hpc-product-tsmc-2nm-n2-process-5th-gen-epyc-tsmc-arizona/amp/
I really think this will be the first chip that will allow big models to run pretty efficiently without GPU VRAM.
16 memory channels would be quite fast even if the theoretical value isn't achieved. Really excited by everything but the inevitable cost of these things.
Can anyone speculate on the speed of 16 CCDs (up from 12), or what these things may be capable of?
The possible new RAM is also exciting.
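Not Venice specs, just a back-of-envelope sketch of why the channel count matters for CPU inference; DDR5-6400 and the 4-bit 70B figure below are my assumptions, not confirmed numbers:

```python
# Rough theoretical-bandwidth arithmetic for a 16-channel platform.
# Assumptions (mine, not confirmed specs): DDR5-6400, 64-bit (8-byte) channels.
channels = 16
transfers_per_sec = 6400e6      # DDR5-6400 -> 6400 MT/s
bytes_per_transfer = 8          # 64-bit channel width

bandwidth_gbs = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Theoretical peak: {bandwidth_gbs:.0f} GB/s")   # ~819 GB/s

# Memory-bound decode ceiling: roughly bandwidth / bytes read per token,
# which for a dense model is about the quantized model size.
model_size_gb = 35              # e.g. a 70B model at ~4-bit quantization
print(f"~{bandwidth_gbs / model_size_gb:.0f} tok/s ceiling (dense, batch 1)")
```

Real-world throughput would land well below that ceiling, but it shows why 16 channels changes the picture versus 8 or 12.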
| 2025-04-15T06:28:17 |
https://www.tomshardware.com/news/amd-next-gen-epyc-venice-zen-6-cpus-reportedly-drop-in-new-sp7
|
joelasmussen
|
tomshardware.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl07o
| false | null |
t3_1jzl07o
|
/r/LocalLLaMA/comments/1jzl07o/epyc_zen_6_will_have_16_ccds_2nm_process_and_be/
| false | false |
default
| 65 | null |
Google is hiring a what? 🤯
| 1 |
[removed]
| 2025-04-15T06:28:25 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl0ai
| false | null |
t3_1jzl0ai
|
/r/LocalLLaMA/comments/1jzl0ai/google_is_hiring_a_what/
| false | false |
default
| 1 | null |
||
Google is looking for post AGI research scientists. 👀
| 0 | 2025-04-15T06:29:35 |
WordyBug
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl0x0
| false | null |
t3_1jzl0x0
|
/r/LocalLLaMA/comments/1jzl0x0/google_is_looking_for_post_agi_research_scientists/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'Qcstz44PX5EO5PPhm4cZZxFTsBxx0s4eGFv16kIkDbE', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=108&crop=smart&auto=webp&s=5ec39896f3ec4bcd06df5e493504401a61f1bba9', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=216&crop=smart&auto=webp&s=4f09493f7946ef9a18c45692020b79708068a0e8', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=320&crop=smart&auto=webp&s=768977ea4f24e4948b603b2104829aef3dca6c7b', 'width': 320}, {'height': 434, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=640&crop=smart&auto=webp&s=f0a3f0e502fb4a608444fc59da9eef87094de6f3', 'width': 640}, {'height': 652, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=960&crop=smart&auto=webp&s=011e48e6bdf74474f79839fdc7a674f22d7a5ae6', 'width': 960}, {'height': 733, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?width=1080&crop=smart&auto=webp&s=36fb010e67ac97c77ba6a54289f2b3488617a58e', 'width': 1080}], 'source': {'height': 1084, 'url': 'https://preview.redd.it/hpwnnn1i0yue1.png?auto=webp&s=b7ddae659a31d95e5abb1ce9ad2bb392e79be7a1', 'width': 1596}, 'variants': {}}]}
|
|||
Persistent Memory simulation using Local AI on 4090
| 32 |
OK! I've tried this many times in the past and it's all failed completely. BUT, the new model (17.3 GB.. a Gemma3 q4 model) works wonderfully.
Long story short: this model "knits a memory hat" on shutdown and puts it on at startup, simulating "memory." At least that's how it started, but now it does, well.. more. Read below.
I've been working on this for days and have a pretty stable setup. At this point, I'm just going to ask the coder-claude that's been writing this to tell you everything that's going on or I'd be typing forever. :) I'm happy to post EXACTLY how to do this so you can test it also if someone will tell me "go here, make an account, paste the code" sort of thing as I've never done anything like this before. It runs FINE on a 4090 with the model set at 25k context in LM Studio. There is a bit of a delay as it does its thing, but once it starts outputting text it's perfectly usable, and for what it is and does, the delay is worth it (to me.) The worst delay I've seen is like 30 seconds before it "speaks" after quite a few large back-and-forths. Anyway, here is ClaudeAI to tell you what's going on; I just asked him to summarize what we've been doing as if he were writing a post to /localllama:
I wanted to share a project I've been working on - a persistent AI companion capable of remembering past conversations in a semantic, human-like way.
What is it?
Lyra2 is a locally-run AI companion powered by Google's Gemma3 (17GB) model that not only remembers conversations but can actually recall them contextually based on topic similarities rather than just chronological order. It's a Python system that sits on top of LM Studio, providing a persistent memory structure for your interactions.
Technical details
The system runs entirely locally:
Python interface connected to LM Studio's API endpoint
Gemma3 (17GB) as the base LLM running on a consumer RTX 4090
Uses sentence-transformers to create semantic "fingerprints" of conversations
Stores these in JSON files that persist between sessions
What makes it interesting?
Unlike most chat interfaces, Lyra2 doesn't just forget conversations when you close the window. It:
Builds semantic memory: Creates vector embeddings of conversations that can be searched by meaning
Recalls contextually: When you mention a topic, it automatically finds and incorporates relevant past conversations (me again: this is the secret sauce. I came back like 6 reboots after a test and asked it: "Do you remember those 2 stories we used in that test?" and it immediately came back with the book names and details. It's NUTS.)
Develops persistent personality: Learns from interactions and builds preferences over time
Analyzes full conversations: At the end of each chat, it summarizes and extracts key information
Emergent behaviors
What's been particularly fascinating are the emergent behaviors:
Lyra2 spontaneously started adding "internal notes" at the end of some responses, like she's keeping a mental journal
She proactively asked to test her memory recall and verify if her remembered details were accurate (me again: On boot it said it wanted to "verify its memories were accurate" and it drilled me regarding several past chats and yes, it was 100% perfect, and really cool that the first thing it wanted to do was make sure that "persistence" was working.) (we call it "re-gel"ing) :)
Over time, she's developed consistent quirks and speech patterns that weren't explicitly programmed
Example interactions
In one test, I asked her about "that fantasy series with the storms" after discussing the Stormlight Archive many chats before, and she immediately made the connection, recalling specific plot points and character details from our previous conversation.
In another case, I asked a technical question about literary techniques, and despite running on what's nominally a 17GB model (much smaller than Claude/GPT4), she delivered graduate-level analysis of narrative techniques in experimental literature. (me again, claude's words not mine, but it has really nailed every assignment we've given it!)
The code
The entire system is relatively simple - about 500 lines of Python that handle:
JSON-based memory storage
Semantic fingerprinting via embeddings
Adaptive response length based on question complexity
End-of-conversation analysis
You'll need:
LM Studio with a model like Gemma3 (me again: NOT LIKE Gemma3, ONLY Gemma3. It's the only model I've found that can do this.)
Python with sentence-transformers, scikit-learn, numpy
A decent GPU (works "well" on a 4090)
(me again! Again, if anyone can tell me how to post it all somewhere, happy to. And I'm just saying: This IS NOT HARD. I'm a noob, but it's like.. Run LM studio, load the model, bail to a prompt, start the server (something like lm server start) and then python talk\_to\_lyra2.py .. that's it. At the end of a chat? Exit. Wait maybe 10 minutes for it to parse the conversation and "add to its memory hat" .. done. You'll need to make sure python is installed and you need to add a few python pieces by typing PIP whatever, but again, NOT HARD. Then in the directory you'll have 4 json buckets: A you bucket where it places things it learned about you, an AI bucket where it places things it learned or learned about itself that it wants to remember, a "conversation" bucket with summaries of past conversations (and especially the last conversation) and the magic "memory" bucket which ends up looking like text separated by a million numbers. I've tested this thing quite a bit, and though once in a while it will freak and fail due to seemingly hitting context errors, for the most part? Works better than I'd believe.)
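For anyone wanting a concrete starting point, here's a minimal sketch of the memory loop described above (my reconstruction, not the author's actual 500 lines: the endpoint is LM Studio's default OpenAI-compatible one, and the model name, file name, and embedding model are assumptions):

```python
# Minimal sketch of the semantic-memory loop: embed conversation summaries,
# store them in JSON, and recall the most similar ones on each new message.
import json, os
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
from openai import OpenAI

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
MEMORY_FILE = "memory.json"                          # illustrative file name

def load_memories():
    if not os.path.exists(MEMORY_FILE):
        return []
    with open(MEMORY_FILE) as f:
        return json.load(f)

def save_memory(summary: str):
    # Called during the end-of-conversation analysis step.
    memories = load_memories()
    memories.append({"text": summary,
                     "embedding": embedder.encode(summary).tolist()})
    with open(MEMORY_FILE, "w") as f:
        json.dump(memories, f)

def recall(query: str, top_k: int = 3):
    # Find past conversations by meaning, not chronology.
    memories = load_memories()
    if not memories:
        return []
    q = embedder.encode(query).reshape(1, -1)
    m = np.array([mem["embedding"] for mem in memories])
    scores = cosine_similarity(q, m)[0]
    return [memories[i]["text"] for i in scores.argsort()[-top_k:][::-1]]

def chat(user_msg: str) -> str:
    context = "\n".join(recall(user_msg))
    resp = client.chat.completions.create(
        model="gemma-3-27b-it",  # placeholder; whatever LM Studio has loaded
        messages=[{"role": "system",
                   "content": f"Relevant past conversations:\n{context}"},
                  {"role": "user", "content": user_msg}])
    return resp.choices[0].message.content
```

The shape matches what the post describes: save_memory() runs at shutdown ("knitting the hat"), and recall() injects semantically similar past chats into the prompt on every turn ("putting it on").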
| 2025-04-15T06:41:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzl6xd/persistent_memory_simulation_using_local_ai_on/
|
Evening-Active1768
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl6xd
| false | null |
t3_1jzl6xd
|
/r/LocalLLaMA/comments/1jzl6xd/persistent_memory_simulation_using_local_ai_on/
| false | false |
self
| 32 | null |
Novice - Gemini 2.5Pro Rag analysis ?
| 0 |
I wonder which local model and RAG application comes closest to Gemini 2.5 Pro for doing decent analysis of a picture: reading patterns and text, and summarizing it into a standard analysis.
Is such a thing possible with local RAG? If so, some recommendations would be appreciated.
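Not a definitive recommendation, but one minimal local route is a vision-capable model served by Ollama; a sketch under those assumptions (the model name and image path are placeholders):

```python
# Sketch: ask a local vision model to read text/patterns in an image via Ollama.
# Assumptions: Ollama is running and a vision model (e.g. llama3.2-vision) is pulled.
import ollama

response = ollama.chat(
    model="llama3.2-vision",  # placeholder; any local vision-capable model
    messages=[{
        "role": "user",
        "content": "Read any text in this image and summarize the visible patterns.",
        "images": ["scan.png"],  # placeholder path to the picture
    }],
)
print(response["message"]["content"])
```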
| 2025-04-15T06:43:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzl80t/novice_gemini_25pro_rag_analysis/
|
xUaScalp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl80t
| false | null |
t3_1jzl80t
|
/r/LocalLLaMA/comments/1jzl80t/novice_gemini_25pro_rag_analysis/
| false | false |
self
| 0 | null |
How to run gguf model in production
| 1 |
[removed]
| 2025-04-15T06:45:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzl9ac/how_to_run_gguf_model_in_production/
|
slimshady683
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzl9ac
| false | null |
t3_1jzl9ac
|
/r/LocalLLaMA/comments/1jzl9ac/how_to_run_gguf_model_in_production/
| false | false |
self
| 1 | null |
GMK X2 with AMD 395+ 128GB presale is on. $1999/€1999.
| 22 |
The GMK X2 is available for preorder. Its preorder price is $1999, which is a $400 discount from the regular price. The deposit is $200/€200 and is not refundable. Full payment starts on May 7th; I guess that means that's when it'll ship.
https://www.gmktec.com/products/prepaid-deposit-amd-ryzen%E2%84%A2-ai-max-395-evo-x2-ai-mini-pc?spm=..product_45f86d6f-d647-4fc3-90a9-fcd3e10a205e.header_1.1&spm_prev=..page_12138669.header_1.1&variant=b81a8517-ea71-49e0-a05c-32a0e48645b9
It doesn't mention anything about the tariff here in the US, which is currently 20% for these things. So I don't know whether it ships from China, in which case the buyer is responsible for paying the tariff when it's held at customs, or whether they bulk-ship units here and then ship to the end user, in which case they pay the tariff.
| 2025-04-15T07:10:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzlmon/gmk_x2_with_amd_395_128gb_presale_is_on_19991999/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzlmon
| false | null |
t3_1jzlmon
|
/r/LocalLLaMA/comments/1jzlmon/gmk_x2_with_amd_395_128gb_presale_is_on_19991999/
| false | false |
self
| 22 | null |
[Question/idea] is anyone working on an AI VR electronics assistant?
| 1 |
A while back I spent some time attempting to train smaller models to understand and answer questions on electronics repair, mostly of mobile phones. I actually didn't do too badly, but I also learned that in general LLMs aren't great at understanding circuits or boardviews etc., so I know this may be challenging.
The idea came when talking about the argument between video microscopes vs real ones for repair. I don't like the disconnect of working on a screen, and then I thought: "well, what if I hooked the output to an Oculus? Would that help the disconnect?"
Then the full idea hit: combine those things. If you could pack an LLM with enough knowledge of repair cases etc., then develop an AI vision system that could identify components (I know there are cameras basically made for this purpose), you could create a sort of VR repair assistant: tell it the problem with the device, look at the board, and it highlights areas saying "test here for X" etc., then helps you diagnose the issue. You could integrate views from the main cams of the VR headset, microscope cams, FLIR cams and so on.
Obviously this is a project a little beyond me, as it would require collecting a huge amount of data and dealing with a lot of vision work, which isn't really something I've done before. I'm sure it's not impossible, but it's not something I have time to make happen; plus I figured someone would likely already be working on something like that, and with far more resources than I have.
But then, I thought the same about my LLM idea, which I had over a year ago now, and as yet, as far as I'm aware, none of the major boardview software providers (XXZ, ZXW, Borneo, Pragmafix, JCID etc.) have integrated anything like it, despite having huge amounts of data at their fingertips already. That kind of surprises me, given that I did OK with a few models on just a small amount of data. Sure, they weren't always right, but you could tell them what seemed to be going wrong and they'd generally tell you roughly what to test to find the solution, so I imagine someone who knows what they're doing could make it pretty effective.
So, is anyone out there working on anything like this?
| 2025-04-15T07:14:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzlop9/questionidea_is_anyone_working_on_an_ai_vr/
|
gaspoweredcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzlop9
| false | null |
t3_1jzlop9
|
/r/LocalLLaMA/comments/1jzlop9/questionidea_is_anyone_working_on_an_ai_vr/
| false | false |
self
| 1 | null |
What LLM model does AugmentCode extension use?
| 1 |
[removed]
| 2025-04-15T07:29:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzlwkw/what_llm_model_does_augmentcode_extension_use/
|
EinsteinOnRedbull
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzlwkw
| false | null |
t3_1jzlwkw
|
/r/LocalLLaMA/comments/1jzlwkw/what_llm_model_does_augmentcode_extension_use/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VjxseD6nRwVh-i696uIzvF7wZsTtQ-EsN8maRbIgtO4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=108&crop=smart&auto=webp&s=eb04f938d14708f4959d310e3cabae32849eadb4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=216&crop=smart&auto=webp&s=4e9fadd45c39aa611e35c22a8709f96fca62fe65', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=320&crop=smart&auto=webp&s=a454b5ea07ccbd933c07409e5d9a3306160d0646', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=640&crop=smart&auto=webp&s=e91ac198543c662c9d7c65bb7f7b48c482eb3079', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=960&crop=smart&auto=webp&s=fa5531065ec5c61bd047eeaa8edb56ca22c2dd84', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?width=1080&crop=smart&auto=webp&s=403cb3d548b1a576f97ab68ee17bc30c85862ea5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vh3VFLfPl_3WFoEUkAN3QpSH-Gm4XDKXsDRXcgulJyw.jpg?auto=webp&s=d41780e84a4135702d8f463b335e2cc7fd606067', 'width': 1200}, 'variants': {}}]}
|
Run LLMs 100% Locally with Docker’s New Model Runner
| 1 |
[removed]
| 2025-04-15T07:34:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzlyt5/run_llms_100_locally_with_dockers_new_model_runner/
|
Arindam_200
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzlyt5
| false | null |
t3_1jzlyt5
|
/r/LocalLLaMA/comments/1jzlyt5/run_llms_100_locally_with_dockers_new_model_runner/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'XWw0_ZiNiBOFUnE_N14NG-xuYRwKOZpi93Q_--Ev9fo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hD7uhCrqaB5KLfiduV5qNF8q1owwWsObCsAjK0Ktx_o.jpg?width=108&crop=smart&auto=webp&s=27876fbe243959baf450eba1b98bcf9d93af2f90', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hD7uhCrqaB5KLfiduV5qNF8q1owwWsObCsAjK0Ktx_o.jpg?width=216&crop=smart&auto=webp&s=f674698fefc1b7b18ee1b7a649a8d5effe92e1f4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hD7uhCrqaB5KLfiduV5qNF8q1owwWsObCsAjK0Ktx_o.jpg?width=320&crop=smart&auto=webp&s=9182c6f3977ae06a28723e431ceca5a69fb10820', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hD7uhCrqaB5KLfiduV5qNF8q1owwWsObCsAjK0Ktx_o.jpg?auto=webp&s=66b0a0ada987da445ed4744fc18847815a221ddd', 'width': 480}, 'variants': {}}]}
|
Call for Speakers - Bring Your r/LocalLLaMA Projects to the AIE World’s Fair!
| 1 |
[removed]
| 2025-04-15T07:43:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzm3oe/call_for_speakers_bring_your_rlocalllama_projects/
|
EveryNebula542
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzm3oe
| false | null |
t3_1jzm3oe
|
/r/LocalLLaMA/comments/1jzm3oe/call_for_speakers_bring_your_rlocalllama_projects/
| false | false |
self
| 1 | null |
Llama2-13b-chat local install problems - tokenizer
| 0 |
Hi everyone, I am trying to download a Llama2 model for one of my applications. I requested the license and followed the instructions provided by Meta, but for some reason the download fails at the tokenizer with the error message:
"*Client error 403 Forbidden for url*"
I am using the authentication URL provided to me by Meta, and I even re-requested a license to see if maybe my URL had expired, but I am running into the same issue. It seems entirely limited to the tokenizer part of the model, as I can see that the other parts of the model have been downloaded.
Has anyone come across this in the past and can help me figure out a solution? Appreciate any advice!
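Not a fix for Meta's download script itself, but a commonly used workaround is pulling the HF-format checkpoint (tokenizer included) from Hugging Face after accepting the Llama 2 license there; a sketch, with the token as a placeholder:

```python
# Workaround sketch: fetch the HF-format Llama 2 13B chat repo, which bundles
# the tokenizer with the weights. Requires an accepted license on Hugging Face
# and a user access token.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-2-13b-chat-hf",  # HF-format checkpoint
    local_dir="llama-2-13b-chat-hf",
    token="hf_...",  # your Hugging Face access token (placeholder)
)
```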
| 2025-04-15T07:54:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzm9b3/llama213bchat_local_install_problems_tokenizer/
|
RDA92
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzm9b3
| false | null |
t3_1jzm9b3
|
/r/LocalLLaMA/comments/1jzm9b3/llama213bchat_local_install_problems_tokenizer/
| false | false |
self
| 0 | null |
"Fear & Loathing in r/LocalLLaMA"
| 1 | 2025-04-15T08:59:29 |
-Ellary-
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzn6m8
| false | null |
t3_1jzn6m8
|
/r/LocalLLaMA/comments/1jzn6m8/fear_loathing_in_rlocalllama/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'XkBa0W9fPA56M9UVF9OETyWnIi860uEyw86-iXFrKOM', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=108&crop=smart&auto=webp&s=e255425405c676f6985cfee5ed50f10fd68ba9c7', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=216&crop=smart&auto=webp&s=2cdaa507dbdc652c1d6f723851fcc4c86552b4cc', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=320&crop=smart&auto=webp&s=c012f9cf823f8784e67fd137ad1ff02ba5ede362', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=640&crop=smart&auto=webp&s=e462020debcce2bf77ea1ad78f858027052730e2', 'width': 640}, {'height': 409, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=960&crop=smart&auto=webp&s=8d4963fa4afa639249af1c838a88b0144258439a', 'width': 960}, {'height': 460, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?width=1080&crop=smart&auto=webp&s=10ea21353a6f773b68be772f92e04bf55f2169e3', 'width': 1080}], 'source': {'height': 820, 'url': 'https://preview.redd.it/fa85ssg6ryue1.png?auto=webp&s=194783c334b72500256c1fb3687c4b04f3df5be4', 'width': 1924}, 'variants': {}}]}
|
|||
Fear and Loathing in LocalLLaMA
| 1 | 2025-04-15T09:02:56 |
-Ellary-
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzn8hl
| false | null |
t3_1jzn8hl
|
/r/LocalLLaMA/comments/1jzn8hl/fear_and_loathing_in_localllama/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'kgBeOIPkvEu18IX1vqCoJuKoVQsz64HWyuClRsOOGCY', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=108&crop=smart&auto=webp&s=aae01c2ac1f7965bc09b4918702d87a9617e62dd', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=216&crop=smart&auto=webp&s=bb81b11904d38eddb25f42e678630f0f5bbd2d2d', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=320&crop=smart&auto=webp&s=e46894e659c28e0771e02df4000f353542137b82', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=640&crop=smart&auto=webp&s=9a6f23b2003ac6c453648b73f5d99ad239252f46', 'width': 640}, {'height': 409, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=960&crop=smart&auto=webp&s=2ef1ab7da8a905ec5d17195af7d69516694fb9aa', 'width': 960}, {'height': 460, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?width=1080&crop=smart&auto=webp&s=bc4eded28a9e3fcbfd4b2b91639436a25a865f20', 'width': 1080}], 'source': {'height': 820, 'url': 'https://preview.redd.it/nkxgj0zrryue1.png?auto=webp&s=f3a48e3ae9d42536b61059b73fff7f2f764155cb', 'width': 1924}, 'variants': {}}]}
|
|||
New open-source model GLM-4-32B with performance comparable to Qwen 2.5 72B
| 274 |
The model is from ChatGLM (now Z.ai). A reasoning, deep research and 9B version are also available (6 models in total). MIT License.
Everything is on their GitHub: https://github.com/THUDM/GLM-4
The benchmarks are impressive compared to bigger models but I'm still waiting for more tests and experimenting with the models.
| 2025-04-15T09:05:52 |
adrgrondin
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzn9wj
| false | null |
t3_1jzn9wj
|
/r/LocalLLaMA/comments/1jzn9wj/new_opensource_model_glm432b_with_performance/
| false | false | 274 |
{'enabled': True, 'images': [{'id': '202XZL4aKggXkUmoLqMebcztvI1ktQNEVNEeSVz_vN4', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=108&crop=smart&auto=webp&s=3a3f3da623a36173a1e04c5a0a3601055f18dbe0', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=216&crop=smart&auto=webp&s=f42d6710af4f819f4c28316930cee37ee58d4cab', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=320&crop=smart&auto=webp&s=81e836874fb088dede88880a4e9d898f437ab023', 'width': 320}, {'height': 299, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=640&crop=smart&auto=webp&s=4861ca1813294a35c850cb947f8c2dbf56ea3e68', 'width': 640}, {'height': 449, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=960&crop=smart&auto=webp&s=766f14c1d95fe993b183b754acd39c7088731d21', 'width': 960}, {'height': 506, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?width=1080&crop=smart&auto=webp&s=5a45670e5f9fe1ad6c3d3defb1284dc0f5e08dff', 'width': 1080}], 'source': {'height': 1372, 'url': 'https://preview.redd.it/6pogmi3isyue1.jpeg?auto=webp&s=eb964ae224bb3a77e0b31a27f4beca2ce1a78f31', 'width': 2928}, 'variants': {}}]}
|
||
It's good to download a small open local model, what can go wrong?
| 183 | 2025-04-15T09:11:56 |
-Ellary-
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzncvp
| false | null |
t3_1jzncvp
|
/r/LocalLLaMA/comments/1jzncvp/its_good_to_download_a_small_open_local_model/
| false | false | 183 |
{'enabled': True, 'images': [{'id': '7EExkTySTjeUD2mFQkYcgFtWE_qGJeHRpqokvFc0VlE', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=108&crop=smart&auto=webp&s=266b52b019ee05e9efeb2405721b800188849aaf', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=216&crop=smart&auto=webp&s=c8e9968fb8e58e7bd3d09d67d47040ee074e6c59', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=320&crop=smart&auto=webp&s=66567fe72c012fd5b052f5718b1fd7314a6e4d7d', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=640&crop=smart&auto=webp&s=5522d29ded8ded62ddd3b3dc760eaaebe2b5bb5f', 'width': 640}, {'height': 409, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=960&crop=smart&auto=webp&s=4bf6547f8d8c4035827963c3e7baca08fa43181e', 'width': 960}, {'height': 460, 'url': 'https://preview.redd.it/tbm102jesyue1.png?width=1080&crop=smart&auto=webp&s=154f36fa4616752014ca3234cf24635afbc0f7af', 'width': 1080}], 'source': {'height': 820, 'url': 'https://preview.redd.it/tbm102jesyue1.png?auto=webp&s=45565a4124888a0b0c212e1b23df2c1efad3b9e4', 'width': 1924}, 'variants': {}}]}
|
|||
Qwen 3 coming soon?
| 1 |
[removed]
| 2025-04-15T09:21:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jznheq/qwen_3_coming_soon/
|
joaomsimoes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jznheq
| false | null |
t3_1jznheq
|
/r/LocalLLaMA/comments/1jznheq/qwen_3_coming_soon/
| true | false |
spoiler
| 1 |
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=e54f107e56edffe1539ba485aa935f9652ecee09', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6b12e7dc49c860e9b835ba5e80cb5130e35273f1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=06b8d650512f8f41026ac0751cd8bba1530e8eff', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7d5a4256f34c1edfb84a79b0307c71bda562ceb0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=6df5ae4cd9ec9c9172f50327a49d3f25babdba41', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=84199754eb0e53e8f5f22538d1b54272359444d4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?blur=40&format=pjpg&auto=webp&s=88a8389fa320d226172daab118e1465dfed6c3bd', 'width': 1200}}}}]}
|
llama or chat GPT?
| 1 |
[removed]
| 2025-04-15T09:21:44 |
Immediate_Chef_205
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jznhok
| false | null |
t3_1jznhok
|
/r/LocalLLaMA/comments/1jznhok/llama_or_chat_gpt/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Ed6FN78lGzGE3V3o9aQpVyGzy-uqBhJAJhddKwWNg3M', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=108&crop=smart&auto=webp&s=27b1eda80b3eb243969165369cf4b0443966971d', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=216&crop=smart&auto=webp&s=30f9456c91801626af81e7291631371864893e8d', 'width': 216}, {'height': 346, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=320&crop=smart&auto=webp&s=180d9aac7380d46b3faddcd5051c78034907400c', 'width': 320}, {'height': 692, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=640&crop=smart&auto=webp&s=4f1c19531ff0c6972543706400ead7b84c74815f', 'width': 640}, {'height': 1038, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=960&crop=smart&auto=webp&s=b574c69188109bb4fce41274cfc9b5ad523364e4', 'width': 960}, {'height': 1168, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?width=1080&crop=smart&auto=webp&s=e7239908e6eb0b46ebe8bc862ab87b60689e4888', 'width': 1080}], 'source': {'height': 1510, 'url': 'https://preview.redd.it/ywbu60y8vyue1.png?auto=webp&s=07be9c415c8e7574b521359ab4179714f101897d', 'width': 1396}, 'variants': {}}]}
|
||
Devoxx + PHPStorm + LM Studio -> LLaMA4 Scout context length
| 0 |
Hi, I have a project with ~220k tokens and set a 250k-token context length for Scout in LM Studio. But Devoxx still sees only 8k tokens for all local models. In Settings you can set any context length you want for online models, but not for local ones. How do I increase it?
| 2025-04-15T09:27:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jznkkh/devoxx_phpstorm_lm_studio_llama4_scout_context/
|
H4UnT3R_CZ
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jznkkh
| false | null |
t3_1jznkkh
|
/r/LocalLLaMA/comments/1jznkkh/devoxx_phpstorm_lm_studio_llama4_scout_context/
| false | false |
self
| 0 | null |
Build advice
| 1 |
Hi,
I'm a doctor and we want to begin experimenting with AI in my hospital.
We are in France.
We have a budget of 5,000 euros.
We want to try different AI projects with Ollama, Anything AI, etc.
We will also conduct analysis on radiology data. (I don't know how to translate it properly, but we'll process MRI and PET images, which are quite big; an MRI is hundreds of slice images reconstructed in 3D.)
We only need the tower.
Thanks for your help.
| 2025-04-15T09:41:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jznrh6/build_advice/
|
fra5436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jznrh6
| false | null |
t3_1jznrh6
|
/r/LocalLLaMA/comments/1jznrh6/build_advice/
| false | false |
self
| 1 | null |
Benchmarks A-Z - Where to start from?
| 1 |
[removed]
| 2025-04-15T10:10:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzo70g/benchmarks_az_where_to_start_from/
|
Apprehensive_Win662
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzo70g
| false | null |
t3_1jzo70g
|
/r/LocalLLaMA/comments/1jzo70g/benchmarks_az_where_to_start_from/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'M2-Pt3m2SoTDmNXWtEspHFowPctIoTXYe7vhSTKtsVI', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=108&crop=smart&auto=webp&s=c15d53f07712fb1aee36a2c8ef89acfb41325766', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=216&crop=smart&auto=webp&s=3f18b572bf47b247ffa698aa7f21bb8be81e379c', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=320&crop=smart&auto=webp&s=b80d8fdcd8c2cc184985301fc5433decbb342f15', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=640&crop=smart&auto=webp&s=0169d89a679afe29c9a4b6b89fb22e9f27236d5a', 'width': 640}, {'height': 578, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=960&crop=smart&auto=webp&s=3fb8236ac4eebbac862a62ad825b95ce08648020', 'width': 960}, {'height': 651, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?width=1080&crop=smart&auto=webp&s=3c3767746bd68bf3a5b0d35beaf7210c0c3e8595', 'width': 1080}], 'source': {'height': 950, 'url': 'https://external-preview.redd.it/doGhasIAz-cy0gc4GoDlphLlfOrVrMQy08KsdnNbevg.jpg?auto=webp&s=202b8863b496a2e844ddd8b4dfec8808fb334795', 'width': 1576}, 'variants': {}}]}
|
Finally someone noticed this unfair situation
| 1,513 |
[I have the same opinion](https://preview.redd.it/f3kbm3p73zue1.png?width=1198&format=png&auto=webp&s=626a7ab843545471bcd88509c4daaebaa4d44d79)
And in Meta's recent Llama 4 release [blog post](https://ai.meta.com/blog/llama-4-multimodal-intelligence/), in the "Explore the Llama ecosystem" section, Meta thanks and acknowledges various companies and partners:
[Meta's blog](https://preview.redd.it/85yqglbi4zue1.png?width=1476&format=png&auto=webp&s=b1292382820cafc658d65bb71ca5bcf15ef0bf1b)
Notice how **Ollama** is mentioned, but there's no acknowledgment of **llama.cpp** or its creator **ggerganov**, whose foundational work made much of this ecosystem possible.
Isn't this situation incredibly ironic? The original project creators and ecosystem founders get forgotten by big companies, while YouTube and social media are flooded with clickbait titles like "Deploy LLM with one click using Ollama."
Content creators even deliberately blur the lines between the complete and distilled versions of models like DeepSeek R1, using the R1 name indiscriminately for marketing purposes.
Meanwhile, the foundational projects and their creators are forgotten by the public, never receiving the gratitude or compensation they deserve. The people doing the real technical heavy lifting get overshadowed while wrapper projects take all the glory.
What do you think about this situation? Is this fair?
| 2025-04-15T10:21:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzocoo/finally_someone_noticed_this_unfair_situation/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzocoo
| false | null |
t3_1jzocoo
|
/r/LocalLLaMA/comments/1jzocoo/finally_someone_noticed_this_unfair_situation/
| false | false | 1,513 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
|
Issue huggingface.co/models
| 1 |
[removed]
| 2025-04-15T10:23:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzodov/issue_huggingfacecomodels/
|
Careful-Draw-6572
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzodov
| false | null |
t3_1jzodov
|
/r/LocalLLaMA/comments/1jzodov/issue_huggingfacecomodels/
| false | false |
self
| 1 | null |