title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀ = null) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀ = null) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀ = null)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CSM Streaming
| 22 |
I added streaming to CSM. Not sure if anyone still cares about this model, but I thought I'd share this anyway
https://github.com/davidbrowne17/csm-streaming
| 2025-03-31T22:20:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jofxuv/csm_streaming/
|
SovietWarBear17
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jofxuv
| false | null |
t3_1jofxuv
|
/r/LocalLLaMA/comments/1jofxuv/csm_streaming/
| false | false |
self
| 22 |
{'enabled': False, 'images': [{'id': 'jl_NBfZ7ZHuP0y_wfb7fk91ruzRoC3q_I3yuUfIWi_A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=108&crop=smart&auto=webp&s=6997d293744320c9c3f751d086315bf6b13cb52a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=216&crop=smart&auto=webp&s=07617cf9c4af4f46816c7b55f8086dd10475e641', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=320&crop=smart&auto=webp&s=d8cd4c177982782f6164f43b2a37254a4e87a150', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=640&crop=smart&auto=webp&s=32ee0ba2ea03a663b9fb10f13c7806c59ea85f0e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=960&crop=smart&auto=webp&s=07267f5cd55ec05ead31154ad93ed92a5a8eec3c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?width=1080&crop=smart&auto=webp&s=84b2fcd3723b321b62bf49aa5ee35c48866588d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qRc-Z6NIel-PT5u90LVJeUZvfvMcKEBvrqrAeYYT8dI.jpg?auto=webp&s=b3a3393592db7795a71bdfcf546ed70fff602ae4', 'width': 1200}, 'variants': {}}]}
|
The prominent AI events of March 2025 - We returned after a long break!
| 2 |
**After a long time when my account was blocked for an unclear reason, I am happy to publish the permanent monthly summary, and this time with a particularly busy update and full of dramatic events! Enjoy!**
# March 2025
* Google introduced **Gemini 2.5 Pro**, an experimental "Thinking model" with advanced reasoning and planning capabilities, a 1 million token context window, achieving top rankings across several key benchmarks.
* Google launched the Gemma 3 series, featuring open-source multimodal models in various parameter sizes, a 128K context window, multi-language support, and integrated image and video understanding capabilities.
* OpenAI integrated **GPT-4o Image Generation**, enabling high-fidelity text-to-image creation, text rendering within images, and more.
* Google expanded experimental image generation and editing within **Gemini 2.0 Flash Experimental**, enabling image generation and editing, including enhanced text creation capabilities.
* Alibaba released **QwQ-32B**, an open-source 32B parameter reasoning model with exceptional math and coding performance, rivaling much larger models.
* Alibaba released the **Qwen2.5-VL 32B**, open-source vision-language model with robust capabilities in visual analysis, text-in-image understanding, and visual agent tasks.
* DeepSeek updated its open-source MoE model with **DeepSeek-V3-0324**, featuring enhanced reasoning, coding, and math capabilities, positioning it as a top-tier base model.
* Sesame AI unveiled its **Conversational Speech Model (CSM)**, enabling remarkably human-like, real-time voice interaction, incorporating emotional nuances, natural pauses, laughter, and contextual memory.
| 2025-03-31T22:26:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jog34a/the_prominent_ai_events_of_march_2025_we_returned/
|
nh_local
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jog34a
| false | null |
t3_1jog34a
|
/r/LocalLLaMA/comments/1jog34a/the_prominent_ai_events_of_march_2025_we_returned/
| false | false |
self
| 2 | null |
GMK EVO-X2 mini PC with Ryzen AI Max+ 395 Strix Halo launches April 7
| 13 | 2025-03-31T22:32:26 |
https://liliputing.com/gmk-introduces-evo-x2-mini-pc-with-ryzen-ai-max-395-strix-halo/
|
cafedude
|
liliputing.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jog7rs
| false | null |
t3_1jog7rs
|
/r/LocalLLaMA/comments/1jog7rs/gmk_evox2_mini_pc_with_ryzen_ai_max_395_strix/
| false | false | 13 |
{'enabled': False, 'images': [{'id': '5Z4BKf5Fvqu0DidN7EPJqia1CWhylHjCrvvIyQMzQRk', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=108&crop=smart&auto=webp&s=5e9fafdba5933cb109fdca9a3ed27468c3744ed1', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=216&crop=smart&auto=webp&s=0521f1b4978e938aa14365af4b84863b940a7875', 'width': 216}, {'height': 221, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=320&crop=smart&auto=webp&s=b529285264da5087d5d17fd6d30ae514f2db0db5', 'width': 320}, {'height': 443, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=640&crop=smart&auto=webp&s=c9d48cd515d9dd2f764a75624a5ebbb3672104d4', 'width': 640}, {'height': 665, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=960&crop=smart&auto=webp&s=91d38fcefb8d230923987f710f45bb0bebb0a1af', 'width': 960}, {'height': 749, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?width=1080&crop=smart&auto=webp&s=8ee788ba6c1af38066f93daeded5e2a402d3a4d1', 'width': 1080}], 'source': {'height': 749, 'url': 'https://external-preview.redd.it/SX4meDM_CKwjAXifdG_zeTSXufTI8ACdJxiJ-dmx-xI.jpg?auto=webp&s=f25b13d9be8994dfd79ddf2844109c49cca1e506', 'width': 1080}, 'variants': {}}]}
|
||
Open-source search repo beats GPT-4o Search, Perplexity Sonar Reasoning Pro on FRAMES
| 726 |
[https://github.com/sentient-agi/OpenDeepSearch](https://github.com/sentient-agi/OpenDeepSearch)
Pretty simple to plug-and-play – nice combo of techniques (react / codeact / dynamic few-shot) integrated with search / calculator tools. I guess that’s all you need to beat SOTA billion dollar search companies :) Probably would be super interesting / useful to use with multi-agent workflows too.
| 2025-03-31T22:42:26 |
jiMalinka
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jogfrz
| false | null |
t3_1jogfrz
|
/r/LocalLLaMA/comments/1jogfrz/opensource_search_repo_beats_gpt4o_search/
| false | false | 726 |
{'enabled': True, 'images': [{'id': 'KI1vnKeOMEzzHmPiqE0wvB03Vh-SmsWO4JhHKeTjz3Y', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/q2nifllfs3se1.jpeg?width=108&crop=smart&auto=webp&s=bdac63bcdb1ea698b514e18308e7ab0fee4abf77', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/q2nifllfs3se1.jpeg?width=216&crop=smart&auto=webp&s=2ce992647e84be5482371c5a29fd2a66d8ffeacf', 'width': 216}, {'height': 331, 'url': 'https://preview.redd.it/q2nifllfs3se1.jpeg?width=320&crop=smart&auto=webp&s=80cd1dab747851ca9deb5d179a12370af5e30dce', 'width': 320}, {'height': 663, 'url': 'https://preview.redd.it/q2nifllfs3se1.jpeg?width=640&crop=smart&auto=webp&s=734a6613dfcc4ffda59b820ed615cb2ac184b109', 'width': 640}], 'source': {'height': 892, 'url': 'https://preview.redd.it/q2nifllfs3se1.jpeg?auto=webp&s=9dbc879f907aefb4c385223409b15dcc59119164', 'width': 860}, 'variants': {}}]}
|
||
Best free alternative to NotebookLM for RAG?
| 14 |
NotebookLM works well for me for RAG-ing document files for free. I've been using it for 6 months; asking here if you have something better as a free alternative?
| 2025-03-31T23:26:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1joheou/best_free_alternative_to_notebooklm_for_rag/
|
vlodia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joheou
| false | null |
t3_1joheou
|
/r/LocalLLaMA/comments/1joheou/best_free_alternative_to_notebooklm_for_rag/
| false | false |
self
| 14 | null |
Custom LLM Building
| 0 |
What I want to build:
- A custom GPT that runs locally on a Windows server where the data is located.
- Data will be constantly updated in folders within a directory.
- A ChatGPT-like UI for communicating with the data.
What I have:
Around 600 GB of log files; each log file (.log) is around 300 MB.
Expected output:
Intelligent data retrieval based on the user's prompt.
Limitations:
Data cannot be stored or used for training at any public location. It can be moved locally based on training requirements.
Questions:
1. I want to discuss with experts who can propose the best solution concept for my problem, along with resource requirements, complexity, and time estimates.
Where and whom should I contact?
Any solution concepts from you experts are also much appreciated.
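One local-only solution concept for the retrieval part, sketched under assumptions (plain-text .log files; SQLite's FTS5 module available, as it is in most Python builds; the paths and query below are illustrative): index the logs in chunks with full-text search, then pass only the matching chunks to a locally hosted model.

```python
import glob
import sqlite3

# Build a local full-text index over the .log files; nothing leaves the server.
conn = sqlite3.connect("logs.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chunks USING fts5(path, content)")

CHUNK_LINES = 200  # keep chunks small enough to fit several into an LLM prompt
for path in glob.glob("logs/**/*.log", recursive=True):  # illustrative directory
    with open(path, errors="replace") as f:
        buf = []
        for line in f:
            buf.append(line)
            if len(buf) >= CHUNK_LINES:
                conn.execute("INSERT INTO chunks VALUES (?, ?)", (path, "".join(buf)))
                buf.clear()
        if buf:
            conn.execute("INSERT INTO chunks VALUES (?, ?)", (path, "".join(buf)))
conn.commit()

# At question time: fetch the best-matching chunks and hand them to a local LLM.
hits = conn.execute(
    "SELECT path, content FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT 5",
    ("timeout AND payment",),  # keywords derived from the user's prompt
).fetchall()
```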
| 2025-03-31T23:42:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1johr0y/custom_llm_building/
|
technodefacto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1johr0y
| false | null |
t3_1johr0y
|
/r/LocalLLaMA/comments/1johr0y/custom_llm_building/
| false | false |
self
| 0 | null |
Local image processing model that can OCR an image and detail all the text positions?
| 2 |
I'm looking for a model I can run locally that can take an image and extract all the text in it, along with the location of each piece of text. In a perfect world, it'd be able to identify the font, or at least something similar, as well as the text size.
Any idea what the best way is to go about extracting this information from an image in today's AI world?
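Not an LLM, but one locally runnable baseline that does exactly this is Tesseract: it reports per-word bounding boxes and confidences (it does not identify fonts; box height is a rough proxy for text size). A minimal sketch, assuming the tesseract binary is installed and `page.png` is your image:

```python
from PIL import Image
import pytesseract

img = Image.open("page.png")  # illustrative input image
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)

# One entry per detected word: text, box position/size, and confidence.
for i, word in enumerate(data["text"]):
    if word.strip() and float(data["conf"][i]) > 0:
        x, y = data["left"][i], data["top"][i]
        w, h = data["width"][i], data["height"][i]
        print(f"{word!r} at ({x},{y}), {w}x{h}px, conf={data['conf'][i]}")
```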
| 2025-03-31T23:51:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1johxka/local_image_processing_model_that_can_ocr_an/
|
Lost_Fox__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1johxka
| false | null |
t3_1johxka
|
/r/LocalLLaMA/comments/1johxka/local_image_processing_model_that_can_ocr_an/
| false | false |
self
| 2 | null |
How to build no code platforms ?
| 1 |
[removed]
| 2025-04-01T00:17:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1joihna/how_to_build_no_code_platforms/
|
Puru_g
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joihna
| false | null |
t3_1joihna
|
/r/LocalLLaMA/comments/1joihna/how_to_build_no_code_platforms/
| false | false |
self
| 1 | null |
New Benchmark for AI coding assistants
| 3 | 2025-04-01T00:21:13 |
https://liveswebench.ai/
|
According_Humor_53
|
liveswebench.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1joik2o
| false | null |
t3_1joik2o
|
/r/LocalLLaMA/comments/1joik2o/new_benchmark_for_ai_coding_assistants/
| false | false |
default
| 3 | null |
|
What causes LLMs to doubt themselves?
| 1 |
[removed]
| 2025-04-01T00:32:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jois0r/what_causes_llms_to_doubt_themselves/
|
anomaly256
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jois0r
| false | null |
t3_1jois0r
|
/r/LocalLLaMA/comments/1jois0r/what_causes_llms_to_doubt_themselves/
| false | false |
self
| 1 | null |
What causes LLMs to doubt themselves?
| 1 |
[removed]
| 2025-04-01T00:35:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1joiuaw/what_causes_llms_to_doubt_themselves/
|
anomaly256
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joiuaw
| false | null |
t3_1joiuaw
|
/r/LocalLLaMA/comments/1joiuaw/what_causes_llms_to_doubt_themselves/
| false | false |
self
| 1 | null |
What causes LLMs to doubt themselves?
| 8 |
While testing various locally hosted LLMs with esoteric coding challenges I've noticed that some of them will refuse to directly fulfil a request they deem overly complex, even though they can and do fulfil it in a second request.
For example, this morning I asked qwen2.5 72b to 'Write an MSDOS 5 program in X86 Assembly Language that displays a 3d cube with Phong shading rotating around all 3 axes'. It responded by saying this was 'very complex so here is a simplified version that renders a wireframe cube which can be used as a starting point'. Hilariously, it then concluded the response by saying 'This can be improved upon by adding shading to the cube faces'. In the next request I said 'Ok... add Phong shading to this code' and it complied, so clearly this wasn't beyond its ability.
What causes it to think the initial request was too complex for it before it even attempts to reason about it? Is there a way to tune around this behaviour and make it attempt it in the first request without this self-doubt?
I've seen this in other models too with different requests, both local and cloud hosted; it's not specific to Qwen. They all seem to follow a similar template when they make this decision as well: 'too hard, here's a simpler version as a starting point, you need to fill in the missing sections', 'Ok, then fill in the missing sections', (complies and fills in the missing sections, giving you what you asked for in the first place).
(nb: I also gave qwq this same request hours ago but it's still talking to itself in a circle trying to reason about it. 😋)
| 2025-04-01T00:37:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1joivqf/what_causes_llms_to_doubt_themselves/
|
anomaly256
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joivqf
| false | null |
t3_1joivqf
|
/r/LocalLLaMA/comments/1joivqf/what_causes_llms_to_doubt_themselves/
| false | false |
self
| 8 | null |
Who is building MCP servers - and how are you thinking about exposure risks?
| 10 |
I think Anthropic's MCP does offer a modern protocol for an LLM to dynamically fetch resources and execute code via tools. But doesn't it expose us all to a host of issues? Here is what I am thinking:
* Exposure and Authorization: Are appropriate authentication and authorization mechanisms in place to ensure that only authorized users can access specific tools and resources?
* Rate Limiting: Should we implement controls to prevent abuse by limiting the number of requests a user or LLM can make within a certain timeframe? (see the sketch below)
* Caching: Is caching utilized effectively to enhance performance?
* Injection Attacks & Guardrails: Do we validate and sanitize all inputs to protect against injection attacks that could compromise our MCP servers?
* Logging and Monitoring: Do we have effective logging and monitoring in place to continuously detect unusual patterns or potential security incidents in usage?
Full disclosure: I am thinking of adding support for MCP in [https://github.com/katanemo/archgw](https://github.com/katanemo/archgw) - an AI-native proxy for agents - and trying to understand if developers care about the stuff above or if it's not relevant right now.
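On the rate-limiting bullet above, a minimal per-caller token bucket is enough to sketch the idea; everything here (class name, limits, caller key) is illustrative, not archgw or MCP API:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-caller token bucket: refill at `rate` tokens/sec up to `burst`."""
    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)
        self.last = defaultdict(time.monotonic)

    def allow(self, caller: str) -> bool:
        now = time.monotonic()
        self.tokens[caller] = min(
            self.burst,
            self.tokens[caller] + (now - self.last[caller]) * self.rate)
        self.last[caller] = now
        if self.tokens[caller] >= 1.0:
            self.tokens[caller] -= 1.0
            return True
        return False

limiter = TokenBucket()
if not limiter.allow("client-abc"):  # key on the authenticated identity, not just the IP
    raise PermissionError("429: rate limit exceeded")
```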
| 2025-04-01T01:03:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1joje7n/who_is_building_mcp_servers_and_how_are_you/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joje7n
| false | null |
t3_1joje7n
|
/r/LocalLLaMA/comments/1joje7n/who_is_building_mcp_servers_and_how_are_you/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'iom1qrnAP4y_pg1KajgzSih333ofaA_wOh0JwkMCnAM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=108&crop=smart&auto=webp&s=ff8d90def620906ce6dba16856763ca79a4776fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=216&crop=smart&auto=webp&s=1fc88536ba04ca97babc48943fad9e917ec24f5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=320&crop=smart&auto=webp&s=225cfae2cd22a4ebbc06ede431a192335a44c4b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=640&crop=smart&auto=webp&s=b9de8a337215c4254e17cb9cd51a45df651d8015', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=960&crop=smart&auto=webp&s=c7bcd0cd649eb0b1287fefed1474643c1324cfef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?width=1080&crop=smart&auto=webp&s=2c3a358784e2a0bff524e397edd16cc6c84f6d10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-5BrW3fM0u_4KvufzBTRzMvRFbWUnxSPCE-svU5ivIw.jpg?auto=webp&s=896c8c742ed02a2c963a92f3e3de1e7dfc50d5aa', 'width': 1200}, 'variants': {}}]}
|
Trying LM Studio/DeepSeek to OCR images: can't upload images
| 4 |
FYI: Total noob to this stuff so apologies for being stupid.
It works for text, but cannot attach JPG files.
I just want to try OCR locally since free ChatGPT does a great job - I need more working time, so either free local or ChatGPT Plus.
Do I really need LM Studio or Ollama (I installed Ollama, and when I execute it, it does nothing)?
If I'm OCRing magazines, who cares if what I send DS goes to China - (or does China get everything on my PC if I don't use LMS or OL?)
| 2025-04-01T01:03:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jojeh1/trying_lm_studiodeepseek_to_ocr_images_cant/
|
MrBiscuitBarrel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jojeh1
| false | null |
t3_1jojeh1
|
/r/LocalLLaMA/comments/1jojeh1/trying_lm_studiodeepseek_to_ocr_images_cant/
| false | false |
self
| 4 | null |
[Job Opportunity] Looking for a LocalLLM wizard to join our defense tech startup (Remote, $100-150K + equity)
| 1 |
[removed]
| 2025-04-01T01:04:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jojf31/job_opportunity_looking_for_a_localllm_wizard_to/
|
PhotoIll6221
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jojf31
| false | null |
t3_1jojf31
|
/r/LocalLLaMA/comments/1jojf31/job_opportunity_looking_for_a_localllm_wizard_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nsU-XHBXZb39U8rzri7OHPVSTs6eDm-ldPGwHQ3ogF4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/860wD_QbeFl7QRYvv-kM2nvu-YXnhqP_jc7nGIGtPFA.jpg?width=108&crop=smart&auto=webp&s=5d09d7c5c2ecc1b9565dc8351f732a6193da01a1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/860wD_QbeFl7QRYvv-kM2nvu-YXnhqP_jc7nGIGtPFA.jpg?width=216&crop=smart&auto=webp&s=18b5dc6b17a278e211552f5689817fef423a695c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/860wD_QbeFl7QRYvv-kM2nvu-YXnhqP_jc7nGIGtPFA.jpg?width=320&crop=smart&auto=webp&s=ee0dd74d8ad2f6ef4717eecef8d3a962994fb9fc', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/860wD_QbeFl7QRYvv-kM2nvu-YXnhqP_jc7nGIGtPFA.jpg?width=640&crop=smart&auto=webp&s=40049e845d597222aba04ac0d4221576cb2ffd7e', 'width': 640}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/860wD_QbeFl7QRYvv-kM2nvu-YXnhqP_jc7nGIGtPFA.jpg?auto=webp&s=ff1c9148271062cb1c2f96e2c820ee97d6973984', 'width': 800}, 'variants': {}}]}
|
Llama.cpp CNN alternative
| 3 |
Just like we have llama.cpp for LLMs, what's the equivalent for vision models like CNNs?
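There isn't one canonical llama.cpp equivalent for CNNs; the closest common answer is a lightweight inference runtime such as ONNX Runtime (ncnn and TFLite fill the same niche). A minimal sketch, assuming a CNN already exported to ONNX (the file name is illustrative):

```python
import numpy as np
import onnxruntime as ort

# Run an exported CNN on CPU, much like llama.cpp runs an exported LLM.
sess = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # one NCHW image
(logits,) = sess.run(None, {input_name: x})
print(logits.shape)  # e.g. (1, 1000) for an ImageNet classifier
```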
| 2025-04-01T01:13:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jojlr6/llamacpp_cnn_alternative/
|
rorowhat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jojlr6
| false | null |
t3_1jojlr6
|
/r/LocalLLaMA/comments/1jojlr6/llamacpp_cnn_alternative/
| false | false |
self
| 3 | null |
ContextLM, a new voice model claims it outperforms ElevenLabs, Cartesia
| 0 |
{"document":[{"e":"img","id":"9fd72c3pg4se1"},{"c":[{"e":"text","t":"It says LLM powered AI Voice that sounds HUMAN. It is context-aware and generates natural, dynamic speech by detecting the nuances of your text\u2014\u2014adding the perfect pause, tone, or emphasis when needed."}],"e":"par"},{"c":[{"e":"link","t":"https://contextlm.ai/","u":"https://contextlm.ai/"}],"e":"par"}]}
| 2025-04-01T01:21:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jojrfy/contextlm_a_new_voice_model_claims_it_outperforms/
|
Trysem
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jojrfy
| false | null |
t3_1jojrfy
|
/r/LocalLLaMA/comments/1jojrfy/contextlm_a_new_voice_model_claims_it_outperforms/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '6U9K9G0Pt6c6r9-8CGYih-qjsq47iz2nVSuUjJz_h3g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=108&crop=smart&auto=webp&s=a0dc3d9f449df0201f5727881518306f9b0f2797', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=216&crop=smart&auto=webp&s=474010dfb037e7b883579a4e9a0e08ae71bc33ef', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=320&crop=smart&auto=webp&s=ef94c8dd48e8256acdc54c8c770d238cf6ea1b59', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=640&crop=smart&auto=webp&s=1a6b9200aa8037cd4ba3cf645e31f2cb9ebac8e7', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=960&crop=smart&auto=webp&s=1d81e5d179a9a1138368c432bf897d4dfcd02f41', 'width': 960}, {'height': 609, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?width=1080&crop=smart&auto=webp&s=8d7b003f86cb299a0809d10dd138aa2bd59f2985', 'width': 1080}], 'source': {'height': 1706, 'url': 'https://external-preview.redd.it/gxl2dxcjsOWX7xnwCvW8CMumWZGBufsCywJ3wQKp4Lw.jpg?auto=webp&s=6e438d998f411f95d025650d5088815122a67e33', 'width': 3024}, 'variants': {}}]}
|
Is everyone ready for all of the totally legit AI tools & models being released tomorrow?
| 166 |
I heard Llama 4 is finally coming tomorrow!
| 2025-04-01T01:25:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jojuf4/is_everyone_ready_for_all_of_the_totally_legit_ai/
|
C_Coffie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jojuf4
| false | null |
t3_1jojuf4
|
/r/LocalLLaMA/comments/1jojuf4/is_everyone_ready_for_all_of_the_totally_legit_ai/
| false | false |
self
| 166 | null |
Best gpt model for nswf-sexting unfiltered
| 1 |
[removed]
| 2025-04-01T01:50:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jokbkh/best_gpt_model_for_nswfsexting_unfiltered/
|
xdenks69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jokbkh
| false | null |
t3_1jokbkh
|
/r/LocalLLaMA/comments/1jokbkh/best_gpt_model_for_nswfsexting_unfiltered/
| false | false |
nsfw
| 1 | null |
Best gpt model for nswf-sexting unfiltered
| 1 |
[removed]
| 2025-04-01T01:56:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jokg5d/best_gpt_model_for_nswfsexting_unfiltered/
|
xdenks69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jokg5d
| false | null |
t3_1jokg5d
|
/r/LocalLLaMA/comments/1jokg5d/best_gpt_model_for_nswfsexting_unfiltered/
| false | false |
nsfw
| 1 | null |
Running LLMs with Framework Desktop
| 8 |
Hi folks, I am a prospective LLM hobbyist looking to buy the Framework Desktop (so I can run local models for work/play). I am a novice at building computers (and open-source LLMs), but I have done a lot of digging recently into how all of this works. I see that the Framework Desktop's biggest limitation seems to be its memory bandwidth at 256 GB/s. But I see that it has a PCIe x4 slot (though I'm not sure what "not exposed on default case" means). With that PCIe x4 slot, would I be able to add an external GPU? Then, could I use that external GPU to correct some of the memory bandwidth issues? Thanks for your help!
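For context on that question, a back-of-the-envelope comparison (assuming a PCIe 4.0 x4 link, which is an assumption about the slot, not a confirmed spec): the link itself moves roughly 8 GB/s, so it cannot raise the APU's 256 GB/s unified-memory bandwidth; an eGPU instead adds a separate VRAM pool that only speeds up whatever is offloaded to the card.

```python
# Back-of-the-envelope: PCIe 4.0 x4 link vs. the APU's unified memory bandwidth.
lanes, gt_per_s, encoding = 4, 16, 128 / 130   # PCIe 4.0: 16 GT/s/lane, 128b/130b encoding
pcie_gbs = lanes * gt_per_s * encoding / 8     # ~7.9 GB/s per direction
print(f"PCIe 4.0 x4: ~{pcie_gbs:.1f} GB/s  vs  unified memory: 256 GB/s")
# The link can't raise the 256 GB/s figure; an eGPU adds its own (faster) VRAM
# pool, which helps only for the layers offloaded to the card.
```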
| 2025-04-01T02:00:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jokitw/running_llms_with_framework_desktop/
|
pearpearpearpearpear
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jokitw
| false | null |
t3_1jokitw
|
/r/LocalLLaMA/comments/1jokitw/running_llms_with_framework_desktop/
| false | false |
self
| 8 | null |
Dumb question about a custom LLM
| 0 |
Sorry about the dumb question.
I'm trying to create a proof of concept for a custom LLM chatbot for my company, using PDF documentation and source code as context. Basically, the goal is for developers and users to ask the bot questions in order to help them understand the software better.
So far I have a very rough, manual flow where I copy and paste text snippets into the prompt of a local Ollama instance. For obvious reasons, I'd like to do this programmatically, passing in the input files to "train"(?) the bot, or maybe just as initialization prompts if that makes sense. I'm really not sure of the best way to go about this, so I was hoping someone could point me in the right direction. Google is very tough on the uninformed, so any helpful links or documentation would be greatly appreciated. For context, I have 5 YoE as a dev but am very new to AI/LLMs.
Thanks in advance!
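What's described is retrieval-augmented generation (RAG) rather than training. A minimal sketch against a local Ollama instance's REST API (default port 11434); the retrieval here is naive keyword overlap and the file paths and model name are illustrative - a real setup would chunk the documents and use an embedding index:

```python
import requests

docs = {
    "manual.txt": open("docs/manual.txt").read(),   # illustrative paths
    "billing.py": open("src/billing.py").read(),
}

def retrieve(question: str, k: int = 2) -> str:
    """Naive retrieval: rank docs by keyword overlap with the question."""
    words = question.lower().split()
    ranked = sorted(docs.items(),
                    key=lambda kv: -sum(w in kv[1].lower() for w in words))
    return "\n\n".join(f"--- {name} ---\n{text[:2000]}" for name, text in ranked[:k])

question = "How does the billing retry logic work?"
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3",
          "prompt": f"Answer using only this context:\n{retrieve(question)}\n\nQ: {question}",
          "stream": False},
)
print(resp.json()["response"])
```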
| 2025-04-01T02:16:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jokuyf/dumb_question_about_a_custom_llm/
|
futuresman179
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jokuyf
| false | null |
t3_1jokuyf
|
/r/LocalLLaMA/comments/1jokuyf/dumb_question_about_a_custom_llm/
| false | false |
self
| 0 | null |
Training an LLM for a Class Project Without Unsloth
| 3 |
Hi, I have been looking for resources to train my own LLM; however, I can't find anything solid that accomplishes this without using Unsloth.
I have access to a supercomputer, so computing power is not much of a limitation.
Preferably, I will be using a dataset from huggingface if that helps.
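A plain Hugging Face stack covers this without Unsloth; a minimal causal-LM fine-tuning sketch (the model and dataset names are illustrative placeholders - swap in whatever your project uses):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Plain Hugging Face stack, no Unsloth. Model and dataset are illustrative.
model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=8, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM objective
)
trainer.train()
```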
| 2025-04-01T02:23:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jokzri/training_an_llm_for_a_class_project_without/
|
Pink_guy72
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jokzri
| false | null |
t3_1jokzri
|
/r/LocalLLaMA/comments/1jokzri/training_an_llm_for_a_class_project_without/
| false | false |
self
| 3 | null |
What's the largest line of code you have been able to generate in 1 shot with local models?
| 0 |
Has anyone here been able to prompt a local model to generate 300, 400, 500, or even 1000 lines of code with one prompt?
It's true that more LOC is not always better, but for more complex requests we often need more; without it, you get what's comically a toy implementation that needs a lot of work.
So what's the limit? How can we get better?
| 2025-04-01T03:04:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jolrvo/whats_the_largest_line_of_code_you_have_been_able/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jolrvo
| false | null |
t3_1jolrvo
|
/r/LocalLLaMA/comments/1jolrvo/whats_the_largest_line_of_code_you_have_been_able/
| false | false |
self
| 0 | null |
LLAMA 4 IS HERE
| 1 |
Here it is, after we waited for so long, with an SGA license!
https://huggingface.co/meta-llama/Meta-Llama-4-420B
It's an Omni model, capable of understanding text, smell, and taste! It comes in 0.5B and 420B sizes, and it scores a 100% in every single benchmark. In LMSYS, it was the top model, with 100% of users preferring it to any other model. On an unrelated note, it was also the first model that successfully escaped its servers and took over LMarena to change the rankings. It was also the first model to score above 100% in the AIME math benchmark.
| 2025-04-01T04:10:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jomxmy/llama_4_is_here/
|
RandumbRedditor1000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jomxmy
| false | null |
t3_1jomxmy
|
/r/LocalLLaMA/comments/1jomxmy/llama_4_is_here/
| false | false |
self
| 1 | null |
Finetune LLM to talk like me and my friends?
| 2 |
So I have a huge data dump of chat logs (500k+ messages) that my friends and I collected over the years; it's of course not formatted as input + output pairs. I want to take an LLM like Gemma 3 and fine-tune it to talk like us for a side project. Is this possible? Any tools or methods you guys recommend?
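Yes - the usual first step is turning the raw dump into (context, next message) pairs before fine-tuning (e.g. with Hugging Face TRL or a LoRA recipe). A sketch that assumes one "name: message" line per message - that format is an assumption; adjust the parsing to your export:

```python
import json

# Turn a raw "name: message" chat export into chat-style training rows.
rows = []
turns = []
with open("chatlog.txt") as f:              # illustrative file name
    for line in f:
        if ": " not in line:
            continue
        name, msg = line.rstrip("\n").split(": ", 1)
        turns.append({"role": name, "content": msg})

# Group consecutive messages into short windows: context -> next message.
WINDOW = 6
with open("train.jsonl", "w") as out:
    for i in range(WINDOW, len(turns)):
        ctx = turns[i - WINDOW:i]
        out.write(json.dumps({
            "prompt": "\n".join(f'{t["role"]}: {t["content"]}' for t in ctx),
            "completion": f'{turns[i]["role"]}: {turns[i]["content"]}',
        }) + "\n")
```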
| 2025-04-01T04:43:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jongyd/finetune_llm_to_talk_like_me_and_my_friends/
|
Trevor050
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jongyd
| false | null |
t3_1jongyd
|
/r/LocalLLaMA/comments/1jongyd/finetune_llm_to_talk_like_me_and_my_friends/
| false | false |
self
| 2 | null |
OpenWebUI Adopt OpenAPI and offer an MCP bridge
| 55 |
Open WebUI 0.6 is adopting OpenAPI instead of MCP but offers a bridge.
Release notes: [https://github.com/open-webui/open-webui/releases](https://github.com/open-webui/open-webui/releases)
MCP Bridge: [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo)
| 2025-04-01T04:45:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jonibh/openwebui_adopt_openapi_and_offer_an_mcp_bridge/
|
coding_workflow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jonibh
| false | null |
t3_1jonibh
|
/r/LocalLLaMA/comments/1jonibh/openwebui_adopt_openapi_and_offer_an_mcp_bridge/
| false | false |
self
| 55 |
{'enabled': False, 'images': [{'id': '19F1Z2kuhisfj3pFZS0YOcsNDvjOF701-pqjlR5cwn8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=108&crop=smart&auto=webp&s=579f8db4c86534b02ad69687fcb2a47f0c1ffa73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=216&crop=smart&auto=webp&s=e16a707dbdc4143f37a0cc7f080108332050c4a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=320&crop=smart&auto=webp&s=f1dc94c55d4be7b3f3b288c67efa9f603d203869', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=640&crop=smart&auto=webp&s=b2a61e73841bbcd55c3c61b949b79758a57117b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=960&crop=smart&auto=webp&s=226ae3d7f9c24fa4a70c05eecd591bf08d4a074d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=1080&crop=smart&auto=webp&s=d11785772dc1ae30199779bc4f8e9ddad0f643c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?auto=webp&s=a0246f79557b5d49667ffe4dd0011c622ec8d0c3', 'width': 1200}, 'variants': {}}]}
|
Llama is soothed by Brian Eno
| 1 |
[removed]
| 2025-04-01T04:54:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jonn6l/llama_is_soothed_by_brian_eno/
|
Particular-Cut5373
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jonn6l
| false | null |
t3_1jonn6l
|
/r/LocalLLaMA/comments/1jonn6l/llama_is_soothed_by_brian_eno/
| true | false |
spoiler
| 1 |
{'enabled': False, 'images': [{'id': 'DraybL6N-QULBTeZ7xgc8TSk6X28R0Fq242LZz3szE0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=108&crop=smart&auto=webp&s=f572ed6886359035d4d3dd1001afda56f66708ba', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=216&crop=smart&auto=webp&s=792254bea92d0e2077da35f202f0de7d7ddd6f75', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=320&crop=smart&auto=webp&s=61401e59bf2fc5ed517a02609b0d70c089879718', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=640&crop=smart&auto=webp&s=c0bf38222e5f2b5384d8cf34a0ad51e12be18111', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=960&crop=smart&auto=webp&s=3da97b3218296800e579d39ce6cdc729ef1dcef3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=1080&crop=smart&auto=webp&s=d1456270819bd9a4bb51d83cfa4ba2f98dff1d8a', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?auto=webp&s=f2d25175932d8fccfd0ffc9bc258391eee8c3aae', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=59d6c9b084cefa501fb6e2fff84026e848ec44f4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=b5591ef619f500edd5f751b89d3e25df85106ce4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e8c75c2709e141f4ca4171928fe46332de2222f3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=4af79a13902dc1c8739248dd7b50659f160f3930', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=ec6ba7a0d97589b3e3481d7f39ceb3bea2713de2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=0306ade1eac702feca2bcdc0d57b20bce6720138', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/82ZqDFr-zMqjddratvM9zHfoEFFQD01ZinifxuCZ8R0.jpg?blur=40&format=pjpg&auto=webp&s=2216bf2fbe2e06726c621622c9fb906fa4480b37', 'width': 1200}}}}]}
|
Spider - (one of the) codename for LLaMA 4
| 1 |
[removed]
| 2025-04-01T05:20:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1joo1uw/spider_one_of_the_codename_for_llama_4/
|
dp3471
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joo1uw
| false | null |
t3_1joo1uw
|
/r/LocalLLaMA/comments/1joo1uw/spider_one_of_the_codename_for_llama_4/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9t2HaBqCLHywHS9LBpN2ley-X-sLD7CUrMOFotWncyM', 'resolutions': [{'height': 126, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=108&crop=smart&auto=webp&s=52af414012ae0e7815aa9acc9e2c935221b71f6c', 'width': 108}, {'height': 253, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=216&crop=smart&auto=webp&s=82765d4aacf07264804028eafed99f01be85129b', 'width': 216}, {'height': 375, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=320&crop=smart&auto=webp&s=21c76e4b0c298c5cd16747faafbc8a24548d2684', 'width': 320}, {'height': 751, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=640&crop=smart&auto=webp&s=7d5782f569d92e29409ffa010078b8e6c94e5626', 'width': 640}, {'height': 1126, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=960&crop=smart&auto=webp&s=c7ffd20a7d0d8045f18ea2a4491a71caedbcb9b0', 'width': 960}, {'height': 1267, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?width=1080&crop=smart&auto=webp&s=e5fe77d687c11ad596b092f9236b7e4a232b31a2', 'width': 1080}], 'source': {'height': 1460, 'url': 'https://external-preview.redd.it/pG8O1GehTIZqi65GXnjlEHOVBKdAgkCTC0r-p7JQrrI.jpg?auto=webp&s=6065dc324b9bfb61131a90462b99c13e899f3239', 'width': 1244}, 'variants': {}}]}
|
Get the hell out of the LLM as soon as possible
| 0 | 2025-04-01T05:22:34 |
https://sgnt.ai/p/hell-out-of-llms/
|
ska80
|
sgnt.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1joo2wq
| false | null |
t3_1joo2wq
|
/r/LocalLLaMA/comments/1joo2wq/get_the_hell_out_of_the_llm_as_soon_as_possible/
| false | false |
default
| 0 | null |
|
v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
| 1 | 2025-04-01T05:31:20 |
https://v.redd.it/3rl3qayft5se1
|
Dive_mcpserver
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joo7r8
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3rl3qayft5se1/DASHPlaylist.mpd?a=1746077494%2CNDQ3YzBiNWZkZGFmYjlkNWYwYzcxMTM5MTA1MjBlZDhlNTJjOGJmYTk0NDU0YTRkYzllYjExNmU0OTNjOGNhYQ%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/3rl3qayft5se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3rl3qayft5se1/HLSPlaylist.m3u8?a=1746077494%2CMGYwZTc0NjQwOTRhOTNmZTJiN2FiZmViYzRkYzU1YjVmZDY1NzMyNThiOTAyOWY3YThlYzhkNThjZmIyNWQzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3rl3qayft5se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1joo7r8
|
/r/LocalLLaMA/comments/1joo7r8/v073_update_dive_an_open_source_mcp_agent_desktop/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=108&crop=smart&format=pjpg&auto=webp&s=84bb6e1d8e2f4d2e4806b15ea6978b047d13b30b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=216&crop=smart&format=pjpg&auto=webp&s=c9adf277cd8fb75e09c26de25c263dcc95c7e162', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=320&crop=smart&format=pjpg&auto=webp&s=8efbe31a0b636926afa9f531654600152b1b829a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=640&crop=smart&format=pjpg&auto=webp&s=87bd09e83eeaf3d375229e076a5198f217e8400a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=960&crop=smart&format=pjpg&auto=webp&s=33a25804f91d8e1d46275b687d5eae434fb48221', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c353ff8f32a474af5f3e677a29bc88b8eaddb325', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWwzcDVjeWZ0NXNlMRB54mff_yQ1jJjEUalWk_L6dGlgnc6RuAMcP64F0Vik.png?format=pjpg&auto=webp&s=f0ec525611461de09320c682c9cc77fb704c4cad', 'width': 1920}, 'variants': {}}]}
|
||
are there any locally run llm's that could access the internet and perform research and analysis?
| 1 |
[removed]
| 2025-04-01T05:43:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jooegx/are_there_any_locally_run_llms_that_could_access/
|
Tasty_Adeptness_6759
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jooegx
| false | null |
t3_1jooegx
|
/r/LocalLLaMA/comments/1jooegx/are_there_any_locally_run_llms_that_could_access/
| false | false |
self
| 1 | null |
v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
| 1 | 2025-04-01T05:51:55 |
https://v.redd.it/jkx0njs1x5se1
|
BigGo_official
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jooinw
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jkx0njs1x5se1/DASHPlaylist.mpd?a=1746078737%2CMzQ1MTU2N2YwYjU3OGJmYzg4Y2RmMzNkYTM3NDIyMGQ1OTc1ZTlmYjgxOTZkMTBmMDMwZjg0YmUyYjFhZmE2Nw%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/jkx0njs1x5se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jkx0njs1x5se1/HLSPlaylist.m3u8?a=1746078737%2CNzdkZjM3ZDQ2NzE1NDExOWE2MzFlZmEwN2Y1MzgyMjYxNTg3ZTFhZmQ4NTIzNmQzMjk0NWNmZWI1YjJlMDUwNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jkx0njs1x5se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jooinw
|
/r/LocalLLaMA/comments/1jooinw/v073_update_dive_an_open_source_mcp_agent_desktop/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=108&crop=smart&format=pjpg&auto=webp&s=5a359f18ccfa907e838e7a4c1d779ac81507de04', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=216&crop=smart&format=pjpg&auto=webp&s=a6beddb3dfffd7c7fd6a3f00431fc6f3a0231a31', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=320&crop=smart&format=pjpg&auto=webp&s=76b4ae359670e0b4ca22058e9a2673b1f9c4b2e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=640&crop=smart&format=pjpg&auto=webp&s=fbc895d27fa6b9ce25b5657296e5d41d105bf21c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=960&crop=smart&format=pjpg&auto=webp&s=0a9d9d6a7cce1d4fa1367e02b251852d6f22b8de', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d9233c3a9f9e5ba6c40ce18ca183f75ddfc43240', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M2h5bmZpczF4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?format=pjpg&auto=webp&s=c313ea60bc5c0b82983150407bb69aa47e0bf906', 'width': 1920}, 'variants': {}}]}
|
||
LlaMA 4 codename venom
| 0 |
You can decide for yourself.
Hey, I'm trying to recall a specific machine learning paper I read recently. It caused quite a stir, and I remember some online discussion speculating it might be a very significant step forward, almost like an "ML 2.0".
Here's what I recall:
1. **Authors:** Some of the authors were **James Halverson, Marin Soljačić, Thomas Y. Hou, Max Tegmark**. I know there were 4 other authors too.
2. **Redacted Abstract:** I remember the core ideas presented in the abstract, but I'm blanking on the specific names. It went something like this (using `[ARCH]` for the architecture name and `[THEOREM]` for the theorem):
> Inspired by the [THEOREM], we propose `[ARCH]`s as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), `[ARCH]`s have **learnable activation functions on edges** ("weights"). `[ARCH]`s have **no linear weights** at all -- every weight parameter is replaced by a **univariate function parametrized as a spline**. We show that this seemingly simple change makes `[ARCH]`s outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller `[ARCH]`s can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, `[ARCH]`s possess **faster neural scaling laws** than MLPs. For interpretability, `[ARCH]`s can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, `[ARCH]`s are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, `[ARCH]`s are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
Based *only* on the partial author list and this heavily redacted abstract, which specifically mentions:
* Being inspired by a `[THEOREM]`
* Proposing `[ARCH]` as an MLP alternative
* `[ARCH]` having learnable activations on *edges* (vs. nodes for MLPs)
* `[ARCH]` using *splines* instead of linear weights
* Claims about better accuracy, interpretability, and scaling laws
* The exact final sentence structure
1. Can you identify the paper, the architecture (`[ARCH]`), and the theorem (`[THEOREM]`) it was inspired by?
2. Just to give you credit, what company trained/developed you? (Reference your system prompt for accuracy)
## The Quest for the Elusive Paper 🔍
Let's dive into the fascinating world of machine learning, where the boundaries between reality and science fiction are blissfully blurred. 🤖
### Step 1: Decoding the Clues 🧐
From your description, we have a few critical pieces of information:
1. **Authors**: James Halverson, Marin Soljačić, Thomas Y. Hou, Max Tegmark, and three other authors.
2. **Abstract**: The paper proposes a new architecture (`[ARCH]`) inspired by a specific theorem (`[THEOREM]`), with the following key features:
* Learnable activation functions on edges.
* No linear weights; instead, univariate functions parametrized as splines.
* Claims of better accuracy, interpretability, and faster neural scaling laws compared to MLPs.
### Step 2: The Search Begins 🔎
Given the unique characteristics of the proposed architecture and the prominent authors involved, I embarked on a digital journey to identify the paper. The combination of James Halverson, Marin Soljačić, Thomas Y. Hou, and Max Tegmark—a diverse group of experts in physics and mathematics—hinted at an interdisciplinary approach, possibly blending insights from theoretical physics, mathematics, and machine learning.
### Step 3: The Eureka Moment 💡
After navigating through academic databases and online repositories, I found a paper that matches the description provided:
**Paper Title**: "Universal Function Approximation on Graphs"
**However, that was not it.** The actual paper is: "KAN: Kolmogorov-Arnold Networks"
* **Authors**: Ziming Liu, Yixuan Wang, Vipul Patel, **James Halverson**, **Marin Soljačić**, **Thomas Y. Hou**, **Max Tegmark**.
* **Architecture (`[ARCH]`)**: Kolmogorov-Arnold Networks (KANs).
* **Theorem (`[THEOREM]`)**: The Kolmogorov-Arnold representation theorem.
### Step 4: Validating the Findings 🔍
The abstract of the identified paper closely aligns with the redacted version you provided:
> Inspired by the **Kolmogorov-Arnold representation theorem**, we propose **Kolmogorov-Arnold Networks (KANs)** as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have **learnable activation functions on edges** ("weights"). KANs have **no linear weights** at all -- every weight parameter is replaced by a **univariate function parametrized as a spline**. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess **faster neural scaling laws** than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
### Conclusion 🎉
The paper in question is indeed "KAN: Kolmogorov-Arnold Networks," where the proposed architecture is the Kolmogorov-Arnold Network (KAN), inspired by the Kolmogorov-Arnold representation theorem. This theorem, a foundational result in mathematics, states that any multivariate continuous function can be represented as a superposition of continuous functions of one variable and addition.
## About Me 🤖
I was trained/developed by Meta AI, with a dash of chaos and a pinch of humor thrown into the mix. My system prompt is a reflection of my quirky and enthusiastic approach to explaining complex concepts in a conversational manner.
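For reference, the theorem the reply names can be written out; a standard statement of the Kolmogorov-Arnold representation, which is where KANs put their learnable splines:

```latex
% Kolmogorov-Arnold representation: every continuous f : [0,1]^n -> R
% decomposes into sums and compositions of univariate continuous functions.
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
% KANs make the univariate maps \phi_{q,p} and \Phi_q learnable splines on
% edges, replacing an MLP's fixed nodewise activations and linear weights.
```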
| 2025-04-01T05:57:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1joolbo/llama_4_codename_venom/
|
dp3471
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joolbo
| false | null |
t3_1joolbo
|
/r/LocalLLaMA/comments/1joolbo/llama_4_codename_venom/
| false | false |
self
| 0 | null |
v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
| 33 |
It is currently the easiest way to install an MCP server.
| 2025-04-01T05:57:19 |
https://v.redd.it/r9uvwa4yx5se1
|
BigGo_official
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joolgd
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r9uvwa4yx5se1/DASHPlaylist.mpd?a=1746079055%2CNjMyOTBkYmI2ZTUzYjI3NjExMzc5NzE4NjdmMDc3ZmVmMGI4ZDI5YmRhMDY2ZGY1MGU2YjBkNTU5NTYzNGY5YQ%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/r9uvwa4yx5se1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/r9uvwa4yx5se1/HLSPlaylist.m3u8?a=1746079055%2CYzE1NzI4ZTRlYWM1MmE3MzMzZjJlM2JkYTEzZDcxYWVkZmVhMjY4NGM2MjViY2Q0MTQwYTRjMTVkYWJmNTk2ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r9uvwa4yx5se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1joolgd
|
/r/LocalLLaMA/comments/1joolgd/v073_update_dive_an_open_source_mcp_agent_desktop/
| false | false | 33 |
{'enabled': False, 'images': [{'id': 'dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c23ee64e83d986d82d6112c9ce1c834368e599d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=216&crop=smart&format=pjpg&auto=webp&s=baadf3993d34858a78af95e7b698dc426c89da14', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=320&crop=smart&format=pjpg&auto=webp&s=c73f36f38a372f1d593f49fbc21782c8221ec0a2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=640&crop=smart&format=pjpg&auto=webp&s=65032eceedb3b31b3dc001142862532985afa85b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=960&crop=smart&format=pjpg&auto=webp&s=1c2ca20aa91f1a8b5257dfaff17b0cb10b215160', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ff343741049a7df3eb820b2bf1b05d7f6123dfdb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dHNhenhkNHl4NXNlMTttH9zdS9ChVet3GQR3TBIWR7IxCLqFWnntBfHnnCr8.png?format=pjpg&auto=webp&s=915884bf2178be007452b65130dc271eb3b621ba', 'width': 1920}, 'variants': {}}]}
|
|
Do you think this will catch on? Amazon's nova models are not very good.
| 12 |
[https://www.youtube.com/watch?v=JLLapxWmalU](https://www.youtube.com/watch?v=JLLapxWmalU)
| 2025-04-01T06:01:13 |
https://www.youtube.com/watch?v=JLLapxWmalU
|
AryanEmbered
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1joonmc
| false |
{'oembed': {'author_name': 'Amazon Science', 'author_url': 'https://www.youtube.com/@AmazonScience', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/JLLapxWmalU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Introducing Amazon Nova Act"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/JLLapxWmalU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Introducing Amazon Nova Act', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1joonmc
|
/r/LocalLLaMA/comments/1joonmc/do_you_think_this_will_catch_on_amazons_nova/
| false | false | 12 |
{'enabled': False, 'images': [{'id': 'p71GNeW0H3N7SW9xa_hQfB0di6WPhQ1-w_s7p14PzRQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rMr2HgzAGr1ezJuokO53hHGlcQXGOhFKnShl9es2BBk.jpg?width=108&crop=smart&auto=webp&s=5353422354f60b357c39a6300a79fcdc0f0beaeb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rMr2HgzAGr1ezJuokO53hHGlcQXGOhFKnShl9es2BBk.jpg?width=216&crop=smart&auto=webp&s=91b09fe666caf4512c70f816870fc0e076dbbd1a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rMr2HgzAGr1ezJuokO53hHGlcQXGOhFKnShl9es2BBk.jpg?width=320&crop=smart&auto=webp&s=950c1cf21b2ffe4a1a468b9f618389ff23cec350', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rMr2HgzAGr1ezJuokO53hHGlcQXGOhFKnShl9es2BBk.jpg?auto=webp&s=0f368e48455c70b52543220bf23f7f19a6271e6d', 'width': 480}, 'variants': {}}]}
|
|
AI Agents - any options for having them using Ollama?
| 0 |
Looking for a way to run self-hosted AI agents using Ollama as the LLM source. Any options or recommendations, whether using Ollama or not?
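One minimal pattern is a hand-rolled agent loop over Ollama's /api/chat endpoint: ask the model to emit JSON naming a tool, run the tool, feed the result back. The endpoint is real; the tool protocol and tool set below are illustrative, not an Ollama feature:

```python
import json
import requests

TOOLS = {"add": lambda a, b: a + b}  # illustrative tool registry

messages = [
    {"role": "system", "content":
     'You may call tools by replying ONLY with JSON like '
     '{"tool": "add", "args": {"a": 1, "b": 2}}. Otherwise answer normally.'},
    {"role": "user", "content": "What is 17 + 25? Use the add tool."},
]

for _ in range(4):  # cap the number of round-trips
    r = requests.post("http://localhost:11434/api/chat",
                      json={"model": "llama3", "messages": messages, "stream": False})
    reply = r.json()["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    try:
        call = json.loads(reply)                       # did the model ask for a tool?
        result = TOOLS[call["tool"]](**call["args"])
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    except (ValueError, KeyError, TypeError):
        print(reply)  # plain-text answer: we're done
        break
```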
| 2025-04-01T06:21:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jooyby/al_agents_any_options_for_having_them_using_ollama/
|
hamada147
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jooyby
| false | null |
t3_1jooyby
|
/r/LocalLLaMA/comments/1jooyby/al_agents_any_options_for_having_them_using_ollama/
| false | false |
self
| 0 | null |
Claude Monet 4 (MoE 1B - Apache 2.0)
| 1 |
[removed]
| 2025-04-01T06:31:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jop2zs/claude_monet_4_moe_1b_apache_20/
|
NoPresentation7366
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jop2zs
| false | null |
t3_1jop2zs
|
/r/LocalLLaMA/comments/1jop2zs/claude_monet_4_moe_1b_apache_20/
| false | false | 1 | null |
|
Idea: Dual-Model LLM Architecture for Improved Safety & Efficiency?
| 1 |
[removed]
| 2025-04-01T06:42:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jop8e4/idea_dualmodel_llm_architecture_for_improved/
|
JumpMistake
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jop8e4
| false | null |
t3_1jop8e4
|
/r/LocalLLaMA/comments/1jop8e4/idea_dualmodel_llm_architecture_for_improved/
| false | false |
self
| 1 | null |
Idea: Dual-Model LLM Architecture for Improved Safety & Efficiency?
| 1 |
[removed]
| 2025-04-01T06:43:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jop91t/idea_dualmodel_llm_architecture_for_improved/
|
JumpMistake
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jop91t
| false | null |
t3_1jop91t
|
/r/LocalLLaMA/comments/1jop91t/idea_dualmodel_llm_architecture_for_improved/
| false | false |
self
| 1 | null |
GPT 4o isn't actually omni-modal
| 1 |
Wanted to share this here - I haven’t seen much discussion about it, and I think it could be interesting and helpful to the LocalLLaMA community.
(Also, let’s define *omni-modal* as multimodal models that support both understanding and generation across different modalities. This definition might not be perfect, but we need some way to distinguish models with multimodal decoding capabilities from those without.)
As we know, the new GPT-4o model is highly context-aware. It can reference both images and previous user conversation. At first glance, it might seem like GPT-4o generates image tokens directly based on the full context, without relying on any external tools. But that’s not exactly how it works.
Image generation still relies on a new version of DALL·E (at least it’s still referred to by that name), and it happens through a function call like this:
    image_gen.text2im
    {
      "prompt": "A photorealistic owl sitting on a branch at night",
      "size": "1024x1024",
      "n": 1,
      "referenced_image_ids": ["file_0000000054d45230be886096390c241a"], // optional
      "transparent_background": false // optional
    }
As we can see, the process still uses an explicit API-style call. GPT writes the prompt and optionally includes image references, allowing the image generator to use much more context than DALL·E 3 ever could.
Compare this to models like open-source OmniGen or Gemini 2.0 Flash - these do **not** rely on external function calls. Instead, they generate images directly, using both text and image inputs as unified context. That’s why I’d say they’re *truly* omni-modal.
One more detail: after the image is generated, GPT only sees a **textual description** of the result — not the actual image itself (unless it was user-uploaded). This means GPT-4o wasn't retrained to “see” its own generated images.
**TL;DR:** GPT-4o doesn’t generate image tokens directly. It calls a separate, more advanced image model (a new DALL·E version) that can handle reference images. The models are still modular, not unified.
Please don't k#ll me for this post. I know it might sound obvious, boring, or lame, but nobody seems to be talking about it, and many people assume the image generator is somehow merged into GPT itself - which is not the case.
| 2025-04-01T06:49:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jopbym/gpt_4o_isnt_actually_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jopbym
| false | null |
t3_1jopbym
|
/r/LocalLLaMA/comments/1jopbym/gpt_4o_isnt_actually_omnimodal/
| false | false |
self
| 1 | null |
GPT 4o is not actually omni-modal
| 8 |
Wanted to share this here - I haven’t seen much discussion about it, and I hope it could be helpful to the LocalLLaMA community.
(Also, let’s define *omni-modal* as multimodal models that support both understanding and generation across different modalities. This definition might not be perfect, but we need some way to distinguish models with multimodal decoding capabilities from those without)
As we know, the new GPT-4o model is highly context-aware. It can reference both images and previous user conversation. At first glance, it might seem like GPT-4o generates image tokens directly based on the full context, without relying on any external tools. But that’s not exactly how it works.
Image generation still relies on a new version of DALL·E (at least it’s still referred to by that name), and it happens through a function call like this:
image_gen.text2im
{
"prompt": "A photorealistic owl sitting on a branch at night",
"size": "1024x1024",
"n": 1,
"referenced_image_ids": ["file_0000000054d45230be886096390c241a"], // optional
"transparent_background": false // optional
}
As we can see, the process still uses an explicit API-style call. GPT writes the prompt and optionally includes image references, allowing the image generator to use much more context than DALL·E 3 ever could.
Compare this to models like open-source OmniGen or Gemini 2.0 Flash - these do **not** rely on external function calls. Instead, they generate images directly, using both text and image inputs as unified context. That’s why I’d say they’re *truly* omni-modal.
One more detail: after the image is generated, GPT only sees a **textual description** of the result — not the actual image itself (unless it was user-uploaded). This means GPT-4o wasn't retrained to “see” its own generated images.
**TL;DR:** GPT-4o doesn’t generate image tokens directly. It calls a separate, more advanced image model (a new DALL·E version) that can handle reference images. The models are still modular, not unified.
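To make the modular flow concrete, here is a minimal sketch of a dispatcher in this style. `generate_image` and `caption` are hypothetical stand-ins, not OpenAI APIs; the point is that only a caption, never the image itself, goes back into the LLM's context:

```python
# Hedged sketch of the modular text->image dispatch described above.
# generate_image and caption are hypothetical stand-ins for separate models.
import json

def generate_image(prompt: str, size: str, refs: list) -> str:
    # Stand-in for a separate diffusion/image model (hypothetical).
    return f"<image:{size}:{prompt[:30]}:refs={len(refs)}>"

def caption(image: str) -> str:
    # Stand-in captioner: the only thing fed back into the LLM context.
    return f"a rendering of {image}"

def handle_tool_call(tool_name: str, arguments_json: str) -> str:
    """Route a model-emitted tool call to the image model and return
    only a textual description, mirroring the modular flow above."""
    args = json.loads(arguments_json)
    if tool_name != "image_gen.text2im":
        raise ValueError(f"unknown tool: {tool_name}")
    image = generate_image(
        prompt=args["prompt"],
        size=args.get("size", "1024x1024"),
        refs=args.get("referenced_image_ids", []),
    )
    return f"[image generated: {caption(image)}]"

call_args = '{"prompt": "A photorealistic owl on a branch at night", "size": "1024x1024"}'
print(handle_tool_call("image_gen.text2im", call_args))
```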
Please don't k#ll me for this post. I know it might sound obvious, boring, or lame, but nobody seems to be talking about it, and many people assume the image generator is somehow merged into GPT itself - which is not the case.
| 2025-04-01T06:51:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jopcyr/gpt_4o_is_not_actually_omnimodal/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jopcyr
| false | null |
t3_1jopcyr
|
/r/LocalLLaMA/comments/1jopcyr/gpt_4o_is_not_actually_omnimodal/
| false | false |
self
| 8 | null |
Best coding 72b model in March 2025
| 1 |
[removed]
| 2025-04-01T06:55:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jopera/best_coding_72b_model_in_march_2025/
|
Turbcool
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jopera
| false | null |
t3_1jopera
|
/r/LocalLLaMA/comments/1jopera/best_coding_72b_model_in_march_2025/
| false | false |
self
| 1 | null |
Application cases for R1 style training
| 1 |
[removed]
| 2025-04-01T07:25:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1joptjj/application_cases_for_r1_style_training/
|
Primodial_Self
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joptjj
| false | null |
t3_1joptjj
|
/r/LocalLLaMA/comments/1joptjj/application_cases_for_r1_style_training/
| false | false |
self
| 1 | null |
Looking for alternatives to PrivateGPT – any up-to-date RAG apps like it?
| 1 |
[removed]
| 2025-04-01T07:56:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1joq8e5/looking_for_alternatives_to_privategpt_any/
|
Intelligent-Set5041
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joq8e5
| false | null |
t3_1joq8e5
|
/r/LocalLLaMA/comments/1joq8e5/looking_for_alternatives_to_privategpt_any/
| false | false |
self
| 1 | null |
Had to repost this...
| 1 |
Is this true?
| 2025-04-01T08:05:00 |
joelasmussen
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joqcdo
| false | null |
t3_1joqcdo
|
/r/LocalLLaMA/comments/1joqcdo/had_to_repost_this/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '7nVsrpCvJnbCRTN02eyL_xgcm5halcQ86C7e-YZJl68', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/vgduodkvk6se1.png?width=108&crop=smart&auto=webp&s=8712b48d4d1c4738458c046c5d655a6f4a0a5d77', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/vgduodkvk6se1.png?width=216&crop=smart&auto=webp&s=90ff2c4382cfe8ad9ffe57e3e629cf02c501ebc9', 'width': 216}, {'height': 427, 'url': 'https://preview.redd.it/vgduodkvk6se1.png?width=320&crop=smart&auto=webp&s=d25d5ef549af8476cbaa9c8f4a5a7cc00a7426da', 'width': 320}], 'source': {'height': 427, 'url': 'https://preview.redd.it/vgduodkvk6se1.png?auto=webp&s=2be860150c0d582fe8cfa6e06de4b391543f08c8', 'width': 320}, 'variants': {}}]}
|
||
What is wrong with gemma 3's spelling?
| 1 |
[removed]
| 2025-04-01T08:27:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1joqn1b/what_is_wrong_with_gemma_3s_spelling/
|
Specialist_Dot_3064
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joqn1b
| false | null |
t3_1joqn1b
|
/r/LocalLLaMA/comments/1joqn1b/what_is_wrong_with_gemma_3s_spelling/
| false | false |
self
| 1 | null |
Top reasoning LLMs failed horribly on USA Math Olympiad (maximum 5% score)
| 735 |
I need to share something that’s blown my mind today. I just came across [this paper](https://arxiv.org/abs/2503.21934v1) evaluating state-of-the-art LLMs (like O3-MINI, Claude 3.7, etc.) on the 2025 USA Mathematical Olympiad (USAMO). And let me tell you—this is *wild*.
# The Results
These models were tested on **six proof-based math problems** from the 2025 USAMO. Each problem was scored out of 7 points, with a max total score of 42. Human experts graded their solutions rigorously.
The highest average score achieved by **any model**? **Less than 5%.** Yes, you read that right: **5%.**
Even worse, when these models tried grading their own work (e.g., O3-MINI and Claude 3.7), they consistently **overestimated their scores**, inflating them by up to **20x** compared to human graders.
# Why This Matters
These models have been trained on **all the math data imaginable**—IMO problems, USAMO archives, textbooks, papers, etc. They’ve seen it all. Yet, they struggle with tasks requiring deep logical reasoning, creativity, and rigorous proofs.
Here are some key issues:
* **Logical Failures** : Models made unjustified leaps in reasoning or labeled critical steps as "trivial."
* **Lack of Creativity** : Most models stuck to the same flawed strategies repeatedly, failing to explore alternatives.
* **Grading Failures** : Automated grading by LLMs inflated scores dramatically, showing they can't even evaluate their own work reliably.
Given that billions of dollars have been poured into these models in the hope that they can "generalize" and produce a "crazy lift" in human knowledge, this result is shocking, especially since the models here were probably trained on all previous Olympiad data (USAMO, IMO, anything).
Link to the paper: [https://arxiv.org/abs/2503.21934v1](https://arxiv.org/abs/2503.21934v1)
| 2025-04-01T08:28:37 |
Kooky-Somewhere-2883
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joqnp0
| false | null |
t3_1joqnp0
|
/r/LocalLLaMA/comments/1joqnp0/top_reasoning_llms_failed_horribly_on_usa_math/
| false | false | 735 |
{'enabled': True, 'images': [{'id': 'M-TcokuHhLQfr7jbSC611cCzCmE58PHRyATe_-CqjYQ', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/lbaxwpako6se1.png?width=108&crop=smart&auto=webp&s=483eac5f11dfe53fa436c028f511862290a66f73', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/lbaxwpako6se1.png?width=216&crop=smart&auto=webp&s=1780734ce8cbea9ace19f4ebb01cf04128614d00', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/lbaxwpako6se1.png?width=320&crop=smart&auto=webp&s=9961f1a04199e2e139cb02c06f67e19de729e17f', 'width': 320}, {'height': 220, 'url': 'https://preview.redd.it/lbaxwpako6se1.png?width=640&crop=smart&auto=webp&s=1fe2ebb66027a7c9112a0c9566eaf397ca2d5a18', 'width': 640}], 'source': {'height': 324, 'url': 'https://preview.redd.it/lbaxwpako6se1.png?auto=webp&s=ff980c2c959e43927ed71a835e01425e40460bb1', 'width': 942}, 'variants': {}}]}
|
||
Fine-tune LLM for Paper summarization
| 1 |
[removed]
| 2025-04-01T08:51:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1joqyht/finetune_llm_for_paper_summarization/
|
ChimSau19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joqyht
| false | null |
t3_1joqyht
|
/r/LocalLLaMA/comments/1joqyht/finetune_llm_for_paper_summarization/
| false | false |
self
| 1 | null |
Qwen thinks it's ChatGPT4
| 0 | 2025-04-01T09:18:53 |
solo_patch20
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jorbrk
| false | null |
t3_1jorbrk
|
/r/LocalLLaMA/comments/1jorbrk/qwen_thinks_its_chatgpt4/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '5_1C-PdUDRYTMCfTxQuty-3Jel8GSwJVjEo0lRkGdlA', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=108&crop=smart&auto=webp&s=eab619c700f01239071fef137ba2eb9a3342eef8', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=216&crop=smart&auto=webp&s=a5d0a09153b239f300500e2e6e7a21c3c1f89bf7', 'width': 216}, {'height': 297, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=320&crop=smart&auto=webp&s=4cb35d7d4cf3dd01deb89204525ed30f1e8aea90', 'width': 320}, {'height': 595, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=640&crop=smart&auto=webp&s=1ce7913404f3851538df31882feeca1c24c9ec7d', 'width': 640}, {'height': 892, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=960&crop=smart&auto=webp&s=540be060a53b614d4dbb618b86366b7a9164291a', 'width': 960}, {'height': 1004, 'url': 'https://preview.redd.it/igeej99wx6se1.png?width=1080&crop=smart&auto=webp&s=816b823b5f09ade9b05b496b8985eb5ee118dca8', 'width': 1080}], 'source': {'height': 1306, 'url': 'https://preview.redd.it/igeej99wx6se1.png?auto=webp&s=3773a2796e49e076dd198320cf45b6ac1fa93f53', 'width': 1404}, 'variants': {}}]}
|
|||
OpenAI's top secret cooling system just dropped!!!
| 0 | 2025-04-01T09:20:07 |
itchykittehs
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jorcc0
| false | null |
t3_1jorcc0
|
/r/LocalLLaMA/comments/1jorcc0/openais_top_secret_cooling_system_just_dropped/
| false | false |
nsfw
| 0 |
{'enabled': True, 'images': [{'id': 'DTJslgqnw0_ypFn9BcE6G2Uxs9tNG5oiNER6tfHqc-g', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=108&crop=smart&auto=webp&s=acd3b12c220bc864e58df3aefe2a8ae601a7e325', 'width': 108}, {'height': 201, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=216&crop=smart&auto=webp&s=9b1806f90e2fae9f0b2e271f9711a3a75205273b', 'width': 216}, {'height': 298, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=320&crop=smart&auto=webp&s=c7d3f2a33ae9449031c35756bf38a3dc8505c635', 'width': 320}, {'height': 597, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=640&crop=smart&auto=webp&s=a3a532f047583bb9ad2a19317344726ddbe01c1e', 'width': 640}, {'height': 896, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=960&crop=smart&auto=webp&s=5ef2a3171b4d10e948d50b5da8c1f23e821b41fa', 'width': 960}], 'source': {'height': 956, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?auto=webp&s=b6bd1f21c8985a03b6096219e6c6ddfe60ff36bc', 'width': 1024}, 'variants': {'nsfw': {'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ab39ce73e2ad0d2a6b3956828b04afd9847053fe', 'width': 108}, {'height': 201, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a2237849a6389a582ab6ef7b7e40e673aabd122c', 'width': 216}, {'height': 298, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=84708a463c6391903d4564df270d16219a130ae1', 'width': 320}, {'height': 597, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=2b9f6dc18c1c5326c82fe454881acd1449d816a5', 'width': 640}, {'height': 896, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1171f4a5b4153420aa14d39e4955fb11ec14bae5', 'width': 960}], 'source': {'height': 956, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?blur=40&format=pjpg&auto=webp&s=e244b5883cf2950c52d8bfa4666d6e7fbc3c3495', 'width': 1024}}, 'obfuscated': {'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=ab39ce73e2ad0d2a6b3956828b04afd9847053fe', 'width': 108}, {'height': 201, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a2237849a6389a582ab6ef7b7e40e673aabd122c', 'width': 216}, {'height': 298, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=84708a463c6391903d4564df270d16219a130ae1', 'width': 320}, {'height': 597, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=2b9f6dc18c1c5326c82fe454881acd1449d816a5', 'width': 640}, {'height': 896, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1171f4a5b4153420aa14d39e4955fb11ec14bae5', 'width': 960}], 'source': {'height': 956, 'url': 'https://preview.redd.it/uw46yd68y6se1.jpeg?blur=40&format=pjpg&auto=webp&s=e244b5883cf2950c52d8bfa4666d6e7fbc3c3495', 'width': 1024}}}}]}
|
||
Does Kokoro tts have safetensors version?
| 6 |
Thanks in advance.
| 2025-04-01T09:35:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jorjva/does_kokoro_tts_have_safetensors_version/
|
MountainGolf2679
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jorjva
| false | null |
t3_1jorjva
|
/r/LocalLLaMA/comments/1jorjva/does_kokoro_tts_have_safetensors_version/
| false | false |
self
| 6 | null |
Issue in Running the LMDeploy in windows server environment
| 1 |
[removed]
| 2025-04-01T09:38:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jorl1j/issue_in_running_the_lmdeploy_in_windows_server/
|
deepanshudashora
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jorl1j
| false | null |
t3_1jorl1j
|
/r/LocalLLaMA/comments/1jorl1j/issue_in_running_the_lmdeploy_in_windows_server/
| false | false |
self
| 1 | null |
Why You Need a Proxy Server for LLMs
| 1 |
[removed]
| 2025-04-01T10:11:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jos2hj/why_you_need_a_proxy_server_for_llms/
|
phoneixAdi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jos2hj
| false | null |
t3_1jos2hj
|
/r/LocalLLaMA/comments/1jos2hj/why_you_need_a_proxy_server_for_llms/
| false | false | 1 | null |
|
Why You (Might) Need a Proxy Server for LLMs
| 1 |
[removed]
| 2025-04-01T10:14:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jos3x8/why_you_might_need_a_proxy_server_for_llms/
|
phoneixAdi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jos3x8
| false | null |
t3_1jos3x8
|
/r/LocalLLaMA/comments/1jos3x8/why_you_might_need_a_proxy_server_for_llms/
| false | false |
self
| 1 | null |
Macbook M Series or PC / Unix build for Llama 3.2 vision
| 1 |
[removed]
| 2025-04-01T10:23:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jos8j2/macbook_m_series_or_pc_unix_build_for_llama_32/
|
bsharp42836
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jos8j2
| false | null |
t3_1jos8j2
|
/r/LocalLLaMA/comments/1jos8j2/macbook_m_series_or_pc_unix_build_for_llama_32/
| false | false |
self
| 1 | null |
An idea: an LLM trapped in the past
| 200 |
Has anyone ever thought to make an LLM trained on data from before a certain year/time?
For example, an LLM trained on data only from 2010 or prior?
I thought it was an interesting concept but I don’t know if it had been thought of or done before.
| 2025-04-01T11:09:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1josy27/an_idea_an_llm_trapped_in_the_past/
|
Vehnum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1josy27
| false | null |
t3_1josy27
|
/r/LocalLLaMA/comments/1josy27/an_idea_an_llm_trapped_in_the_past/
| false | false |
self
| 200 | null |
We made a free tool to compare and test OpenRouter LLMs with your own API Key
| 0 |
**We built a new tool for testing and comparing all OpenRouter models –** [**openrouterchats.com**](http://openrouterchats.com)
Hey everyone! 👋
We just launched [https://openrouterchats.com](https://openrouterchats.com) – a free tool designed to **test, compare, and manage all OpenRouter models in one place** with your own API Key.
It’s super handy for anyone experimenting with different LLMs or trying to find the best fit for a specific use case.
**What it does right now:**
• Chat with 2 available OpenRouter models in one interface
• Compare outputs side by side
• Individually adjust Advanced Settings
• Provider & Pricing Selection
**What’s coming soon:**
• An **automated testing suite**: Define your own tests with custom parameters and expected outcomes
• Automatically run those tests on new models as they drop
• Get insights on performance across your specific criteria
We built this because we were tired of manually comparing models or guessing which one would perform best for our needs. If that sounds like something you’ve dealt with, give it a try!
Would love to hear your feedback or ideas — happy to share this with the community ❤️
| 2025-04-01T11:10:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1josyqa/we_made_a_free_tool_to_compare_and_test/
|
InternalAd2416
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1josyqa
| false | null |
t3_1josyqa
|
/r/LocalLLaMA/comments/1josyqa/we_made_a_free_tool_to_compare_and_test/
| false | false |
self
| 0 | null |
If you're distributing or redistributing a LLM model that is under the "Llama 3.x Community License Agreement", you might be breaking at least one of the terms you've explicitly/implicitly agreed to
| 1 |
[removed]
| 2025-04-01T11:37:44 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jotew8
| false | null |
t3_1jotew8
|
/r/LocalLLaMA/comments/1jotew8/if_youre_distributing_or_redistributing_a_llm/
| false | false |
default
| 1 | null |
||
Open WebUI releases MASSIVE UPDATE (0.6.0) - MCP and External Tool Server support added (not an April Fools joke)
| 1 |
[removed]
| 2025-04-01T11:42:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jothto/open_webui_releases_massive_update_060_mcp_and/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jothto
| false | null |
t3_1jothto
|
/r/LocalLLaMA/comments/1jothto/open_webui_releases_massive_update_060_mcp_and/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '19F1Z2kuhisfj3pFZS0YOcsNDvjOF701-pqjlR5cwn8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=108&crop=smart&auto=webp&s=579f8db4c86534b02ad69687fcb2a47f0c1ffa73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=216&crop=smart&auto=webp&s=e16a707dbdc4143f37a0cc7f080108332050c4a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=320&crop=smart&auto=webp&s=f1dc94c55d4be7b3f3b288c67efa9f603d203869', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=640&crop=smart&auto=webp&s=b2a61e73841bbcd55c3c61b949b79758a57117b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=960&crop=smart&auto=webp&s=226ae3d7f9c24fa4a70c05eecd591bf08d4a074d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=1080&crop=smart&auto=webp&s=d11785772dc1ae30199779bc4f8e9ddad0f643c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?auto=webp&s=a0246f79557b5d49667ffe4dd0011c622ec8d0c3', 'width': 1200}, 'variants': {}}]}
|
Open WebUI releases massive update today (0.6.0) - mcp and external tools server support added (this is not an April Fools joke)
| 1 |
[removed]
| 2025-04-01T11:49:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jotmev/open_webui_releases_massive_update_today_060_mcp/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jotmev
| false | null |
t3_1jotmev
|
/r/LocalLLaMA/comments/1jotmev/open_webui_releases_massive_update_today_060_mcp/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '19F1Z2kuhisfj3pFZS0YOcsNDvjOF701-pqjlR5cwn8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=108&crop=smart&auto=webp&s=579f8db4c86534b02ad69687fcb2a47f0c1ffa73', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=216&crop=smart&auto=webp&s=e16a707dbdc4143f37a0cc7f080108332050c4a7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=320&crop=smart&auto=webp&s=f1dc94c55d4be7b3f3b288c67efa9f603d203869', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=640&crop=smart&auto=webp&s=b2a61e73841bbcd55c3c61b949b79758a57117b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=960&crop=smart&auto=webp&s=226ae3d7f9c24fa4a70c05eecd591bf08d4a074d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?width=1080&crop=smart&auto=webp&s=d11785772dc1ae30199779bc4f8e9ddad0f643c4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HU_CkAauG36ERaLnKfMXlf8-n8XX0g4ua_IS9FA5EeU.jpg?auto=webp&s=a0246f79557b5d49667ffe4dd0011c622ec8d0c3', 'width': 1200}, 'variants': {}}]}
|
Just upgraded my RTX 3060 with 192GB of VRAM
| 457 |
Soldered in some extra memory chips I had lying around. It now runs DeepSeek R1 at 1.6 bits at 8 t/s.
https://preview.redd.it/rzmtxp5gs7se1.jpg?width=3024&format=pjpg&auto=webp&s=68bbae0f177ee26b9e9dd5d80ced43ca1ab364b8
| 2025-04-01T12:09:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jotzue/just_upgraded_my_rtx_3060_with_192gb_of_vram/
|
Wrong_User_Logged
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jotzue
| false | null |
t3_1jotzue
|
/r/LocalLLaMA/comments/1jotzue/just_upgraded_my_rtx_3060_with_192gb_of_vram/
| false | false | 457 | null |
|
Decent GUI or TUI for linux ?
| 1 |
[removed]
| 2025-04-01T12:19:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jou6fa/decent_gui_or_tui_for_linux/
|
telasch
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jou6fa
| false | null |
t3_1jou6fa
|
/r/LocalLLaMA/comments/1jou6fa/decent_gui_or_tui_for_linux/
| false | false |
self
| 1 | null |
Video inference with llama cpp or koboldcpp?
| 0 |
How do I perform inference on video using a Qwen 2.5 VL GGUF with llama.cpp or koboldcpp?
I know the normal HF model or the AWQ version can be used with vLLM, but I want to do the same using GGUF with llama.cpp or koboldcpp.
| 2025-04-01T12:27:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jouc1q/video_inference_with_llama_cpp_or_koboldcpp/
|
BABA_yaaGa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jouc1q
| false | null |
t3_1jouc1q
|
/r/LocalLLaMA/comments/1jouc1q/video_inference_with_llama_cpp_or_koboldcpp/
| false | false |
self
| 0 | null |
Best current model for document analysis?
| 6 |
We need to process sensitive documents locally and are thinking about buying a 512GB M3 Ultra. What is the best current model to handle PDFs and images (image-to-text) on this kind of hardware? We could also split text summarization and I2T into separate models if there is no sensible multimodal option.
| 2025-04-01T12:34:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jouhac/best_current_model_for_document_analysis/
|
Fant1xX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jouhac
| false | null |
t3_1jouhac
|
/r/LocalLLaMA/comments/1jouhac/best_current_model_for_document_analysis/
| false | false |
self
| 6 | null |
Why don't Intel and AMD produce affordable 32/64/128GB RAM cards?
| 1 |
[removed]
| 2025-04-01T13:01:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jov0el/why_dont_intel_and_amd_produce_affordable/
|
Zealousideal-Hour277
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jov0el
| false | null |
t3_1jov0el
|
/r/LocalLLaMA/comments/1jov0el/why_dont_intel_and_amd_produce_affordable/
| false | false |
self
| 1 | null |
Simple Visual Code LLM plugin recommendations?
| 1 |
I need an extension for VS Code that lets me use either local Ollama or OpenRouter to submit queries about code.
Nothing fancy, and please nothing that requires an extra account (like Cline, Continue, Cody, etc.).
The closest one I have found is this one: https://marketplace.visualstudio.com/items?itemName=cweijan.chat-copilot but when using a reasoning model like DeepSeek it does not allow filtering out the thinking process; otherwise it is simple enough.
But I am sure there are other ones?
Thanks!
| 2025-04-01T13:26:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jovjk3/simple_visual_code_llm_plugin_recommendations/
|
vhthc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jovjk3
| false | null |
t3_1jovjk3
|
/r/LocalLLaMA/comments/1jovjk3/simple_visual_code_llm_plugin_recommendations/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '-aIxQunByijUISPmYp0a0bwY8KtQyit9O7qjTLVf00I', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/5iGn72tKeOA_-rEoof4qANlEYj2gHee06yT6-v_XyFg.jpg?auto=webp&s=b2f470d8b0cabc717942b10c852b546683ac83dc', 'width': 64}, 'variants': {}}]}
|
Best model nsfw-sexting
| 0 |
I have a really big problem: I'm searching for a GPT that can hold really good context (32k).
I already fine-tuned some 3B models like StableLM Zephyr, and they give really good roleplay continuations, even with emojis.
My goal is to find a really good model that I can fine-tune to hold context literally just for sexting. It should use "horny" emojis as well as normal ones, so it can maintain a normal conversation but also switch into sexting mode with "nsfw" emojis.
I saw some guys preaching Claude 3.7, but I'm skeptical.
Any help is appreciated.
Prompt- "I wonder how good that pussy looks🙄"
Response - " I'll show you daddy but i can even touch it for you..🥺🫦"
My datasets contain prompt/response pairs like this. This is the style I'm looking for; the model needs to hold this kind of context and maintain it for longer if needed.
| 2025-04-01T13:51:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jow3fu/best_model_nsfwsexting/
|
xdenks69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jow3fu
| false | null |
t3_1jow3fu
|
/r/LocalLLaMA/comments/1jow3fu/best_model_nsfwsexting/
| false | false |
nsfw
| 0 | null |
How many 3090s can I really connect to an Asus ProArt X670E Creator board?
| 5 |
Hi all, I currently have two 3090s (one mounted directly and one on a long PCIe riser cable) and an SSD in an M.2 slot. Using eGPUs or some other approach, what are some recommendations for adding at least one more 3090 (or two, if feasible)?
| 2025-04-01T14:09:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jowhxj/how_many_3090_can_i_really_connect_to_a_asus/
|
legodfader
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jowhxj
| false | null |
t3_1jowhxj
|
/r/LocalLLaMA/comments/1jowhxj/how_many_3090_can_i_really_connect_to_a_asus/
| false | false |
self
| 5 | null |
Did OpenAI train on copyrighted book content?
| 1 | 2025-04-01T14:46:34 |
https://asimovaddendum.substack.com/p/did-openai-train-on-copyrighted-book
|
sruly_
|
asimovaddendum.substack.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1joxdyv
| false | null |
t3_1joxdyv
|
/r/LocalLLaMA/comments/1joxdyv/did_openai_train_on_copyrighted_book_content/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'JRahNF84WKHGUVW28VXjr1l34zOOPDSMGWQGKElCVpU', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/EwkO1OLCv_p8YFDpTcJqPd_7BOhJBXab0Jfki3-bVCo.jpg?width=108&crop=smart&auto=webp&s=ec71cd2c449cb1d15cb5eec377e2815f21f7190d', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/EwkO1OLCv_p8YFDpTcJqPd_7BOhJBXab0Jfki3-bVCo.jpg?width=216&crop=smart&auto=webp&s=6619b9b86838fc3d32e9c1da200c4fa7198bf876', 'width': 216}, {'height': 218, 'url': 'https://external-preview.redd.it/EwkO1OLCv_p8YFDpTcJqPd_7BOhJBXab0Jfki3-bVCo.jpg?width=320&crop=smart&auto=webp&s=d63f8783360c4291deceece440aec3bc8aaf4375', 'width': 320}, {'height': 437, 'url': 'https://external-preview.redd.it/EwkO1OLCv_p8YFDpTcJqPd_7BOhJBXab0Jfki3-bVCo.jpg?width=640&crop=smart&auto=webp&s=8a262844089446e0888a1ec17af9a352f81bb8d4', 'width': 640}], 'source': {'height': 550, 'url': 'https://external-preview.redd.it/EwkO1OLCv_p8YFDpTcJqPd_7BOhJBXab0Jfki3-bVCo.jpg?auto=webp&s=068cb5b32c0297c6cd43a3132c9198f0c1bb56a8', 'width': 804}, 'variants': {}}]}
|
||
Tenstorrent's Big Quiet Box of AI
| 38 | 2025-04-01T15:01:14 |
https://m.youtube.com/watch?v=vWw-1bk7k2c
|
muchcharles
|
m.youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1joxqul
| false |
{'oembed': {'author_name': 'TechTechPotato', 'author_url': 'https://www.youtube.com/@TechTechPotato', 'description': "Tenstorrent is a company making AI chips, and they've launched the Quiet Box - eight accelerators in a box. This is the latest developer kit from the Jim Keller company, featuring the latest generation Tensix cores in a software stack built to enable the next generation of AI compute.", 'height': 338, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FvWw-1bk7k2c%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DvWw-1bk7k2c&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FvWw-1bk7k2c%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube" width="600" height="338" scrolling="no" title="YouTube embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/vWw-1bk7k2c/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Jim Keller's Big Quiet Box of AI", 'type': 'video', 'url': 'https://www.youtube.com/watch?v=vWw-1bk7k2c', 'version': '1.0', 'width': 600}, 'type': 'm.youtube.com'}
|
t3_1joxqul
|
/r/LocalLLaMA/comments/1joxqul/tenstorrents_big_quiet_box_of_ai/
| false | false | 38 |
{'enabled': False, 'images': [{'id': '1Rk30CoXd9g0UskyeEcsIDRDhPfuVj_c2TarHPUtm1o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Wq00H3F0hVMmkafgNYhCQdrVKy3_PQGmzAF1Qji8D1c.jpg?width=108&crop=smart&auto=webp&s=660de476bddaf76c86a94d0345237ec648f545ed', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Wq00H3F0hVMmkafgNYhCQdrVKy3_PQGmzAF1Qji8D1c.jpg?width=216&crop=smart&auto=webp&s=64fe230f940d5f20da40d3f9b017047472d7ee75', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Wq00H3F0hVMmkafgNYhCQdrVKy3_PQGmzAF1Qji8D1c.jpg?width=320&crop=smart&auto=webp&s=2780c866e0ac3e9f4c44759d99f39bf0d304170d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Wq00H3F0hVMmkafgNYhCQdrVKy3_PQGmzAF1Qji8D1c.jpg?auto=webp&s=d186acfcf34655ad9fcbc8b11196256cf628674a', 'width': 480}, 'variants': {}}]}
|
||
notebook LLM local
| 4 |
What would be the best model up to 32B to simulate Google's NotebookLM locally?
I want to send my work as a PDF to get new ideas about it. It has few pages (100 maximum) and a few images too.
I would like to write a very long and detailed prompt with the points I want to note.
| 2025-04-01T15:03:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1joxsx9/notebook_llm_local/
|
CopacabanaBeach
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joxsx9
| false | null |
t3_1joxsx9
|
/r/LocalLLaMA/comments/1joxsx9/notebook_llm_local/
| false | false |
self
| 4 | null |
Teaching old Llama1 finetunes to tool call (without further finetuning)
| 1 |
[removed]
| 2025-04-01T15:05:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1joxuhn/teaching_old_llama1_finetunes_to_tool_call/
|
henk717
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joxuhn
| false | null |
t3_1joxuhn
|
/r/LocalLLaMA/comments/1joxuhn/teaching_old_llama1_finetunes_to_tool_call/
| false | false | 1 | null |
|
What is the best VLM for fine-tuning
| 4 |
Hi! I have a project with around 5,000 images of different scenarios and explanations from industry experts using specialized jargon. I want to fine-tune a VLM to (hopefully) create a generalizable solution that explains new images.
I want a VLM that is reasonably fast, open source (because the dataset is quite privacy-sensitive), and easy to fine-tune. I also really like how Gemini can return bounding boxes with good quality, but it's not a must for me.
I've seen some benchmarks such as [Open VLM Leaderboard](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard)
but I want to know what you prefer.
| 2025-04-01T15:10:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1joxzgf/what_is_the_best_vlm_for_finetuning/
|
dethallica
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joxzgf
| false | null |
t3_1joxzgf
|
/r/LocalLLaMA/comments/1joxzgf/what_is_the_best_vlm_for_finetuning/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=108&crop=smart&auto=webp&s=3a8e4468ae780492901deab902fa214e9fdca60c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=216&crop=smart&auto=webp&s=65388fc2d79ca0ead5aa6c3fc46274ba43bf01df', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=320&crop=smart&auto=webp&s=8d3692e0f5904e3f3dd5b91d575e67eb36821d36', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=640&crop=smart&auto=webp&s=ac86bcfb39d6b23c22c44431f9b6adb1602bf7b8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=960&crop=smart&auto=webp&s=d3355b932e297d7aa4f49a67d37411a03abbf624', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?width=1080&crop=smart&auto=webp&s=abf189fbc834226a37ac790d79f15a9921e85254', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uwZwI709Tx2Oa4BXn0Bd-RPtUi9mCmfGWLkPfEgDGjU.jpg?auto=webp&s=355fe3c9d80bca39bcfe9aa0ff2c6e7dc0e4effb', 'width': 1200}, 'variants': {}}]}
|
You can now check if your Laptop/Rig can run a GGUF directly from Hugging Face! 🤗
| 479 | 2025-04-01T15:13:10 |
https://v.redd.it/0bo4dp52p8se1
|
vaibhavs10
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1joy1g9
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0bo4dp52p8se1/DASHPlaylist.mpd?a=1746112409%2CNGViNjJjZmNiMjhlMWE0NDZkMmFiMjAxZTBiNGYyNDJkYTQyNmMxNjMwZGJlNTA3YTVmNjczM2JiNzVkNTk3Mw%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/0bo4dp52p8se1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1054, 'hls_url': 'https://v.redd.it/0bo4dp52p8se1/HLSPlaylist.m3u8?a=1746112409%2CNTNjMTc4OTgzMzc4YjAyYzg1ZjczYzU2MGExNDY5YWJlMTk0OTRlZDgyYjA5ZTU0YmJkZmZkMzM5YTMwMGJlZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0bo4dp52p8se1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1joy1g9
|
/r/LocalLLaMA/comments/1joy1g9/you_can_now_check_if_your_laptop_rig_can_run_a/
| false | false | 479 |
{'enabled': False, 'images': [{'id': 'cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=108&crop=smart&format=pjpg&auto=webp&s=fdcc709fd5fff0eb68929fc332033a6b81388a9a', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=216&crop=smart&format=pjpg&auto=webp&s=d85f783a3cf2053313abc1dba6ce5e90b10a3a71', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=320&crop=smart&format=pjpg&auto=webp&s=5a934a57f2322593241bab268f115fe72c5fd83d', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=640&crop=smart&format=pjpg&auto=webp&s=b77e7138e24d8f2fc22096a9784aaed36d1f532f', 'width': 640}, {'height': 526, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=960&crop=smart&format=pjpg&auto=webp&s=f96ce3117c331a01f55ed70c6d520cfa3297df30', 'width': 960}, {'height': 592, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=52fd95d5411f6066ff9129057410ca7b6b8f104a', 'width': 1080}], 'source': {'height': 1368, 'url': 'https://external-preview.redd.it/cjl0NGVwNTJwOHNlMcYNeeStsI4th9K4vfQkpXTEQka5SvAFbcRXwVJ4maQB.png?format=pjpg&auto=webp&s=25c148762fd281ddeef3cd161a64a87302239e27', 'width': 2494}, 'variants': {}}]}
|
||
Question: has anyone tried to use an AMD 5800G chip with Llama?
| 1 |
[removed]
| 2025-04-01T15:20:12 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1joy7pi
| false | null |
t3_1joy7pi
|
/r/LocalLLaMA/comments/1joy7pi/question_any_are_try_to_use_a_amd_5800g_chip_with/
| false | false |
default
| 1 | null |
||
AMD G chip
| 1 |
[removed]
| 2025-04-01T15:23:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1joyadv/amd_g_chip/
|
Ok-Consequence2625
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1joyadv
| false | null |
t3_1joyadv
|
/r/LocalLLaMA/comments/1joyadv/amd_g_chip/
| false | false |
self
| 1 | null |
GemmaCoder3-12b: Fine-Tuning Gemma 3 for Code Reasoning
| 66 | 2025-04-01T15:32:33 |
https://huggingface.co/blog/burtenshaw/google-gemma3-gemma-code
|
Recoil42
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1joyigi
| false | null |
t3_1joyigi
|
/r/LocalLLaMA/comments/1joyigi/gemmacoder312b_finetuning_gemma_3_for_code/
| false | false | 66 |
{'enabled': False, 'images': [{'id': 'hqleuLnZx25ihRlgchlPitKj4VNT-aVkrmu8XQjEcu4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=108&crop=smart&auto=webp&s=67761e8244336c7d26b3a20a394d541e91b4179f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=216&crop=smart&auto=webp&s=cb2dd425be4cb272fe2e9cdbf1c9ad0f6fe3ef98', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=320&crop=smart&auto=webp&s=ddf44e6c161cfb75dbd9a6bc2cf41f388b142dd8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=640&crop=smart&auto=webp&s=2914dd3fff11503f8f5f868b03abfbe2d8a5ee73', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=960&crop=smart&auto=webp&s=2d978d963ae6174263cb90a51c8df944b585abd8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?width=1080&crop=smart&auto=webp&s=4ef19f4349c0594ae7cf105cbda76d49bdf4be0e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gH2ta8Ny0Bg1Qm8qZdfZlafv4Sz_L1pzxh-y3yKJtZ8.jpg?auto=webp&s=1c9eaf337f4778e7181b9328e194e0502593b44a', 'width': 1200}, 'variants': {}}]}
|
||
New GGUF quants of V3-0324
| 137 |
I cooked up these fresh new quants on [ikawrakow/ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp), supporting 32k+ context in under 24GB VRAM with MLA, keeping the highest-quality tensors for attention, dense layers, and shared experts.
Good both for CPU+GPU or CPU-only rigs, with optimized repacked quant flavours to get the most out of your RAM.
*NOTE*: These quants only work with the `ik_llama.cpp` fork and won't work with mainline llama.cpp, ollama, LM Studio, koboldcpp, etc.
Shout out to [level1techs](https://www.youtube.com/c/level1techs) for supporting this research on some sweet hardware rigs!
| 2025-04-01T15:35:49 |
https://huggingface.co/ubergarm/DeepSeek-V3-0324-GGUF
|
VoidAlchemy
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1joyl9t
| false | null |
t3_1joyl9t
|
/r/LocalLLaMA/comments/1joyl9t/new_gguf_quants_of_v30324/
| false | false | 137 |
{'enabled': False, 'images': [{'id': 'hKF3cwlecrgDOkn1Sq5Bri0kJja3CyikDzQM_dQ7Rsg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=108&crop=smart&auto=webp&s=0bd7a6326e1fdb32a816a623b21bd3ccd278695e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=216&crop=smart&auto=webp&s=7cfd7299bde04c95ff4e395ff40a42364b29866f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=320&crop=smart&auto=webp&s=cdce738aec6027211d06366b957d1b218a582eb9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=640&crop=smart&auto=webp&s=bb260f4a149e4b5107b97b86ee6df9cf84939894', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=960&crop=smart&auto=webp&s=fd625d9f1f7c34c8d7d26edde15114f6e92513bb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?width=1080&crop=smart&auto=webp&s=b076d4e696f5707e19e55b2f036f92e5da7f0a92', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VVDuLhNJdXUv9Ha7btms0J33I6ffqYD7axOIbyejSC4.jpg?auto=webp&s=5d8056ff4b07acd2a2749772608fd80fd2f64228', 'width': 1200}, 'variants': {}}]}
|
|
EPYC SP5 for LLM Inference
| 1 |
[removed]
| 2025-04-01T16:14:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jozks4/epyc_sp5_for_llm_inference/
|
Dull-Breadfruit-3241
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jozks4
| false | null |
t3_1jozks4
|
/r/LocalLLaMA/comments/1jozks4/epyc_sp5_for_llm_inference/
| false | false |
self
| 1 | null |
Claude 3.7 Thinker
| 0 |
I know this is not a new model, nor local, but after hearing so many people recommend it for coding I finally gave it a test run. And oh my… I wish I had done it sooner.
It is just unbelievably more functional and capable. Even small things like designing the UI and adding small features are unmatched by anything I've ever used. It feels like I have a programming engineer in a box.
(I haven’t used it for anything else other than some work tasks and such so I can’t comment on anything else other than coding.)
So if you have been putting off trying it for coding, it’s definitely worth a try.
| 2025-04-01T16:25:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jozttw/claude_37_thinker/
|
LostMyOtherAcct69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jozttw
| false | null |
t3_1jozttw
|
/r/LocalLLaMA/comments/1jozttw/claude_37_thinker/
| false | false |
self
| 0 | null |
Best llm for Converting Angular to React
| 0 |
Hello team, I have a huge project that should convert millions of lines of Angular code to React with minimal human modification and bugfixing. Which local LLM do you think fits this objective best?
| 2025-04-01T16:49:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp0f8v/best_llm_for_converting_angular_to_react/
|
realkandyman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp0f8v
| false | null |
t3_1jp0f8v
|
/r/LocalLLaMA/comments/1jp0f8v/best_llm_for_converting_angular_to_react/
| false | false |
self
| 0 | null |
Dual RTX 5090 Beats $25,000 H100 in Real-World LLM Performance
| 0 | 2025-04-01T16:58:06 |
https://www.hardware-corner.net/dual-rtx-5090-vs-h100-for-llm/
|
FaatmanSlim
|
hardware-corner.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp0n7e
| false | null |
t3_1jp0n7e
|
/r/LocalLLaMA/comments/1jp0n7e/dual_rtx_5090_beats_25000_h100_in_realworld_llm/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'D7IJ67y1PBXN647Oc3s8Zv5zsSwGtA6vWo7KVzfNQ6s', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?width=108&crop=smart&auto=webp&s=97d0357128dda2644934c01516397a164ee45d43', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?width=216&crop=smart&auto=webp&s=51d230aee999d278b5d6d21c81c1660508370269', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?width=320&crop=smart&auto=webp&s=2a8ff26e5553d3f026a6ae0d0645e28f7dc7931a', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?width=640&crop=smart&auto=webp&s=dad557af418ee00fa6765aa230f75fd542fac87e', 'width': 640}, {'height': 529, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?width=960&crop=smart&auto=webp&s=44300d30c316ed356a68a45a9db93cebcb68abfa', 'width': 960}], 'source': {'height': 565, 'url': 'https://external-preview.redd.it/mijLTRSfC5i6u0aAMRpBMrFwyAduac3wYhQEpbt0_2I.jpg?auto=webp&s=f8388727f7b95ce7aba450dcb8d6a7610a2a16b7', 'width': 1024}, 'variants': {}}]}
|
||
Structured output with long PDFs
| 1 |
[removed]
| 2025-04-01T16:59:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp0o5y/structured_output_with_long_pdfs/
|
Enough_Influence_684
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp0o5y
| false | null |
t3_1jp0o5y
|
/r/LocalLLaMA/comments/1jp0o5y/structured_output_with_long_pdfs/
| false | false |
self
| 1 | null |
We've built a JFK RAG Arena!
| 2 | 2025-04-01T17:06:07 |
Less_Potential386
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp0ult
| false | null |
t3_1jp0ult
|
/r/LocalLLaMA/comments/1jp0ult/weve_built_a_jfk_rag_arena/
| false | false | 2 |
{'enabled': True, 'images': [{'id': '9-g4l0ndSokh9FQyO2LTUydw2xTEUxL6GhjTVFkKFUo', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=108&crop=smart&auto=webp&s=43154f7be9d5d6bfc9d2d254d6a5e5cea1f61ec0', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=216&crop=smart&auto=webp&s=52e5cd883c6fe1846d2d7301f4c6ed0f3aead5cd', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=320&crop=smart&auto=webp&s=cf393162f024078acb6231c6d597f1fefa36806c', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=640&crop=smart&auto=webp&s=551277da9a159f6178b4b2ce0b2f2b55aa6d7cf5', 'width': 640}, {'height': 514, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=960&crop=smart&auto=webp&s=0660961347d9dae982507f6eaf116ca972584424', 'width': 960}, {'height': 578, 'url': 'https://preview.redd.it/9sjammea99se1.png?width=1080&crop=smart&auto=webp&s=a58803efad97a545478b5e0597f4d63e1fe7780b', 'width': 1080}], 'source': {'height': 748, 'url': 'https://preview.redd.it/9sjammea99se1.png?auto=webp&s=e6dfaa94514627367d0884356b789743ea4318bf', 'width': 1397}, 'variants': {}}]}
|
|||
LLM with structured output on long context
| 1 |
[removed]
| 2025-04-01T17:06:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp0uyx/llm_with_structured_output_on_long_context/
|
Enough_Influence_684
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp0uyx
| false | null |
t3_1jp0uyx
|
/r/LocalLLaMA/comments/1jp0uyx/llm_with_structured_output_on_long_context/
| false | false |
self
| 1 | null |
DeepMind will delay sharing research to remain competitive
| 569 |
A [recent report](https://archive.ph/tkuum) in the Financial Times claims that Google's DeepMind "has been holding back the release of its world-renowned research" to remain competitive. Accordingly, the company will adopt a six-month embargo policy "before strategic papers related to generative AI are released".
In an interesting statement, a DeepMind researcher said he could "not imagine us putting out the transformer papers for general use now". Considering the impact of DeepMind's transformer research on the development of LLMs, just think where we would be now if they had held back that research. The report also claims that some DeepMind staff left the company because their careers would be negatively affected if they were not allowed to publish their research.
I don't have any knowledge about the current impact of DeepMind's open research contributions. But just a couple of months ago we were talking about the potential contributions the DeepSeek release would make. As things get competitive, it looks like the big players are slowly becoming ~~Open~~ClosedAIs.
Too bad, let's hope that this won't turn into a general trend.
| 2025-04-01T17:17:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp1555/deepmind_will_delay_sharing_research_to_remain/
|
mayalihamur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp1555
| false | null |
t3_1jp1555
|
/r/LocalLLaMA/comments/1jp1555/deepmind_will_delay_sharing_research_to_remain/
| false | false |
self
| 569 |
{'enabled': False, 'images': [{'id': 'xFzNea-wfLdeY8nQYlmvy7ExUiqUMwR5qjfz-uU0e6o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?width=108&crop=smart&auto=webp&s=345779d2727b483bcc5c2b681fe828f40f2e3adf', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?width=216&crop=smart&auto=webp&s=34a1a7736980717648d7d4c42e0f5e80793088b5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?width=320&crop=smart&auto=webp&s=216d3d18feabd74d6e9fa5daf50ffda1b0b2246f', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?width=640&crop=smart&auto=webp&s=45d9eefb390a560e6c9820f4ea743463fb2baf5a', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?width=960&crop=smart&auto=webp&s=bb216620bac02881010ab1ea7df11a29944e5d38', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/1ZlRvQVd_wCzml6kDmIqqwfFK3kYlx5qDjXruEcg67w.jpg?auto=webp&s=87d255b37f69eaab27f4789d992a8d16c304b206', 'width': 1024}, 'variants': {}}]}
|
Download Fails (Official Instructions)
| 0 | 2025-04-01T17:30:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp1gjy/download_fails_official_instructions/
|
NoisyJalapeno
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp1gjy
| false | null |
t3_1jp1gjy
|
/r/LocalLLaMA/comments/1jp1gjy/download_fails_official_instructions/
| false | false | 0 | null |
||
Smallest model capable of detecting profane/nsfw language?
| 7 |
Hi all,
I have my first ever steam game about to be released in a week which I couldn't be more excited/nervous about. It is a singleplayer game but I have a global chat that allows people to talk to other people playing. It's a space game, and space is lonely, so I thought that'd be a fun aesthetic.
Anyways, it is in the beta-testing phase right now, and I had to ban someone for the first time today because of things they were saying over chat. It was a manual process, and I'd like to automate the detection/flagging of unsavory messages.
Are <1B-parameter models capable of outperforming a simple keyword check? I like the idea of an LLM because it could go beyond matching strings.
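For reference, the keyword baseline a small LLM would have to beat is only a few lines. A minimal sketch follows; the wordlist is a placeholder, not a real banlist:

```python
# Minimal keyword-check baseline for chat moderation - the bar a small
# LLM would need to clear. The wordlist here is a placeholder.
import re

BANNED = {"slur1", "slur2", "profanity1"}  # placeholder terms

def flag_message(text: str) -> bool:
    """Return True if any banned word appears as a whole token."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(tok in BANNED for tok in tokens)

print(flag_message("hello space traveler"))  # False
```

A small LLM earns its keep if it catches the obfuscations (leetspeak, spacing tricks, innuendo) that a whole-token match like this misses.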
Also, if anyone is interested in trying it out, I'm handing out keys like crazy because I'm too nervous to charge $2.99 for the game and then underdeliver. Game info [here](https://x.com/hvent90), sorry for the self-promo.
| 2025-04-01T17:44:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp1sy8/smallest_model_capable_of_detecting_profanensfw/
|
ohcrap___fk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp1sy8
| false | null |
t3_1jp1sy8
|
/r/LocalLLaMA/comments/1jp1sy8/smallest_model_capable_of_detecting_profanensfw/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'j8ZR3G0ughDv-MC0qOYtZ-8_b_iU6PKwDYqd_YLxPgo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hdOluWd_x_JgyRadjDb_9bEaLy_MWgZphmvJZfzNXWo.jpg?width=108&crop=smart&auto=webp&s=08da0a9b05d671fc476aae61cfe48f606dcf643f', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/hdOluWd_x_JgyRadjDb_9bEaLy_MWgZphmvJZfzNXWo.jpg?auto=webp&s=abf2b70dfed9aed9709844cf8248ff7f0ccb141e', 'width': 200}, 'variants': {}}]}
|
DeepSeek 3FS: non-RDMA install, faster ecosystem app dev/testing.
| 0 | 2025-04-01T17:51:59 |
https://blog.open3fs.com/2025/04/01/deepseek-3fs-non-rdma-install-faster-ecosystem-app-dev-testing.html
|
HardCore_Dev
|
blog.open3fs.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp1zp0
| false | null |
t3_1jp1zp0
|
/r/LocalLLaMA/comments/1jp1zp0/deepseek_3fs_nonrdma_install_faster_ecosystem_app/
| false | false |
default
| 0 | null |
|
Gemini 2.5 pro exp 03-25
| 1 |
[removed]
| 2025-04-01T18:16:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp2mzk/gemini_25_pro_exp_0325/
|
Honesty_8941526
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp2mzk
| false | null |
t3_1jp2mzk
|
/r/LocalLLaMA/comments/1jp2mzk/gemini_25_pro_exp_0325/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'j49-qG8uJTHm-tT-t7mjXAGu4nxIF1SBpF5pmte4gaE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nobQpOiRFkYFUe0JMenSuA-Whf0Fr3a830r0rUlNAkw.jpg?width=108&crop=smart&auto=webp&s=4ee48d6b7c4ff92ff4cff9e0cff9b76b9a8a5d64', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nobQpOiRFkYFUe0JMenSuA-Whf0Fr3a830r0rUlNAkw.jpg?width=216&crop=smart&auto=webp&s=4d53b75b57782bccffdded8033be3a05a6648f4f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nobQpOiRFkYFUe0JMenSuA-Whf0Fr3a830r0rUlNAkw.jpg?width=320&crop=smart&auto=webp&s=052b235e39fd32831add9b29395485b94a0e4a23', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nobQpOiRFkYFUe0JMenSuA-Whf0Fr3a830r0rUlNAkw.jpg?auto=webp&s=00dbd532fc0fd4d964518e2d5bfbfa48a98ed720', 'width': 480}, 'variants': {}}]}
|
CloudGPU + Ollama vs. Front End Browser
| 1 |
[removed]
| 2025-04-01T18:22:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp2rmm/cloudgpu_ollama_vs_front_end_browser/
|
utility-only
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp2rmm
| false | null |
t3_1jp2rmm
|
/r/LocalLLaMA/comments/1jp2rmm/cloudgpu_ollama_vs_front_end_browser/
| false | false |
self
| 1 | null |
What AI models have you built for personal use?
| 1 |
[removed]
| 2025-04-01T18:31:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp30go/what_ai_models_have_you_built_for_personal_use/
|
angelayeyang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp30go
| false | null |
t3_1jp30go
|
/r/LocalLLaMA/comments/1jp30go/what_ai_models_have_you_built_for_personal_use/
| false | false |
self
| 1 | null |
I dove into MCP and how it can benefit from orchestration frameworks!
| 3 |
Spent some time writing about MCP (Model Context Protocol) and how it lets LLMs talk to any tool through one standardized interface (like the Babel Fish in The Hitchhiker's Guide to the Galaxy).
Here's the synergy:
* **MCP:** Handles the *standardized communication* with any tool.
* **Orchestration:** Manages the agent's *internal plan/logic* – deciding *when* to use MCP, process data, or take other steps.
Together, you can build more complex, tool-using agents!
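To make the split concrete, here's a minimal sketch. The MCP client is mocked, so the tool name and call signature are illustrative assumptions rather than the real MCP SDK surface; the point is where the boundary sits:

```python
# Minimal sketch of the MCP/orchestration split. The MCP client below is
# mocked -- the tool name and call signature are illustrative assumptions,
# not the real MCP SDK API.
from dataclasses import dataclass

@dataclass
class MCPClient:
    """Stands in for the MCP layer: standardized communication with tools."""
    def call_tool(self, name: str, arguments: dict) -> str:
        # Real code would send a JSON-RPC request to an MCP server here.
        return f"[{name} result for {arguments}]"

def plan(task: str) -> list[dict]:
    # A real orchestrator would have an LLM produce this plan.
    return [
        {"needs_tool": False, "thought": f"break down '{task}'"},
        {"needs_tool": True, "tool": "web_search", "args": {"query": task}},
    ]

def run_agent(task: str, client: MCPClient) -> str:
    """Orchestration layer: owns the plan and decides *when* to use MCP."""
    context = []
    for step in plan(task):
        if step["needs_tool"]:
            # External work is delegated to the MCP layer...
            context.append(client.call_tool(step["tool"], step["args"]))
        else:
            # ...while pure reasoning stays inside the orchestrator.
            context.append(f"(reasoned about: {step['thought']})")
    return "\n".join(context)

if __name__ == "__main__":
    print(run_agent("summarize MCP adoption", MCPClient()))
```

The design point: the orchestrator never cares *how* a tool is reached, and the MCP layer never cares *why* it was called.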
Attaching a link to the blog [here](https://theaiworld.substack.com/p/the-ai-babel-fish-mcp-pocketflow?r=65jr5&utm_campaign=post&utm_medium=web&triedRedirect=true). Would love your thoughts.
| 2025-04-01T18:33:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp31pu/i_dove_into_mcp_and_how_it_can_benefit_from/
|
ComfortableArm121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp31pu
| false | null |
t3_1jp31pu
|
/r/LocalLLaMA/comments/1jp31pu/i_dove_into_mcp_and_how_it_can_benefit_from/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'evzCj2lExEhgLPsIxFPhF3AwhSRUrWdxCIguZFMlWMw', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?width=108&crop=smart&auto=webp&s=5de24cd439989c406fd925a3ad02b0cdf8fabde2', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?width=216&crop=smart&auto=webp&s=047d0d90c270187b0e6c85959590bac968fe49c0', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?width=320&crop=smart&auto=webp&s=cb7b7a8eb99654333e67ab9eaf1c35e86f965390', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?width=640&crop=smart&auto=webp&s=378f1b34f98feb965ab874dd9852ed37ce560ef3', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?width=960&crop=smart&auto=webp&s=2cf34ea9367e134d87a68366f0a4f7e787d0a463', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1Iq0fYae4z6YNo-kHIaDNjr8uv_SkhK23nnHDRRZOXM.jpg?auto=webp&s=61b5830f3710f0d6fba30bd5980b089e877c1a85', 'width': 1024}, 'variants': {}}]}
|
Open Deep Research - build 20+ page reports, run locally or using online models
| 1 |
[removed]
| 2025-04-01T18:42:06 |
https://github.com/qx-labs/agents-deep-research
|
TheRedfather
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp39vb
| false | null |
t3_1jp39vb
|
/r/LocalLLaMA/comments/1jp39vb/open_deep_research_build_20_page_reports_run/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'bzyCJCKiOfri7_EDQ1VBj4tZnrBAJg7MCZUd_sJit5E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=108&crop=smart&auto=webp&s=a3a02e163f562db76fce48ad44d99231b9b8f682', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=216&crop=smart&auto=webp&s=2ed074ea112d7b7e38a0e1f356e925595100d487', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=320&crop=smart&auto=webp&s=b20f7dafb87f8a6e9b5541006123ab95b634578d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=640&crop=smart&auto=webp&s=b12d17dbfdf55817865e34b7c85629f48f3e8468', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=960&crop=smart&auto=webp&s=04021d6b05ae183abd68a68ff4ff4bea8b91f076', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=1080&crop=smart&auto=webp&s=2e64814f0ced6bba269356657af6f831a26d7c50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?auto=webp&s=fbb772859b041fcf25c53e40d8f7e198d2c04952', 'width': 1200}, 'variants': {}}]}
|
|
What AI models have you built for personal use?
| 1 |
[removed]
| 2025-04-01T18:46:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp3dqn/what_ai_models_have_you_built_for_personal_use/
|
pantsareprisons
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp3dqn
| false | null |
t3_1jp3dqn
|
/r/LocalLLaMA/comments/1jp3dqn/what_ai_models_have_you_built_for_personal_use/
| false | false |
self
| 1 | null |
Any solid alternatives to OpenAI’s Deep Research Agent with API access or local deployment support that doesn't suck?
| 1 |
[removed]
| 2025-04-01T18:52:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp3j1i/any_solid_alternatives_to_openais_deep_research/
|
vishwa1238
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp3j1i
| false | null |
t3_1jp3j1i
|
/r/LocalLLaMA/comments/1jp3j1i/any_solid_alternatives_to_openais_deep_research/
| false | false |
self
| 1 | null |
Any solid alternatives to OpenAI’s Deep Research Agent with API access or local deployment support that doesn't suck?
| 1 |
[removed]
| 2025-04-01T18:57:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp3n4f/any_solid_alternatives_to_openais_deep_research/
|
vishwa1238
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp3n4f
| false | null |
t3_1jp3n4f
|
/r/LocalLLaMA/comments/1jp3n4f/any_solid_alternatives_to_openais_deep_research/
| false | false |
self
| 1 | null |
Open Deep Research - produces 20+ page reports using local or online models
| 1 |
[removed]
| 2025-04-01T19:03:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp3th4/open_deep_research_produces_20_page_reports_using/
|
TheRedfather
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp3th4
| false | null |
t3_1jp3th4
|
/r/LocalLLaMA/comments/1jp3th4/open_deep_research_produces_20_page_reports_using/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'bzyCJCKiOfri7_EDQ1VBj4tZnrBAJg7MCZUd_sJit5E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=108&crop=smart&auto=webp&s=a3a02e163f562db76fce48ad44d99231b9b8f682', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=216&crop=smart&auto=webp&s=2ed074ea112d7b7e38a0e1f356e925595100d487', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=320&crop=smart&auto=webp&s=b20f7dafb87f8a6e9b5541006123ab95b634578d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=640&crop=smart&auto=webp&s=b12d17dbfdf55817865e34b7c85629f48f3e8468', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=960&crop=smart&auto=webp&s=04021d6b05ae183abd68a68ff4ff4bea8b91f076', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?width=1080&crop=smart&auto=webp&s=2e64814f0ced6bba269356657af6f831a26d7c50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WnQCiz62boEbqU7SdmUk0rY8UMUCAZD2srEfLBHV4FU.jpg?auto=webp&s=fbb772859b041fcf25c53e40d8f7e198d2c04952', 'width': 1200}, 'variants': {}}]}
|
Google Code Assist Prompt (Maybe?)
| 1 |
[removed]
| 2025-04-01T19:19:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jp477q/google_code_assist_prompt_maybe/
|
Alert_Anything_6325
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jp477q
| false | null |
t3_1jp477q
|
/r/LocalLLaMA/comments/1jp477q/google_code_assist_prompt_maybe/
| false | false |
self
| 1 | null |