title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
So, Quasar Alpha might actually be OpenAI's model
| 180 | 2025-04-10T16:58:12 |
-Cacique
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2tbk
| false | null |
t3_1jw2tbk
|
/r/LocalLLaMA/comments/1jw2tbk/so_quasar_alpha_might_actually_be_openais_model/
| false | false | 180 |
{'enabled': True, 'images': [{'id': 'jEKqkSlF1HPSrMOYpepZN-neaiszQrWzMPNLVvRdcuE', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/3xztmaoxf1ue1.png?width=108&crop=smart&auto=webp&s=221ec26e51c97879c6b9d87d74c9da867e597a27', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/3xztmaoxf1ue1.png?width=216&crop=smart&auto=webp&s=2df903d97bd3a7fb07a2e78ff09a73ac14533079', 'width': 216}, {'height': 285, 'url': 'https://preview.redd.it/3xztmaoxf1ue1.png?width=320&crop=smart&auto=webp&s=8adc799a917013ccb0e68e9c4a9f2e740c253da1', 'width': 320}, {'height': 570, 'url': 'https://preview.redd.it/3xztmaoxf1ue1.png?width=640&crop=smart&auto=webp&s=0979fac46ea6a31825df6db93efc19bf9cce1a9b', 'width': 640}], 'source': {'height': 666, 'url': 'https://preview.redd.it/3xztmaoxf1ue1.png?auto=webp&s=7cf4a39df6b89775ae9639db15d8b39c770bb815', 'width': 747}, 'variants': {}}]}
|
|||
Anyone running ollama with github copilot?
| 8 |
What model are you using?
| 2025-04-10T16:58:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw2tpg/anyone_running_ollama_with_github_copilot/
|
salvadorabledali
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw2tpg
| false | null |
t3_1jw2tpg
|
/r/LocalLLaMA/comments/1jw2tpg/anyone_running_ollama_with_github_copilot/
| false | false |
self
| 8 | null |
New OpenRouter stealth model has the same Chinese tokenizer bug - likely another OpenAI model
| 16 |
OpenRouter has released a second stealth model, **optimus-alpha**. After testing, I found this new model still has the same bug as before. You can find the same issue and an explanation of this bug in [my previous post](https://www.reddit.com/r/LocalLLaMA/comments/1jrd0a9/chinese_response_bug_in_tokenizer_suggests/).
[Still Unfixed](https://preview.redd.it/wvw8ir7jg1ue1.png?width=2384&format=png&auto=webp&s=4d804b76708a7077eb229f6b819a8aafe094de17)
btw, Sam Altman today replied in a [Twitter thread](https://x.com/sama/status/1910363838001869199) with:
>"quasars are very bright things!"
This hints that the previous model came from OpenAI.
| 2025-04-10T17:09:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw33gx/new_openrouter_stealth_model_has_the_same_chinese/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw33gx
| false | null |
t3_1jw33gx
|
/r/LocalLLaMA/comments/1jw33gx/new_openrouter_stealth_model_has_the_same_chinese/
| false | false | 16 | null |
|
Best AI models/tools/services to translate documents?
| 4 |
Just looking for models/tools/services that others have tried for the use case of translating (markdown) documents.
Any recommendations?
| 2025-04-10T17:19:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw3c0s/best_ai_modelstoolsservices_to_translate_documents/
|
punkpeye
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3c0s
| false | null |
t3_1jw3c0s
|
/r/LocalLLaMA/comments/1jw3c0s/best_ai_modelstoolsservices_to_translate_documents/
| false | false |
self
| 4 | null |
B200 vs H100 Training Benchmark: Up to 57% Faster Throughput
| 30 | 2025-04-10T17:33:26 |
https://www.lightly.ai/blog/nvidia-b200-vs-h100
|
igorsusmelj
|
lightly.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3olo
| false | null |
t3_1jw3olo
|
/r/LocalLLaMA/comments/1jw3olo/b200_vs_h100_training_benchmark_up_to_57_faster/
| false | false |
default
| 30 | null |
|
Future of voice AI models?
| 1 |
[removed]
| 2025-04-10T17:35:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw3qn6/future_of_voice_ai_models/
|
Acceptable_Gain7192
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3qn6
| false | null |
t3_1jw3qn6
|
/r/LocalLLaMA/comments/1jw3qn6/future_of_voice_ai_models/
| false | false |
self
| 1 | null |
llama 4 trained on depseek?
| 2 |
I'm using one of unsloth quants.
When I speak with it in my native language (polish) it throws occasional chinese characters. What's odd it happens usually on numbers or markdown elements.
Was it stated anywhere that they have used depseek?
| 2025-04-10T17:38:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw3t2c/llama_4_trained_on_depseek/
|
kweglinski
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3t2c
| false | null |
t3_1jw3t2c
|
/r/LocalLLaMA/comments/1jw3t2c/llama_4_trained_on_depseek/
| false | false |
self
| 2 | null |
Is there somebody know deepseek r 2 release date?
| 1 |
[removed]
| 2025-04-10T17:42:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw3w21/is_there_somebody_know_deepseek_r_2_release_date/
|
Unusual-Citron490
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3w21
| false | null |
t3_1jw3w21
|
/r/LocalLLaMA/comments/1jw3w21/is_there_somebody_know_deepseek_r_2_release_date/
| false | false |
self
| 1 | null |
I have 150 USD budget for LLM interfere benchmarking. How should I use
| 2 |
I am working on a project with local LLM (72B) model. Sofar I have used ollama and llama.cpp to inference in A6000 GPU, the performance is not that great. I tried to run using VLLM, but got out of memory error.
I am looking to benchmark on different GPUs, preferably EC2 instance. I want to which one should I try and what kind of benchmarkings I can run.
At present I tried to measure time to generate data 2 sentence response, 20 sentence response and 200 sentence response.
| 2025-04-10T17:43:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw3x5a/i_have_150_usd_budget_for_llm_interfere/
|
Ahmad401
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3x5a
| false | null |
t3_1jw3x5a
|
/r/LocalLLaMA/comments/1jw3x5a/i_have_150_usd_budget_for_llm_interfere/
| false | false |
self
| 2 | null |
NVIDIA DGX Spark - 4TB - $3,999
| 1 |
[removed]
| 2025-04-10T17:44:37 |
SeanP_AI
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw3ycr
| false | null |
t3_1jw3ycr
|
/r/LocalLLaMA/comments/1jw3ycr/nvidia_dgx_spark_4tb_3999/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Fc5Lq-Vjz7mvHEjZq8LvUkg7T-bU5-oMUKyCu1KRkKg', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=108&crop=smart&auto=webp&s=3aa5e222be945e473593a55ce9eff7e39ce5bccd', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=216&crop=smart&auto=webp&s=dd3ec0b81d1adb45608072800696251b4ee40bc2', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=320&crop=smart&auto=webp&s=f0ac8d15eedbadc1294f6429b00ead2b242381b5', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=640&crop=smart&auto=webp&s=ed3be8b668842da52e38bd50b2ed3cdf25327280', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=960&crop=smart&auto=webp&s=7e0032aa621e2f04863b5623b6636c3385fa8f61', 'width': 960}, {'height': 561, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?width=1080&crop=smart&auto=webp&s=2c8da5a1241b0db988c5852a923aed3ceea943c6', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/ta5ayo6do1ue1.png?auto=webp&s=61d1dcf9d274ebefe98c43b438cac732f7589a8b', 'width': 2954}, 'variants': {}}]}
|
||
Running expressive AI voice locally for $1/hr — Orpheus TTS w/ sub-250ms latency
| 1 |
[removed]
| 2025-04-10T18:04:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw4g4u/running_expressive_ai_voice_locally_for_1hr/
|
gamechefio
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw4g4u
| false | null |
t3_1jw4g4u
|
/r/LocalLLaMA/comments/1jw4g4u/running_expressive_ai_voice_locally_for_1hr/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nN_Tmf2jyJSDv3rf1j9cU5lDvJOTSis4mokIVhIFhWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=108&crop=smart&auto=webp&s=5a5e7dab148c93ac1c972bbed122d0aef3cd83d2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=216&crop=smart&auto=webp&s=d70094601e9d602b8262006a970cd7ba2b31601b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=320&crop=smart&auto=webp&s=82fbeea3f7ed698e525e86195ae0bf10f91bec34', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=640&crop=smart&auto=webp&s=d7b8358d4645828f8f481b4b10f5fb47b8c67ec8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=960&crop=smart&auto=webp&s=b87d72b75a4dd1be8032866774dda7f871c92756', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?width=1080&crop=smart&auto=webp&s=110f5576907678fb0046e6259bdee33fe9d5555a', 'width': 1080}], 'source': {'height': 806, 'url': 'https://external-preview.redd.it/dBf4kGzZbdfQ2ZzxU-Okb5AOvIFm_5vkxBNmFuNW-r4.jpg?auto=webp&s=4515fd17ba92f8984d5187337079c3d6cbcfcfcd', 'width': 1535}, 'variants': {}}]}
|
Suggestions on for an uncensored LLM with Vision and image generation support?
| 0 |
I normally don't mess with LLM's so I am not up to speed on any of the latest models, forks, and releases.
I am looking for an uncensored LLM that I can run locally (1x 3090), that supports vision for image processing and imaging generation / modification.
Examples, make this car blue instead of red. Make this person skinnier, show this person with a beard. etc.
| 2025-04-10T18:14:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw4p42/suggestions_on_for_an_uncensored_llm_with_vision/
|
DataGOGO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw4p42
| false | null |
t3_1jw4p42
|
/r/LocalLLaMA/comments/1jw4p42/suggestions_on_for_an_uncensored_llm_with_vision/
| false | false |
self
| 0 | null |
Fine-Tuning Llama 4: A Guide With Demo Project
| 15 |
In this blog, I will show you how to fine-tune Llama 4 Scout for just $10 using the RunPod platform. You will learn:
1. How to set up RunPod and create a multi-GPU pod
2. How to load the model and tokenizer
3. How to prepare and process the dataset
4. How to set up the trainer and test the model
5. How to compare models
6. How to save the model to the Hugging Face repository
| 2025-04-10T18:17:23 |
https://www.datacamp.com/tutorial/fine-tuning-llama-4
|
kingabzpro
|
datacamp.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw4rag
| false | null |
t3_1jw4rag
|
/r/LocalLLaMA/comments/1jw4rag/finetuning_llama_4_a_guide_with_demo_project/
| false | false |
default
| 15 | null |
What is the best scalable scraper tool right now? Firecrawl is great, but I want to explore more options
| 1 |
I’ve been using Firecrawl lately (which is great), but I’m more curious what others are using right now for a scalable scraping like large sites or dynamic contents . I am familiar with the old-school BeautifulSoup/Selenium way but i kind of feel left out on a reliable scrapper tool.
Are there any newer frameworks or scrapers that stand out right now?
Would love some recommendation or experiences.
| 2025-04-10T18:23:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw4wkb/what_is_the_best_scalable_scraper_tool_right_now/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw4wkb
| false | null |
t3_1jw4wkb
|
/r/LocalLLaMA/comments/1jw4wkb/what_is_the_best_scalable_scraper_tool_right_now/
| false | false |
self
| 1 | null |
What is the best scraper tool right now? Firecrawl is great, but I want to explore more options
| 26 |
I’ve been using Firecrawl lately (which is great), but I’m more curious what others are using right now for a scalable scraping like large sites or dynamic contents . I am familiar with the old-school BeautifulSoup/Selenium way but i kind of feel left out on a reliable scrapper tool.
Are there any newer frameworks or scrapers that stand out right now?
Would love to hear some recommendation or experiences.
| 2025-04-10T18:25:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw4yqv/what_is_the_best_scraper_tool_right_now_firecrawl/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw4yqv
| false | null |
t3_1jw4yqv
|
/r/LocalLLaMA/comments/1jw4yqv/what_is_the_best_scraper_tool_right_now_firecrawl/
| false | false |
self
| 26 | null |
Optimus Alpha released
| 1 |
[removed]
| 2025-04-10T18:40:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw5bic/optimus_alpha_released/
|
fiftyJerksInOneHuman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw5bic
| false | null |
t3_1jw5bic
|
/r/LocalLLaMA/comments/1jw5bic/optimus_alpha_released/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ohSs-BcRZrN3Qw6eTV57UPcAB3QtlzblPLdM_8bqsb4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=108&crop=smart&auto=webp&s=18fb75ecfc5ac8b1494bce3ba0a20cdd39bc0bbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=216&crop=smart&auto=webp&s=2a012c1238259a89ddcb528c2bcef4987809dff6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=320&crop=smart&auto=webp&s=f2f23f1a70c55a262ef52172046ea16109a6b406', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=640&crop=smart&auto=webp&s=3b682c4f5e6cd4068630df47b6f48a33d95cc272', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=960&crop=smart&auto=webp&s=eb1f453527dc395f134f8d0f9ec26ff2eea85b17', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?width=1080&crop=smart&auto=webp&s=e6939f3928099dfe16d9371d01cd8e9878a613ef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5QNRFxErrMXxEaKMRT1EIpdxKmOqegWZpZImNNkU4Bc.jpg?auto=webp&s=b711216ca0905a02a8b72af128813e2ca7ad59d5', 'width': 1200}, 'variants': {}}]}
|
Ollama not using GPU, need help.
| 2 |
So I've been running models locally on my 7900GRE machine, and they were working fine, so I decided to try getting small models working on my laptop (which is pretty old). I updated my CUDA drivers, and my graphics drivers. I installed ollama and gemma3:4b because I only have 4GB VRAM, and it should fit, but it was only running on my CPU and integrated graphics (the GPU utilization in the nvidia control panel wasn't spiking), so I tried the 1b model, and even that didn't use my GPU. I tried disabling the integrated graphics, and it ran even slower, so I knew that it was using that at least, but I don't know why it's not using my GPU. any idea what I can do? should I try running the linux ollama through wsl2 or something? Is this even possible?
For context the laptop specs are : CPU-intel xeon E3 v5, GPU-Nvidia Quadro M2200, 64GB RAM.
| 2025-04-10T18:53:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw5m8k/ollama_not_using_gpu_need_help/
|
StarWingOwl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw5m8k
| false | null |
t3_1jw5m8k
|
/r/LocalLLaMA/comments/1jw5m8k/ollama_not_using_gpu_need_help/
| false | false |
self
| 2 | null |
DeepSeek-V3 685B tuning time per 1k rows.
| 1 |
[removed]
| 2025-04-10T19:04:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw5wh5/deepseekv3_685b_tuning_time_per_1k_rows/
|
GenLabsAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw5wh5
| false | null |
t3_1jw5wh5
|
/r/LocalLLaMA/comments/1jw5wh5/deepseekv3_685b_tuning_time_per_1k_rows/
| false | false |
self
| 1 | null |
Openai New Memory feature is just Vector Search?
| 100 |
I don't get what's the big deal about this?
they are simply creating the embeddings for past chats and doing a vector search and adding chunks to context for every prompt right?
I've (we've) made this stuff 3 years ago, I don't get it, what am I missing?
| 2025-04-10T19:23:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw6cdk/openai_new_memory_feature_is_just_vector_search/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw6cdk
| false | null |
t3_1jw6cdk
|
/r/LocalLLaMA/comments/1jw6cdk/openai_new_memory_feature_is_just_vector_search/
| false | false |
self
| 100 | null |
Authentic Generalization Reasoning Tests
| 0 |
[https://llm-benchmark.github.io/](https://llm-benchmark.github.io/)
**Unlike common benchmarks, which focus on resistance to memory and fitting, Simplicity Unveils Truth: The Authentic Test of Generalization**
Something amazing that when testing, I found that there were 3 questions that O1 PRO and O3 MH always got wrong before, but when I created this page, O3 MINI HIGH answered them all correctly. I was shocked! I did discuss those 3 questions, but I only listed the wrong answers of the model, and there was no correct answer (maybe there was 1?). I don't know if this is a coincidence. Anyway, I don't believe it ,so I replaced those 3 questions with similar ones. Now grok performs well?
All listed models have been tested, but the response may not be complete
https://preview.redd.it/7g92qlsj82ue1.png?width=1024&format=png&auto=webp&s=919865f4276230b153e176321511b46e4c09e03a
https://preview.redd.it/w0gn0fno82ue1.png?width=1536&format=png&auto=webp&s=d6830f2dfed7c09b11551eaf0a784c35fc8c6bb2
| 2025-04-10T19:38:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw6pm1/authentic_generalization_reasoning_tests/
|
flysnowbigbig
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw6pm1
| false | null |
t3_1jw6pm1
|
/r/LocalLLaMA/comments/1jw6pm1/authentic_generalization_reasoning_tests/
| false | false | 0 | null |
|
Today, what are the go to front-ends for training LoRAs and fine-tuning?
| 12 |
Hi, been out of the game for a while so I'm hoping someone could direct me to whatever front ends are most popular these days that offer LoRA training and ideally fine-tuning. I still have oobabooga's text-gen-webui installed if that is still popular.
Thanks in advance
| 2025-04-10T20:03:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw7b63/today_what_are_the_go_to_frontends_for_training/
|
X3liteninjaX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw7b63
| false | null |
t3_1jw7b63
|
/r/LocalLLaMA/comments/1jw7b63/today_what_are_the_go_to_frontends_for_training/
| false | false |
self
| 12 | null |
Seeking Advice fintuning
| 9 |
Hello, i am still new to fine tuning trying to learn by doing projects.
Currently im trying to fine tune a model with unsloth, i found a dataset in hugging face and have done the first project, the results were fine (based on training and evaluation loss).
So in my second project i decided to prepare my own data, i have pdf files with plain text and im trying to transform them into a question answer format as i read somewhere that this format is necessary to fine tune models. I find this a bit odd as acquiring such format could be nearly impossible.
So i came up with two approaches, i extracted the text from the files into small chnuks. First one is to use some nlp technics and pre trained model to generate questions or queries based on those chnuks results were terrible maybe im doing something wrong but idk. Second one was to only use one feature which is the chunks only 215 row . Dataset shape is (215, 1) I trained it on 2000steps and notice an
overfitting by measuring the loss of both training and testing test loss was 3 point something and traing loss was 0.00…somthing.
My questions are:
- How do you prepare your data if you have pdf files with plain text my case (datset about law)
- what are other evaluation metrics you do
- how do you know if your model ready for real world deployment
| 2025-04-10T20:10:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw7hs8/seeking_advice_fintuning/
|
Gold-Artichoke-9288
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw7hs8
| false | null |
t3_1jw7hs8
|
/r/LocalLLaMA/comments/1jw7hs8/seeking_advice_fintuning/
| false | false |
self
| 9 | null |
Curious how your LLM performs in real-world conversations?
| 1 |
[removed]
| 2025-04-10T20:18:48 |
https://huggingface.co/spaces/elbasri/llm-eval-lab
|
No-Syllabub-2
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw7oqg
| false | null |
t3_1jw7oqg
|
/r/LocalLLaMA/comments/1jw7oqg/curious_how_your_llm_performs_in_realworld/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '6Ytbs0EGTH44N6zzwP-eTqXU7t-rIaewbWDB6k68fGE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=108&crop=smart&auto=webp&s=a59d6b9fd2f979b785ca90f14af7296a109f0a3c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=216&crop=smart&auto=webp&s=9513d5ce653cc4e85b47573f0837ef5689bf71dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=320&crop=smart&auto=webp&s=38c49290031b0bd506ced00c89f044571f77410b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=640&crop=smart&auto=webp&s=230e9cc07e574be549152f7da941a3694c4bfcdb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=960&crop=smart&auto=webp&s=9389a8a7798807d843f77b9652c3495ddee148d9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?width=1080&crop=smart&auto=webp&s=9c7be333d09077608f3bfe57aa1dcb05ef7a1246', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vyGmAiz2oecSDYRge-Lh3cbza3CeGa07s2WfO6L5ycA.jpg?auto=webp&s=087949388f543fca13766e2e10605b6de9f060a1', 'width': 1200}, 'variants': {}}]}
|
|
Llama writes Kendrick Lamar-style diss track about Kendrick Lamar | Llama-4-Scout
| 1 |
[removed]
| 2025-04-10T20:19:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw7ppy/llama_writes_kendrick_lamarstyle_diss_track_about/
|
Worldly_Evidence9113
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw7ppy
| false | null |
t3_1jw7ppy
|
/r/LocalLLaMA/comments/1jw7ppy/llama_writes_kendrick_lamarstyle_diss_track_about/
| false | false |
self
| 1 | null |
Looking for a Windows app to run Vision Enabled LLM
| 4 |
Trying to run Mistral Small 3.1 24B LLM with LM Studio. The model I have is Vision enabled, but it looks like LM Studio supports Images.
Any suggestions on what to use?
| 2025-04-10T20:35:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw83m0/looking_for_a_windows_app_to_run_vision_enabled/
|
ebonydad
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw83m0
| false | null |
t3_1jw83m0
|
/r/LocalLLaMA/comments/1jw83m0/looking_for_a_windows_app_to_run_vision_enabled/
| false | false |
self
| 4 | null |
Orpheus TTS released multilingual support
| 82 |
I couldn’t find a thread on this here so far.
CanopyAI released new models for their Orpheus TTS model for different languages.
More info here:
https://github.com/canopyai/Orpheus-TTS
And here:
https://huggingface.co/collections/canopylabs/orpheus-multilingual-research-release-67f5894cd16794db163786ba
They also released a training guide, and there are already some finetunes floating around on HF and the first gguf versions.
| 2025-04-10T21:16:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw91nh/orpheus_tts_released_multilingual_support/
|
YearnMar10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw91nh
| false | null |
t3_1jw91nh
|
/r/LocalLLaMA/comments/1jw91nh/orpheus_tts_released_multilingual_support/
| false | false |
self
| 82 |
{'enabled': False, 'images': [{'id': 'FnyJLUPm5UM2EAf3PGN0BHrCt6tC5r4qAVp3Y093EXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=108&crop=smart&auto=webp&s=d59189d4c1789bcd2327a1687ea6ca2460ed7de1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=216&crop=smart&auto=webp&s=66a6234e801c8a0359b728f5ca455772bac6c2b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=320&crop=smart&auto=webp&s=58a7270c846c24b32d95452a19636c03a78798d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=640&crop=smart&auto=webp&s=4f976a069ed7d75b1edbe4ec2bed7f69920b105c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=960&crop=smart&auto=webp&s=595c8c140eef1a536bfc142445f28c9c486a989b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?width=1080&crop=smart&auto=webp&s=82931e70f2e9898d33577b9efa66a4c5f5894738', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HcHdNPqStSvGyYQX-1mfakgrN030ABfIjqCjzDOmdS4.jpg?auto=webp&s=54e162f9538669203b45f1f1754c0bb264cdfc5d', 'width': 1200}, 'variants': {}}]}
|
Experience "YourStory" in Action: Watch the Demo Video & Join the Adventure!
| 0 |
Hey Reddit,
I wanted to share an exciting update on **YourStory**, my interactive text-based RPG where viewers shape the adventure. Check out this YouTube demo to see how AI-driven narration, visuals and audio come together for a unique storytelling experience: [YouTube Demo](https://www.youtube.com/watch?v=bjOxTWpKHWs).
For more details on what makes **YourStory** special, how it works, and the plans for future features, check out my earlier post here: [Original Post](https://www.reddit.com/r/LocalLLaMA/comments/1j5p7mw/help_test_yourstory_a_new_interactive_rpg_on/)
I'm currently in the testing phase and would love your feedback to improve the system. Join me on Twitch at [TheStarAI](https://www.twitch.tv/thestarai)to be part of this innovative storytelling experience. _probably offline right now_
Looking forward to your thoughts and participation!
| 2025-04-10T21:19:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw94kp/experience_yourstory_in_action_watch_the_demo/
|
Affectionate-Leg8133
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw94kp
| false | null |
t3_1jw94kp
|
/r/LocalLLaMA/comments/1jw94kp/experience_yourstory_in_action_watch_the_demo/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '5Q6hjtFlFWqVP9XI1rQVDGo5J6bxXrEFN21GIHRBOxI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lLHyivKy1pSQgxpNbrg65vFqp08gj-hP7aEXC7eaGnU.jpg?width=108&crop=smart&auto=webp&s=fdf8d2f63858d93af6e0f30ee55bb591d29ead89', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/lLHyivKy1pSQgxpNbrg65vFqp08gj-hP7aEXC7eaGnU.jpg?width=216&crop=smart&auto=webp&s=88e4985134e2f579abd569cf88e283658e1eae0a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/lLHyivKy1pSQgxpNbrg65vFqp08gj-hP7aEXC7eaGnU.jpg?width=320&crop=smart&auto=webp&s=f87db1d3324ff8719a0e5909354846ad153a8a3c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/lLHyivKy1pSQgxpNbrg65vFqp08gj-hP7aEXC7eaGnU.jpg?auto=webp&s=749b8f27c66c7a1c8129039efa33d33a86e1e47f', 'width': 480}, 'variants': {}}]}
|
Fiction.liveBench: new Grok 3 scores are solid, llama 4 scores improved after vllm fixes
| 57 | 2025-04-10T21:29:24 |
fictionlive
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw9cn3
| false | null |
t3_1jw9cn3
|
/r/LocalLLaMA/comments/1jw9cn3/fictionlivebench_new_grok_3_scores_are_solid/
| false | false | 57 |
{'enabled': True, 'images': [{'id': 'quwhKCMGdTiPK04djOHqgw5V3D6mjZpHvc0hgS105vA', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=108&crop=smart&auto=webp&s=f8ddd77e1b49b4e3f546ad6a2853bc75fbb3a8b6', 'width': 108}, {'height': 245, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=216&crop=smart&auto=webp&s=1a44b80c272e7e64f6568f3ece81ba0a964603c6', 'width': 216}, {'height': 363, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=320&crop=smart&auto=webp&s=20146fedfc73adca709ce6370cdf2823aa19d742', 'width': 320}, {'height': 727, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=640&crop=smart&auto=webp&s=660231214b036de1e98f805d80a58cd827280c9f', 'width': 640}, {'height': 1090, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=960&crop=smart&auto=webp&s=9b75caf8100301ddadeb3e2b8a9b936d6db12f22', 'width': 960}, {'height': 1227, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?width=1080&crop=smart&auto=webp&s=3ac79a972d7d8b5789f9e17e8257080f0bc089fe', 'width': 1080}], 'source': {'height': 2234, 'url': 'https://preview.redd.it/6tbbef5js2ue1.png?auto=webp&s=af381a980b7660f537e25a65915db66b8c8125a0', 'width': 1966}, 'variants': {}}]}
|
|||
Macbook Pro M4 Max inference speeds
| 211 |
I had trouble finding this kind of information when I was deciding on what Macbook to buy so putting this out there to help future purchase decisions:
Macbook Pro 16" M4 Max 36gb 14‑core CPU, 32‑core GPU, 16‑core Neural
During inference, cpu/gpu temps get up to 103C and power draw is about 130W.
36gb ram allows me to comfortably load these models and still use my computer as usual (browsers, etc) without having to close every window. However, I do no need to close programs like Lightroom and Photoshop to make room.
Finally, the nano texture glass is worth it...
| 2025-04-10T21:32:31 |
SufficientRadio
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw9fba
| false | null |
t3_1jw9fba
|
/r/LocalLLaMA/comments/1jw9fba/macbook_pro_m4_max_inference_speeds/
| false | false | 211 |
{'enabled': True, 'images': [{'id': '731suHEEqnfT5yuuEzZcQSZdPN8A0-BdOq9FB4wubwA', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=108&crop=smart&auto=webp&s=8b6818e80556b32ea5f9324bcd05308955240705', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=216&crop=smart&auto=webp&s=3b2b89971515ba28002a2516dca29badd897ffb0', 'width': 216}, {'height': 254, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=320&crop=smart&auto=webp&s=23ca6733d109d411ba15805b868f11dbe11944e5', 'width': 320}, {'height': 508, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=640&crop=smart&auto=webp&s=fa3ff737f60bcdc49e3d199033b76539d0bc294d', 'width': 640}, {'height': 762, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=960&crop=smart&auto=webp&s=bc53a8be4c1757146648624ee3585240030fc150', 'width': 960}, {'height': 858, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?width=1080&crop=smart&auto=webp&s=8af5fcd722fc40697bfc33412aa54ff1c5e63e39', 'width': 1080}], 'source': {'height': 1214, 'url': 'https://preview.redd.it/bms6abl5s2ue1.png?auto=webp&s=c3ba39a8993df0c241400ed23666cd2f9eebe98c', 'width': 1528}, 'variants': {}}]}
|
||
If you’re working with open-source AI, I’d love your opinion
| 4 |
I’m a student writing a thesis on open vs. closed AI development, specifically focusing on why the U.S. should be incentivizing open-source AI. I’m hoping to include insight from people actively involved in the open-source community or reliant on it for their business.
If you’re up for it, I’d be really grateful if you could briefly email me your answer to one or both of these questions:
1. In your view, what is the most compelling reason to support open-source AI development?
2. Some argue that open-source AI could pose national security risks. Do you believe the benefits of openness outweigh those risks?
Feel free to shoot me a quick reply at [email protected] sometime today or tomorrow morning. It doesn’t need to be long, just a few sentences would really help. Thank you so much!
| 2025-04-10T21:38:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jw9ki6/if_youre_working_with_opensource_ai_id_love_your/
|
EthanLikesAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw9ki6
| false | null |
t3_1jw9ki6
|
/r/LocalLLaMA/comments/1jw9ki6/if_youre_working_with_opensource_ai_id_love_your/
| false | false |
self
| 4 | null |
Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
| 415 | 2025-04-10T21:51:41 |
https://www.404media.co/facebook-pushes-its-llama-4-ai-model-to-the-right-wants-to-present-both-sides/
|
WanderingStranger0
|
404media.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw9upz
| false | null |
t3_1jw9upz
|
/r/LocalLLaMA/comments/1jw9upz/facebook_pushes_its_llama_4_ai_model_to_the_right/
| false | false | 415 |
{'enabled': False, 'images': [{'id': 'o1KNduH8DGXvl7PGMSWYLz31GaQcVaNRS8T7SBLECpA', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/03uwcLJN0ZpAJgNqUGMgr-4V8aM1Tl_IIQXCSo0btJA.jpg?width=108&crop=smart&auto=webp&s=0e8511f66eda7bc867f9ce7b6fa615f364f7d684', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/03uwcLJN0ZpAJgNqUGMgr-4V8aM1Tl_IIQXCSo0btJA.jpg?width=216&crop=smart&auto=webp&s=d1d8cfb0bb1314ee415df48379b94880b596f0bd', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/03uwcLJN0ZpAJgNqUGMgr-4V8aM1Tl_IIQXCSo0btJA.jpg?width=320&crop=smart&auto=webp&s=7429c00247568db3b0b3bb198a0800895cca70a6', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/03uwcLJN0ZpAJgNqUGMgr-4V8aM1Tl_IIQXCSo0btJA.jpg?width=640&crop=smart&auto=webp&s=f1a58da6c992c3251df315afbe32e193f0741c63', 'width': 640}], 'source': {'height': 594, 'url': 'https://external-preview.redd.it/03uwcLJN0ZpAJgNqUGMgr-4V8aM1Tl_IIQXCSo0btJA.jpg?auto=webp&s=a54a53c9869dbdcd37286ee83592b81065534bb4', 'width': 916}, 'variants': {}}]}
|
||
GPU Poor models on my own Brazilian legal benchmark
| 1 |
[removed]
| 2025-04-10T21:57:57 |
celsowm
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jw9zp5
| false | null |
t3_1jw9zp5
|
/r/LocalLLaMA/comments/1jw9zp5/gpu_poor_models_on_my_own_brazilian_legal/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'K1J7gHMgeQ-KtvHcTANwS9EV9xAmbuQmW7XVTIS_fMw', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=108&crop=smart&auto=webp&s=db44bcf29adb0edf177fb882fe1a593d50f09d9b', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=216&crop=smart&auto=webp&s=169cbf6b0965daab3bf2c0a043ed98b995134788', 'width': 216}, {'height': 161, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=320&crop=smart&auto=webp&s=6196cd6b1ec89bbe3384123483a2278b27ac0e2b', 'width': 320}, {'height': 322, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=640&crop=smart&auto=webp&s=9bc85c90fe653b285798c01ffdda36d4a1a0951e', 'width': 640}, {'height': 483, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=960&crop=smart&auto=webp&s=ccc75bd619a433cc101703451e708ae13cb942b9', 'width': 960}, {'height': 543, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?width=1080&crop=smart&auto=webp&s=84d565265a61110d2dd2a5733faddb76c2df520a', 'width': 1080}], 'source': {'height': 967, 'url': 'https://preview.redd.it/rsawnukgx2ue1.png?auto=webp&s=870011c2586f5a88aa2f65d3ffe0996386aac465', 'width': 1920}, 'variants': {}}]}
|
||
Optimus Alpha — Better than Quasar Alpha and so FAST
| 0 | 2025-04-10T22:42:54 |
https://v.redd.it/il0iy5ej53ue1
|
sirjoaco
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwaztk
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/il0iy5ej53ue1/DASHPlaylist.mpd?a=1746916996%2CZjA3NWQ5MDc5YWFlY2EwMGYzZjU0ODI5NGRiY2YyNmRkYmZiNzM1ZjFjM2ZjNDM2MTExZDE5OWE0MGEwYzhjZQ%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/il0iy5ej53ue1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/il0iy5ej53ue1/HLSPlaylist.m3u8?a=1746916996%2CNjY1NjI0YmFjNjE0NDAzZDAzMjM1MzcyMzQ5ZTM3Njc1ZjkyMjI2MzU4OTNlOGQwZGI3Y2I2ZTc0MWVlM2JhMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/il0iy5ej53ue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1730}}
|
t3_1jwaztk
|
/r/LocalLLaMA/comments/1jwaztk/optimus_alpha_better_than_quasar_alpha_and_so_fast/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=108&crop=smart&format=pjpg&auto=webp&s=de3e555f0b97ee7663b466c510a1f3669f524ba9', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=216&crop=smart&format=pjpg&auto=webp&s=91e2d1e4bfbc19a9ec6bc26edafa721851c7a115', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=320&crop=smart&format=pjpg&auto=webp&s=7539b8ec74e7811d47dca8464f5066bea58c34cd', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=640&crop=smart&format=pjpg&auto=webp&s=94d4a38a3a86ea3b349321293ec61a8e00b56fa4', 'width': 640}, {'height': 599, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=960&crop=smart&format=pjpg&auto=webp&s=bb3df8da7d72bf6263809ac80b20a23052c83237', 'width': 960}, {'height': 674, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f3420939596c347baca09ec2c3fe43b434c3d07f', 'width': 1080}], 'source': {'height': 1888, 'url': 'https://external-preview.redd.it/eGhnZ281ZWo1M3VlMer07yEnZyzUzOGKLchEm7Ou-ZqgsURV4YpgaenXqmh6.png?format=pjpg&auto=webp&s=85ec0cfaa26867bed836b2aa4fc8fcd24ba59806', 'width': 3024}, 'variants': {}}]}
|
||
Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)
| 1 |
[removed]
| 2025-04-10T23:08:39 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwbjjs
| false |
{'oembed': {'author_name': 'Slinging Bits', 'author_url': 'https://www.youtube.com/@SlingingBits', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/aiISDmnODzo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/aiISDmnODzo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jwbjjs
|
/r/LocalLLaMA/comments/1jwbjjs/llama4maverick17b128einstruct_benchmark_mac/
| false | false |
default
| 1 | null |
||
Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)
| 1 |
[removed]
| 2025-04-10T23:09:32 |
https://youtube.com/watch?v=aiISDmnODzo&si=VfvcRewzHlKKOevW
|
SlingingBits
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwbk9h
| false |
{'oembed': {'author_name': 'Slinging Bits', 'author_url': 'https://www.youtube.com/@SlingingBits', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/aiISDmnODzo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/aiISDmnODzo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jwbk9h
|
/r/LocalLLaMA/comments/1jwbk9h/llama4maverick17b128einstruct_benchmark_mac/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'AF04ur-kDjD5xuP-R3K_853qpif9tm1seO6S1icARP8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=108&crop=smart&auto=webp&s=6df91bd22dd2fcec3b9ca7bfc31e962f7be41159', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=216&crop=smart&auto=webp&s=b4d59d45f34ce13c50114af14bde33e07e93b49e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=320&crop=smart&auto=webp&s=e37d54358c31abda84735b8860c3c3360b3c3003', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?auto=webp&s=3d8919ce998a2bcc2af54713551424e24351bc1c', 'width': 480}, 'variants': {}}]}
|
|
Mistral hasn't released a big model in ages.
| 173 |
How about a new version of MoE that can put the LLama4 to shame? Hopefully something with less than 120B params total.
Or a new version of Mistral large. Or a Mistral Medium (30-40B range)
| 2025-04-10T23:46:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwcbfm/mistral_hasnt_released_a_big_model_in_ages/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwcbfm
| false | null |
t3_1jwcbfm
|
/r/LocalLLaMA/comments/1jwcbfm/mistral_hasnt_released_a_big_model_in_ages/
| false | false |
self
| 173 | null |
Can the AnythingLLM Developer API (Open AI compatible) use @agent?
| 1 |
I’m adding support for AnythingLLM to my iOS LLM chat client, [3sparks Chat](https://www.3sparks.net/). It works, but I can’t trigger agents from the API. AnythingLLM uses scraped documents and websites when chatting, but I can’t use web search or web scraping over the API. Can I send \`@agent\` requests via the OpenAI compatible API?
| 2025-04-10T23:50:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwcdzf/can_the_anythingllm_developer_api_open_ai/
|
s3bastienb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwcdzf
| false | null |
t3_1jwcdzf
|
/r/LocalLLaMA/comments/1jwcdzf/can_the_anythingllm_developer_api_open_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'JBV9pWSgil9xw6Jr2GRPgVrFsgBS45fNlCweI2Pv35w', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?width=108&crop=smart&auto=webp&s=0e8f9cff4ee5864a11a9713019b1f06ce8a864cc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?width=216&crop=smart&auto=webp&s=b20de2c2f3835f3734b3800616c6b28081b0dd72', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?width=320&crop=smart&auto=webp&s=b106328d7ac24b64d042f019c225a0b6f418d2db', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?width=640&crop=smart&auto=webp&s=3d0d9d20f619a4776317910d91936e28d19cde06', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?width=960&crop=smart&auto=webp&s=87a9e3be7908d8def8f0969e3088b76b1a649c86', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/sLTGe9vjxN9UsRSszZL0RjFLpo-S2GvivPT9fcXCBek.jpg?auto=webp&s=254c9f883ea890075b5233ea8dc7f92c38e5dfd1', 'width': 1024}, 'variants': {}}]}
|
Help with my first local AI in LM Studio
| 1 |
[removed]
| 2025-04-11T00:56:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwdob9/help_with_my_first_local_ai_in_lm_studio/
|
luspace123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwdob9
| false | null |
t3_1jwdob9
|
/r/LocalLLaMA/comments/1jwdob9/help_with_my_first_local_ai_in_lm_studio/
| false | false |
self
| 1 | null |
Open source, when?
| 594 | 2025-04-11T01:24:41 |
Specter_Origin
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwe7pb
| false | null |
t3_1jwe7pb
|
/r/LocalLLaMA/comments/1jwe7pb/open_source_when/
| false | false | 594 |
{'enabled': True, 'images': [{'id': 'eC-sW0haenYrDJb5BLBh_cTaPxr5CbeNJ2Cpht03hU0', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=108&crop=smart&auto=webp&s=750abe913485093d2588ed112dce7e80de16f2db', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=216&crop=smart&auto=webp&s=05cd262bae87db293391a9a9d646abb5a6a98de9', 'width': 216}, {'height': 359, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=320&crop=smart&auto=webp&s=2ec4ba6a3e6c828bafc019c9ab711bb4d0247ca3', 'width': 320}, {'height': 718, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=640&crop=smart&auto=webp&s=b9fad36429a8d9f30a62e3e07da681ffb9be6ef5', 'width': 640}, {'height': 1078, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=960&crop=smart&auto=webp&s=020dd6fcb3b8ef9fe8b58d5efd2935dbb678735e', 'width': 960}, {'height': 1213, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?width=1080&crop=smart&auto=webp&s=52dfc7f947fb00e19369c0a3bb0d394714615cc9', 'width': 1080}], 'source': {'height': 1330, 'url': 'https://preview.redd.it/qg5a1njiy3ue1.png?auto=webp&s=772d073bc340471449a888da200903661a772298', 'width': 1184}, 'variants': {}}]}
|
|||
I tell you why people are using OpenRouter
| 0 |
Why:
\- Openrouter is well integrated into many chat clients, tools, and probably tested
\- Can use many models at the same time
\- Have a layer on top of the api provider to fix issues
If this is Deepinfra, but coming from Openrouter it does not have the same bug as I try using fetch MCP in the image.
At some point, I just gave up and just use openrouter because it's better integrated compared to individual provider.
| 2025-04-11T02:07:47 |
Kooky-Somewhere-2883
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwf101
| false | null |
t3_1jwf101
|
/r/LocalLLaMA/comments/1jwf101/i_tell_you_why_people_are_using_openrouter/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'aGax2DZ4dXYmDO9tNq5Og_va6p_wNErS2Q3hPwU64DE', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=108&crop=smart&auto=webp&s=fa611aa9118f6ea87a4dbdc2707aee23f664d65e', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=216&crop=smart&auto=webp&s=038b5ddc43056fa933c5dc1a0095fbc29e89f723', 'width': 216}, {'height': 345, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=320&crop=smart&auto=webp&s=8b46b2270b9d15fa7113e5a2a46dd7a4b8aedbcd', 'width': 320}, {'height': 690, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=640&crop=smart&auto=webp&s=832ae27b8549fcf2ff1406f757f9b024681824c0', 'width': 640}, {'height': 1035, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=960&crop=smart&auto=webp&s=cbaca35f7a717f455fb9807146cd29eeec1a2389', 'width': 960}, {'height': 1164, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?width=1080&crop=smart&auto=webp&s=9fbb0919be8bf65b0f44966334e1cf35f430812d', 'width': 1080}], 'source': {'height': 2060, 'url': 'https://preview.redd.it/65w8v0dn54ue1.png?auto=webp&s=d6e8b5c174702d64c38fadcdea4f156da460d80b', 'width': 1910}, 'variants': {}}]}
|
||
I don't want thinking models
| 0 |
They don't use tools
| 2025-04-11T02:15:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwf69x/i_dont_want_thinking_models/
|
Osama_Saba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwf69x
| false | null |
t3_1jwf69x
|
/r/LocalLLaMA/comments/1jwf69x/i_dont_want_thinking_models/
| false | false |
self
| 0 | null |
Have anyone tried running Grok on Mac Studio?
| 0 |
Please share your experience. Thank you!
| 2025-04-11T02:21:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwfahl/have_anyone_tried_running_grok_on_mac_studio/
|
Southern_Sun_2106
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfahl
| false | null |
t3_1jwfahl
|
/r/LocalLLaMA/comments/1jwfahl/have_anyone_tried_running_grok_on_mac_studio/
| false | false |
self
| 0 | null |
Manga Image Translator Site is not working
| 1 |
[removed]
| 2025-04-11T02:27:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwfe73/manga_image_translator_site_is_not_working/
|
Defiant-Ordinary-260
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfe73
| false | null |
t3_1jwfe73
|
/r/LocalLLaMA/comments/1jwfe73/manga_image_translator_site_is_not_working/
| false | false |
self
| 1 | null |
How to use a markdown filebase to add to llms training/memory?
| 2 |
Hey LocalLLaMA! I started playing around with some llms at work and had some curiosity into how I could localhost a model that "knows" everything in my obsidian vault.
I'd like to know if it's possible and where I'd find a good resource to start figuring out how to make it happen, even how to find good models to start with.
Anyone have suggestions or recommendations?
| 2025-04-11T02:32:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwfhp2/how_to_use_a_markdown_filebase_to_add_to_llms/
|
educational_escapism
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfhp2
| false | null |
t3_1jwfhp2
|
/r/LocalLLaMA/comments/1jwfhp2/how_to_use_a_markdown_filebase_to_add_to_llms/
| false | false |
self
| 2 | null |
Is there a way to fine tune Kokoro?
| 5 |
I would like to use it emotion-controlled, so I would like to fine tune it. (If you know of any other small, effective models that already have emotion management, let me know please (e.g. [angry]"Baka shut up")
| 2025-04-11T02:43:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwfoy8/is_there_a_way_to_fine_tune_kokoro/
|
Gorgooo_61
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfoy8
| false | null |
t3_1jwfoy8
|
/r/LocalLLaMA/comments/1jwfoy8/is_there_a_way_to_fine_tune_kokoro/
| false | false |
self
| 5 | null |
Anybody else training offline agents with an offline LLM?
| 0 |
Emotional logging needs work. Broke the stack trying to debug that gonna circle back. Having trouble with imports. If anybody has any advice definitely would appreciate it. Took me forever just to get her to properly log data in the vector db. This is day 5 I think of the project.
| 2025-04-11T02:44:40 |
XDAWONDER
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfpqv
| false | null |
t3_1jwfpqv
|
/r/LocalLLaMA/comments/1jwfpqv/anybody_else_training_offline_agents_with_an/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '0Z4WxeksNAkl2pUOHChA_dbfWOsgTcujxtLLFA34Dwk', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=108&crop=smart&auto=webp&s=22fc1fe2d01355ce4c0f17cc5e12e25cc5c189cf', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=216&crop=smart&auto=webp&s=966e9361b2004c59e8b421dae0a7f3c7ef909ddf', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=320&crop=smart&auto=webp&s=15c4f496a017493ec2ecb5a64551c33fa5c135c2', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=640&crop=smart&auto=webp&s=32da4dd2f0cc63bfc5637bfdb1740eb734e7e750', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=960&crop=smart&auto=webp&s=9ee608267a165e1f02cbf37631123a1afcdb2e97', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?width=1080&crop=smart&auto=webp&s=247b95f1766264b66c72f0b17be25b90e775725d', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/m0m57h9uc4ue1.jpeg?auto=webp&s=2958100d6b1349b5f192aeb21ef3d6c2de9ea05b', 'width': 4032}, 'variants': {}}]}
|
||
Always Be Evaluating
| 2 |
Oh, have I got your attention now?
Good.
It's never been less apparent which model is best for your next experiment.
Benchmarks are bunk, the judges... a joke.
Raters are NOT users.
The only eval that matters: impact on users and business.
In this substack post: [https://remyxai.substack.com/p/always-be-evaluating](https://remyxai.substack.com/p/always-be-evaluating)
We discuss a robust offline evaluation workflow which sets you up for continuous improvements to your AI application.
https://preview.redd.it/0av6lltfb4ue1.png?width=1024&format=png&auto=webp&s=87e180fab5b388cecb840142c23f7bb6d4a6573c
| 2025-04-11T02:48:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwfsdn/always_be_evaluating/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwfsdn
| false | null |
t3_1jwfsdn
|
/r/LocalLLaMA/comments/1jwfsdn/always_be_evaluating/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'bRHxVoGVtfwB_CmbHKqX292sO15SeIzbL90YxZ5ufNY', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?width=108&crop=smart&auto=webp&s=e44fb5f6e41b69d720ecacb46562a7999e954036', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?width=216&crop=smart&auto=webp&s=abfa710c5ebbea1ff82d2779add6c8edc18f2dc8', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?width=320&crop=smart&auto=webp&s=e5161e62f03dbe1ee70bf2eda2b6ee64b6cc7a15', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?width=640&crop=smart&auto=webp&s=f476c525561568147b55f6f08e238355ae4402e9', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?width=960&crop=smart&auto=webp&s=bff7707f5a72f5fcad16b7e561f3867199fead15', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BAbgQc9d8Vl7-NtLgLMhdfLIvXJTCjpk4Y0S8PYi38g.jpg?auto=webp&s=0dcff442a972232a3c8c78bcfd4b888bd48cbd38', 'width': 1024}, 'variants': {}}]}
|
|
Kimi-VL Technical Report
| 9 |
Abstract
>We present Kimi-VL, an efficient open-source Mixture-of-Experts (MoE) vision-language model (VLM) that offers advanced multimodal reasoning, long-context understanding, and strong agent capabilities - all while activating only 2.8B parameters in its language decoder (Kimi-VL-A3B). Kimi-VL demonstrates strong performance across challenging domains: as a general-purpose VLM, Kimi-VL excels in multi-turn agent tasks (e.g., OSWorld), matching flagship models. Furthermore, it exhibits remarkable capabilities across diverse challenging vision language tasks, including college-level image and video comprehension, OCR, mathematical reasoning, and multi-image understanding. In comparative evaluations, it effectively competes with cutting-edge efficient VLMs such as GPT-4o-mini, Qwen2.5-VL-7B, and Gemma-3-12B-IT, while surpassing GPT-4o in several key domains. Kimi-VL also advances in processing long contexts and perceiving clearly. With a 128K extended context window, Kimi-VL can process diverse long inputs, achieving impressive scores of 64.5 on LongVideoBench and 35.1 on MMLongBench-Doc. Its native-resolution vision encoder, MoonViT, further allows it to see and understand ultra-high-resolution visual inputs, achieving 83.2 on InfoVQA and 34.5 on ScreenSpot-Pro, while maintaining lower computational cost for common tasks. Building upon Kimi-VL, we introduce an advanced long-thinking variant: Kimi-VL-Thinking. Developed through long chain-of-thought (CoT) supervised fine-tuning (SFT) and reinforcement learning (RL), this model exhibits strong long-horizon reasoning capabilities. It achieves scores of 61.7 on MMMU, 36.8 on MathVision, and 71.3 on MathVista while maintaining the compact 2.8B activated LLM parameters, setting a new standard for efficient multimodal thinking models. Code and models are publicly accessible at [https://github.com/MoonshotAI/Kimi-VL](https://github.com/MoonshotAI/Kimi-VL).
.
| 2025-04-11T03:18:51 |
https://arxiv.org/abs/2504.07491
|
ninjasaid13
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwgbxe
| false | null |
t3_1jwgbxe
|
/r/LocalLLaMA/comments/1jwgbxe/kimivl_technical_report/
| false | false |
default
| 9 | null |
Is there any windows app made to get the most out of LLm by making it be at the top layer of any other app, in other words modify windows and replace the buultin AI of windows while offering even more features?
| 0 |
Yes local llm is good but so far it seem its usage has become linited to running it within a closes environment. Is any app made to help you bring the power of your ai out of its client app?
Lets say, your ai is running at the rop of every other app in windows (or linux) and if any writing app is open and you are typing there, your AI automatically offers grammar correction or paraphrase. This is just one simple example. But i mean we are not suing the whole power of llm now. There is much more potential.
| 2025-04-11T03:33:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwgle9/is_there_any_windows_app_made_to_get_the_most_out/
|
ExtremePresence3030
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwgle9
| false | null |
t3_1jwgle9
|
/r/LocalLLaMA/comments/1jwgle9/is_there_any_windows_app_made_to_get_the_most_out/
| false | false |
self
| 0 | null |
is nope_layer_interval missing from config?
| 1 |
I've been familiarizing myself with llama 4 architecture bit by bit and noticed I can't find `nope_layer_interval` being set anywhere, which would mean it defaults to disabled, I think? Can't find any value when [searching the github repo](https://github.com/search?q=repo%3Ameta-llama%2Fllama-models%20nope_layer_interval&type=code) or in the [config.json](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct/blob/main/config.json) I've checked yet. Am I missing it somewhere? Is NoPE unused or is this indicating a config oversight?
| 2025-04-11T03:52:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwgxb1/is_nope_layer_interval_missing_from_config/
|
phree_radical
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwgxb1
| false | null |
t3_1jwgxb1
|
/r/LocalLLaMA/comments/1jwgxb1/is_nope_layer_interval_missing_from_config/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'LtKtt6txZ-QrVhG54gL73uTj3IfQTG0w8wIIdqF-0s0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=108&crop=smart&auto=webp&s=f1a83fdddaddde93eaef101228f9b29048246dc3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=216&crop=smart&auto=webp&s=d4835cc298fc9315c676ba0083c28e3ecc4b3ea3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=320&crop=smart&auto=webp&s=e378760f26672dec8cfb3fdd0b39150134ad2f06', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=640&crop=smart&auto=webp&s=d7b0f3059faf83cc71a5c5ea692748e0630dc6bd', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=960&crop=smart&auto=webp&s=86b6adf32262d8f8d6f075ed19cf11eb88ff2392', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?width=1080&crop=smart&auto=webp&s=eab7e1d537052c5bc4bd3be68fcf701fffd965db', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/YMPZTYosGtHGXv1uS9w78kPJEjf83SgwfzqnRn2z1ug.jpg?auto=webp&s=42118dbbd4702474126fe588763f8adf0b8be142', 'width': 1200}, 'variants': {}}]}
|
Twitter bot
| 1 |
[removed]
| 2025-04-11T04:05:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwh5jg/twitter_bot/
|
ProfessionalLow8814
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwh5jg
| false | null |
t3_1jwh5jg
|
/r/LocalLLaMA/comments/1jwh5jg/twitter_bot/
| false | false |
self
| 1 | null |
I fine-tuned CSM to make it always speak in whisper.
| 124 |
Hello, LocalLLaMA!
Recently, I've been looking closely at the Sesame's CSM-1b model. Although there were a lot of controversies around it, I believe it's one of the strongest TTS-like models open-source has along with Orpheus, especially with context awareness!
With [an amazing PR](https://github.com/senstella/csm-mlx/pull/10) to my CSM repository, contributors and I made CSM SFT fine-tunable on Mac, and ran a short fine-tune with my MacBook Air M2! (Around 40 samples) The result is pretty good - it generates a consistent whisper voice quite nicely.
[Here's a quick sample.](https://huggingface.co/senstella/csm-expressiva-1b/resolve/main/assets/demo.wav)
[Model Page](https://huggingface.co/senstella/csm-expressiva-1b)
There's a lot of room for improvement though. First of all, it just goes through SFT-phase, not RL-phase. I plan to quickly implement KTO and giving another shot on top of this model to further improve the stability of the model.
Hope you like it!
| 2025-04-11T04:18:05 |
https://huggingface.co/senstella/csm-expressiva-1b
|
PresentationSame1738
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwhdkx
| false | null |
t3_1jwhdkx
|
/r/LocalLLaMA/comments/1jwhdkx/i_finetuned_csm_to_make_it_always_speak_in_whisper/
| false | false | 124 |
{'enabled': False, 'images': [{'id': 'KdtMYoxZdaajvt2bZJDXj6g0cdnGoaapkcmDm70CmfM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=108&crop=smart&auto=webp&s=0699f1caad9ec3ac2bbb8c9e0ce9c5d605b350fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=216&crop=smart&auto=webp&s=e4480508cbf26a621fbfb44bfb7059406d37f304', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=320&crop=smart&auto=webp&s=5f1050efaf916f1f6ee4d3fad361c004b1666e22', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=640&crop=smart&auto=webp&s=873cf686579af675426ef66ed8d2c3b12ce13cf2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=960&crop=smart&auto=webp&s=6b0b4a603257e80bac25711a4ff49687e748c5c3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?width=1080&crop=smart&auto=webp&s=c6ad4f5dd4dbdb877d81d50cd61e99e703298087', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0LClg4PDbRTHdYfFBDvXnXLwRGccbfNGftgSZaaDPzs.jpg?auto=webp&s=af68ce1c8e096ce4d0b4eeb126293ca9733832d1', 'width': 1200}, 'variants': {}}]}
|
|
DeepCoder 14B vs Qwen2.5 Coder 32B vs QwQ 32B
| 150 |
So, I ran a quick test to compare the coding ability between the 3 models that was known for good coding performance:
1. DeepCoder 14B
2. Qwen2.5 Coder 32B
3. QwQ 32B
Here's the prompt:
use HTML5 canvas, create a bouncing ball in a hexagon demo, there’s a hexagon shape, and a ball inside it, the hexagon will slowly rotate clockwise, under the physic effect, the ball will fall down and bounce when it hit the edge of the hexagon. also, add a button to reset the game as well.
All models are given just one shot to try, no follow up asking. And in the end, I also test with o3-mini to see which one has a closer result.
First, this is what o3-mini implemented:
https://reddit.com/link/1jwhp26/video/lvi4eug9o4ue1/player
This is how DeepCoder 14B do it, pretty close, but it's not working, it also implemented the Reset button wrong (click on it will make the hexagon rotate faster 😒, not reset the game).
https://reddit.com/link/1jwhp26/video/2efz73ztp4ue1/player
Qwen2.5 Coder 32B was able to implement the Reset button right, and the ball are moving, but not bouncing.
https://reddit.com/link/1jwhp26/video/jiai2kgjs4ue1/player
QwQ 32B thought for 17 minutes, and then flop 😆
https://reddit.com/link/1jwhp26/video/s0vsid57v4ue1/player
Conclusion:
Qwen2.5 Coder 32B is still a better choice for coding, and it's not prime time for a 14B model yet.
Also, I know it's a bit unfair to compare a 32B model with a 14B one, but DeepCoder ranked among o3-mini, so why not? I also tried comparing it with Qwen2.5 Coder 14B, but it generated invalid code. To be fair, Qwen didn't even focus on styling, and it's true that DeepCoder got the style closer to o3-mini, but not the functionality :D
| 2025-04-11T04:37:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwhp26/deepcoder_14b_vs_qwen25_coder_32b_vs_qwq_32b/
|
bobaburger
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwhp26
| false | null |
t3_1jwhp26
|
/r/LocalLLaMA/comments/1jwhp26/deepcoder_14b_vs_qwen25_coder_32b_vs_qwq_32b/
| false | false |
self
| 150 | null |
VRAM 16GB Enough for RooCode/VS Code?
| 3 |
TLDR: Will 16GB VRAM on 5060Ti be enough for tasks with long text/advanced coding?
I have a 13500 with GTX 1070 8GB VRAM running in a Proxmox machine.
Ive been using Qwen2.5:7b for web developement within VSCode (via Continue).
The problem I have is the low amount of info it can process. I feel like there's not enough context and its choking on data.
Example:
I gave it a big text (3 pages of word document) told it to apply h1/h2/h3/p paragraphs.
It did apply the code to text, but missed 50% of the text.
Should I drop 700 CAD on 5060Ti 16GB or wait for 5080ti 24GB?
| 2025-04-11T04:38:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwhpow/vram_16gb_enough_for_roocodevs_code/
|
grabber4321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwhpow
| false | null |
t3_1jwhpow
|
/r/LocalLLaMA/comments/1jwhpow/vram_16gb_enough_for_roocodevs_code/
| false | false |
self
| 3 | null |
Arch-Function-Chat Trending #1 on HuggingFace!
| 61 |
So thrilled to share that the work we build with the community here has such a large impact. Just wanted to say thanks. And I'll leave the links in the comments if someone wants to explore further.
| 2025-04-11T04:49:20 |
AdditionalWeb107
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwhvnv
| false | null |
t3_1jwhvnv
|
/r/LocalLLaMA/comments/1jwhvnv/archfunctionchat_trending_1_on_huggingface/
| false | false | 61 |
{'enabled': True, 'images': [{'id': 'ad1oHGnezgL3-j8ThYkhWnPQLCxUQpl7zHGAN_XPtNA', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=108&crop=smart&auto=webp&s=d1822e1a3bf24f070acfbad718bd6b70b0563f34', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=216&crop=smart&auto=webp&s=bbbb6ee48907d533f08f120dcb364b9002761d43', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=320&crop=smart&auto=webp&s=8d078534840ca0f5b460b25eae82e7917840ad63', 'width': 320}, {'height': 700, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=640&crop=smart&auto=webp&s=a57faea18fe56dbe618d675c781667e6d76fcf25', 'width': 640}, {'height': 1051, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=960&crop=smart&auto=webp&s=8a216fa99acf7b8e3fa962e6345280cb88c0ce4d', 'width': 960}, {'height': 1182, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?width=1080&crop=smart&auto=webp&s=21f3f795fc74ffd2c871167532242a6397c7e4ba', 'width': 1080}], 'source': {'height': 1316, 'url': 'https://preview.redd.it/aps11mcty4ue1.png?auto=webp&s=91875876357525f13b9cdfe98b22ec1926ea2527', 'width': 1202}, 'variants': {}}]}
|
||
Llama 4 Maverick is #32 on Chatbot Arena
| 1 |
[removed]
| 2025-04-11T04:59:20 |
https://www.reddit.com/gallery/1jwi19u
|
cameheretoposthis
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwi19u
| false | null |
t3_1jwi19u
|
/r/LocalLLaMA/comments/1jwi19u/llama_4_maverick_is_32_on_chatbot_arena/
| false | false | 1 | null |
|
[Help] In search of Multimodal AI Solution for Video Tutorial Analysis
| 1 |
[removed]
| 2025-04-11T05:30:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwiilb/help_in_search_of_multimodal_ai_solution_for/
|
rageagainistjg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwiilb
| false | null |
t3_1jwiilb
|
/r/LocalLLaMA/comments/1jwiilb/help_in_search_of_multimodal_ai_solution_for/
| false | false |
self
| 1 | null |
Olympic coder.
| 0 |
Heyyo,
I've come across this model called olympic coder. I'm currently running the 7B version on an M2 Pro.
It's apparently finetuned on IOI excercises and is incredibly verbose. It might be making shit up I haven't been able to test it but what is your take?
I have written together a 1600 token coding task for it (I have a big data algorithm optimisation problem) and it's been computing now for 6 hours :D.
The previous simplified version of the problem was spit out in 45 mins and it seemed pretty good for a 7B model.
Honestly it feels like a test time scaling model (if I understand test time scaling correctly).
Do you know of any other models that might be this incredibly verbose and computes this long on the altar of accuracy?
| 2025-04-11T05:35:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwilcx/olympic_coder/
|
randoomkiller
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwilcx
| false | null |
t3_1jwilcx
|
/r/LocalLLaMA/comments/1jwilcx/olympic_coder/
| false | false |
self
| 0 | null |
In search of multimodal AI for video analysis
| 1 |
[removed]
| 2025-04-11T05:45:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwiqjo/in_search_of_multimodal_ai_for_video_analysis/
|
rageagainistjg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwiqjo
| false | null |
t3_1jwiqjo
|
/r/LocalLLaMA/comments/1jwiqjo/in_search_of_multimodal_ai_for_video_analysis/
| false | false |
self
| 1 | null |
ZClip: Adaptive Spike Mitigation for LLM Pre-Training.
| 1 | 2025-04-11T05:59:11 |
akanyaani
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwixbt
| false | null |
t3_1jwixbt
|
/r/LocalLLaMA/comments/1jwixbt/zclip_adaptive_spike_mitigation_for_llm/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '4zFSpwuHsGgFU-aNHpishBrIjMhxFxubrlJiGhWjEZM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3u6h076cb5ue1.png?width=108&crop=smart&auto=webp&s=6412850b3008e7f518803deb6dfbf3d8449bd07f', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/3u6h076cb5ue1.png?width=216&crop=smart&auto=webp&s=dd8bdd810d5dfc4f5f6ce3c661acf2e9b78b65cb', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/3u6h076cb5ue1.png?width=320&crop=smart&auto=webp&s=1dfff4affefe2303bb6e65afe1cfd6d40a7ebd60', 'width': 320}, {'height': 355, 'url': 'https://preview.redd.it/3u6h076cb5ue1.png?width=640&crop=smart&auto=webp&s=b3d0ab4f7aaf1213acbfc50ea5636c7d3c43d9fd', 'width': 640}], 'source': {'height': 376, 'url': 'https://preview.redd.it/3u6h076cb5ue1.png?auto=webp&s=7579592b97ab67b223853a1e92e4ae1d55d390d6', 'width': 676}, 'variants': {}}]}
|
|||
ZClip: Adaptive Spike Mitigation for LLM Pre-Training.
| 1 |
[removed]
| 2025-04-11T06:00:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwiy6b/zclip_adaptive_spike_mitigation_for_llm/
|
akanyaani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwiy6b
| false | null |
t3_1jwiy6b
|
/r/LocalLLaMA/comments/1jwiy6b/zclip_adaptive_spike_mitigation_for_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'sXuNRzgE_m3OyJPUYOM5g1I5cCOwKdjwUhYpK8M96I0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=108&crop=smart&auto=webp&s=22281dfeade15138c65d0fb2ad54f88a536fc3d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=216&crop=smart&auto=webp&s=1ffaafd82602d94941b28be0b8f83a88132a0090', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=320&crop=smart&auto=webp&s=c68b37de113bd63ac8666cc714899f95f246be89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=640&crop=smart&auto=webp&s=274d183e9b355a70139984302f1b6d5200ca2c77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=960&crop=smart&auto=webp&s=ce5fa3de95e9678af362eba018b21c926e35bb99', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=1080&crop=smart&auto=webp&s=a3662c769f04553656e8662013978447cf03f614', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?auto=webp&s=4bfd90ade0743a669b69118cc8abd97f5cf43d5f', 'width': 1200}, 'variants': {}}]}
|
Lmarena.ai boots off llama4 from leaderboard
| 204 |
[https://lmarena.ai/?leaderboard](https://lmarena.ai/?leaderboard)
Related discussion: [https://www.reddit.com/r/LocalLLaMA/comments/1ju5aux/lmarenaai\_confirms\_that\_meta\_cheated/](https://www.reddit.com/r/LocalLLaMA/comments/1ju5aux/lmarenaai_confirms_that_meta_cheated/)
| 2025-04-11T06:01:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwiye4/lmarenaai_boots_off_llama4_from_leaderboard/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwiye4
| false | null |
t3_1jwiye4
|
/r/LocalLLaMA/comments/1jwiye4/lmarenaai_boots_off_llama4_from_leaderboard/
| false | false |
self
| 204 | null |
ZClip: Adaptive Spike Mitigation for LLM Pre-Training.
| 1 |
[removed]
| 2025-04-11T06:02:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwiz6y/zclip_adaptive_spike_mitigation_for_llm/
|
akanyaani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwiz6y
| false | null |
t3_1jwiz6y
|
/r/LocalLLaMA/comments/1jwiz6y/zclip_adaptive_spike_mitigation_for_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'sXuNRzgE_m3OyJPUYOM5g1I5cCOwKdjwUhYpK8M96I0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=108&crop=smart&auto=webp&s=22281dfeade15138c65d0fb2ad54f88a536fc3d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=216&crop=smart&auto=webp&s=1ffaafd82602d94941b28be0b8f83a88132a0090', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=320&crop=smart&auto=webp&s=c68b37de113bd63ac8666cc714899f95f246be89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=640&crop=smart&auto=webp&s=274d183e9b355a70139984302f1b6d5200ca2c77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=960&crop=smart&auto=webp&s=ce5fa3de95e9678af362eba018b21c926e35bb99', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=1080&crop=smart&auto=webp&s=a3662c769f04553656e8662013978447cf03f614', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?auto=webp&s=4bfd90ade0743a669b69118cc8abd97f5cf43d5f', 'width': 1200}, 'variants': {}}]}
|
LLM-Powered Telegram Bot Project
| 1 |
[removed]
| 2025-04-11T06:18:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwj74m/llmpowered_telegram_bot_project/
|
Tough-Clue-4566
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwj74m
| false | null |
t3_1jwj74m
|
/r/LocalLLaMA/comments/1jwj74m/llmpowered_telegram_bot_project/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '7DYT6NJOD51scsxecKbOcOd4NF6z0oNx4f-67tFTHWs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=108&crop=smart&auto=webp&s=8ed8211a20e76a2f23bf06352e82c97894c3dd64', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=216&crop=smart&auto=webp&s=ab813be0185d0128cbd4f83a322f94c8b88b921e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=320&crop=smart&auto=webp&s=b0e515a49657d4db45be220059f5adc0409420c8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=640&crop=smart&auto=webp&s=035ba961c3623008dcccd31c8dde7920f68d50a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=960&crop=smart&auto=webp&s=f9a2f1410158e4fe98104b4f08df5b7e1ffc2ff6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?width=1080&crop=smart&auto=webp&s=06918e2a1624e2649d9465d08f52916dd41d26e1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2hvpZMyph216nmaL7IaaLsTAjGcx767k4S72ggcMA8o.jpg?auto=webp&s=6487b558d70973e158036f423a84741b04a64f89', 'width': 1200}, 'variants': {}}]}
|
Lmarena benchmaxxing
| 1 |
[removed]
| 2025-04-11T06:22:07 |
https://www.reddit.com/gallery/1jwj91c
|
Ok-Abroad2889
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwj91c
| false | null |
t3_1jwj91c
|
/r/LocalLLaMA/comments/1jwj91c/lmarena_benchmaxxing/
| false | false | 1 | null |
|
LLama4Reasoning.Com
| 0 |
LLama4Reasoning.Com IS COMING SO SOON BY METAAI
| 2025-04-11T06:23:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwj9ms/llama4reasoningcom/
|
Open_Needleworker_14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwj9ms
| false | null |
t3_1jwj9ms
|
/r/LocalLLaMA/comments/1jwj9ms/llama4reasoningcom/
| false | false |
self
| 0 | null |
Google benchmaxxing LmArena according to openai dev
| 1 |
[removed]
| 2025-04-11T06:27:13 |
https://www.reddit.com/gallery/1jwjbij
|
Ok-Abroad2889
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwjbij
| false | null |
t3_1jwjbij
|
/r/LocalLLaMA/comments/1jwjbij/google_benchmaxxing_lmarena_according_to_openai/
| false | false | 1 | null |
|
Continual Knowledge Circuits
| 8 |
[https://github.com/zjunlp/dynamicknowledgecircuits](https://github.com/zjunlp/dynamicknowledgecircuits)
Has anyone played with Knowledge Circuits? This one seems crazy, am I right in understanding that it is continually training the model as it consume knowledge?
| 2025-04-11T07:41:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwkc8v/continual_knowledge_circuits/
|
itchykittehs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwkc8v
| false | null |
t3_1jwkc8v
|
/r/LocalLLaMA/comments/1jwkc8v/continual_knowledge_circuits/
| false | false |
self
| 8 |
{'enabled': False, 'images': [{'id': 'SetMgMfbCXjWv8RbxqssQWwbDcdOgNtBZgOTt76RFto', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=108&crop=smart&auto=webp&s=bdbb30c3d9029892a58bcea24ed3c56c680f3843', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=216&crop=smart&auto=webp&s=f2bb1c95700a6e4f83172176b3a9cf4273638c2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=320&crop=smart&auto=webp&s=339a422bc9f4a9998299f7365fba9249100523d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=640&crop=smart&auto=webp&s=38ca8383bfb926975d2e9b7deb65603703afe526', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=960&crop=smart&auto=webp&s=907c63795154048d8920f365cb111f739d2a690e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?width=1080&crop=smart&auto=webp&s=51315c4b61abffdfb4453a18e129c685add7dd96', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CcvV__N2CPzY4mIIuQZap4XURkPm7a6WBhLoTbhipVY.jpg?auto=webp&s=0194a2d24d99ac09a2f316f5724011ca238d2bb4', 'width': 1200}, 'variants': {}}]}
|
How is your experience with local models and Cline, Roo, Goose or Openhands ?
| 1 |
[removed]
| 2025-04-11T07:49:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwkfpl/how_is_your_experience_with_local_models_and/
|
Low-Woodpecker-4522
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwkfpl
| false | null |
t3_1jwkfpl
|
/r/LocalLLaMA/comments/1jwkfpl/how_is_your_experience_with_local_models_and/
| false | false |
self
| 1 | null |
Local Multi Modal Embedding
| 1 |
[removed]
| 2025-04-11T08:08:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwkoyc/local_multi_modal_embedding/
|
disinton
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwkoyc
| false | null |
t3_1jwkoyc
|
/r/LocalLLaMA/comments/1jwkoyc/local_multi_modal_embedding/
| false | false |
self
| 1 | null |
LLM warning
| 0 |
Quit using these things reason it’s not open source is to data mine your thoughts and control the narratives anything they want you to believe. Using it like a mental therapist worst fucking thing you could be doing. Start demanding these tech companies to open source all of it and stop using their platforms. Harvesting your mind. To control you.
| 2025-04-11T08:25:25 |
Joeycan2AI
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwkwtt
| false | null |
t3_1jwkwtt
|
/r/LocalLLaMA/comments/1jwkwtt/llm_warning/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '7PggxZeiQ2IA-Hvj07hZV7lpftLwoL0kcD1F9oYrpmI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/mgas8r3n16ue1.jpeg?width=108&crop=smart&auto=webp&s=0fa6183396927253bb3c54c81e0e55ddf0c85003', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/mgas8r3n16ue1.jpeg?width=216&crop=smart&auto=webp&s=d1f486712cbb6a8bbec622a824e41cfc7ce9a6e0', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/mgas8r3n16ue1.jpeg?width=320&crop=smart&auto=webp&s=92b9b276c884e77ec5c93e3be6f6286491edfc53', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/mgas8r3n16ue1.jpeg?width=640&crop=smart&auto=webp&s=d4c2ef6865df75ee15a16789bddcfae048da8522', 'width': 640}], 'source': {'height': 1792, 'url': 'https://preview.redd.it/mgas8r3n16ue1.jpeg?auto=webp&s=1f41922e235a18233809be1ee38da1a6dcf91515', 'width': 828}, 'variants': {}}]}
|
||
Do you guys maintain your own private test data to evaluate models?
| 9 |
Just curious to get some feedback about how valuable it is to maintain and test models on your own test data, versus relying on popular benchmark platforms - as there is always a risk the test data leaks into the training data, but also a risk that the test data isn’t a good representation of everybody’s own use cases.
| 2025-04-11T08:36:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwl1vs/do_you_guys_maintain_your_own_private_test_data/
|
Thireus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwl1vs
| false | null |
t3_1jwl1vs
|
/r/LocalLLaMA/comments/1jwl1vs/do_you_guys_maintain_your_own_private_test_data/
| false | false |
self
| 9 | null |
LLM for Micro stories
| 1 |
[removed]
| 2025-04-11T08:45:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwl64o/llm_for_micro_stories/
|
Epictetito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwl64o
| false | null |
t3_1jwl64o
|
/r/LocalLLaMA/comments/1jwl64o/llm_for_micro_stories/
| false | false |
self
| 1 | null |
OpenSource TTS Models/Services
| 1 |
[removed]
| 2025-04-11T08:46:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwl6md/opensource_tts_modelsservices/
|
Queasy_Version4524
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwl6md
| false | null |
t3_1jwl6md
|
/r/LocalLLaMA/comments/1jwl6md/opensource_tts_modelsservices/
| false | false |
self
| 1 | null |
Wouldn't it make sense to use torrent?
| 235 |
It just came to my mind that Huggingface is basically a central point for LLM downloads and hosting. What if we just used torrent to download and "host" LLM files?
This would mean faster downloads and less reliance on one singular organization. Also Huggingface wouldn't need a tremendous amount of bandwidth which probably costs quite a lot. And the best part: Everyone with a home server and some spare bandwidth could contribute and help to keep the system stable.
I'd just like to open a discussion about this topic since I think this might be kind of helpful for both LLM hosters and end consumers.
So, what do you think, does this make sense?
| 2025-04-11T08:59:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlcar/wouldnt_it_make_sense_to_use_torrent/
|
Nightslide1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlcar
| false | null |
t3_1jwlcar
|
/r/LocalLLaMA/comments/1jwlcar/wouldnt_it_make_sense_to_use_torrent/
| false | false |
self
| 235 | null |
GPU recommendation for LLM/Gaming/VR 3000+€ budget
| 1 |
[removed]
| 2025-04-11T09:04:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlf1i/gpu_recommendation_for_llmgamingvr_3000_budget/
|
Ok_Host_7754
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlf1i
| false | null |
t3_1jwlf1i
|
/r/LocalLLaMA/comments/1jwlf1i/gpu_recommendation_for_llmgamingvr_3000_budget/
| false | false |
self
| 1 | null |
GPU recommendation for LLM/AI/VR with 3000+€ budget
| 1 |
[removed]
| 2025-04-11T09:11:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlii5/gpu_recommendation_for_llmaivr_with_3000_budget/
|
Ok_Host_7754
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlii5
| false | null |
t3_1jwlii5
|
/r/LocalLLaMA/comments/1jwlii5/gpu_recommendation_for_llmaivr_with_3000_budget/
| false | false |
self
| 1 | null |
Folks any views on using LLMs like Gemma 3 12b 27b for Embeddings ?
| 2 |
Folks here, I was wondering, if we can use Gemma3 12b (inprinciple we can use) for vectorizing the documents for later search, I know there are opensource embeddings like nomin and all mini. I was just wondering if you guys are open for this discussion? Using embedding models like nomic Vs LLMs like Gemma for embeddings.
| 2025-04-11T09:16:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlkw1/folks_any_views_on_using_llms_like_gemma_3_12b/
|
Leather-Departure-38
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlkw1
| false | null |
t3_1jwlkw1
|
/r/LocalLLaMA/comments/1jwlkw1/folks_any_views_on_using_llms_like_gemma_3_12b/
| false | false |
self
| 2 | null |
Open LLM leaderboard is archived, what are the alternatives?
| 31 |
I want a leaderboard for open-source models; the last one, Open LLM Leaderboard, is now archived. What do you use?
| 2025-04-11T09:18:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwllvz/open_llm_leaderboard_is_archived_what_are_the/
|
Initial_Track6190
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwllvz
| false | null |
t3_1jwllvz
|
/r/LocalLLaMA/comments/1jwllvz/open_llm_leaderboard_is_archived_what_are_the/
| false | false |
self
| 31 | null |
👀 here to hear what's happening. 💪 let's go open source AI!
| 0 | 2025-04-11T09:22:49 |
Severin_Suveren
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlo15
| false | null |
t3_1jwlo15
|
/r/LocalLLaMA/comments/1jwlo15/here_to_hear_whats_happening_lets_go_open_source/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '1FKqdwkGicEY2q8c9RX6u9tCKhIuJYTfpIQ1b3QZqJQ', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?width=108&crop=smart&auto=webp&s=ab6dfaaebc180495352fa7499599066114a049d1', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?width=216&crop=smart&auto=webp&s=f0bc68de9f5ce94645385bca8f2e1f920d5fe2c9', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?width=320&crop=smart&auto=webp&s=c8f3e8601da55eaedf7f1208f66af209fba6c811', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?width=640&crop=smart&auto=webp&s=4ef97409ea0dcc207eb95ddf938d7844ec0d484f', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?width=960&crop=smart&auto=webp&s=40652f6277d2fff999d8ccbb17bf139631a6625e', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/ikjsy3yrb6ue1.png?auto=webp&s=b1b62a02f5ea97e2a285b7b41895379d3a999e8e', 'width': 1024}, 'variants': {}}]}
|
|||
Just newb here!
| 1 |
[removed]
| 2025-04-11T09:30:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlryd/just_newb_here/
|
Character-Sand3378
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlryd
| false | null |
t3_1jwlryd
|
/r/LocalLLaMA/comments/1jwlryd/just_newb_here/
| false | false |
self
| 1 | null |
Paper page - OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training
Tokens
| 80 | 2025-04-11T09:38:02 |
https://huggingface.co/papers/2504.07096
|
ab2377
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlvjs
| false | null |
t3_1jwlvjs
|
/r/LocalLLaMA/comments/1jwlvjs/paper_page_olmotrace_tracing_language_model/
| false | false | 80 |
{'enabled': False, 'images': [{'id': '37FqRfe1b1QryZ8UgDgk1oeTRok0UepdSMKPwtQVgWI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=108&crop=smart&auto=webp&s=592d2b55e3c3345c9820dda9b403e0f1b5f5a3b0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=216&crop=smart&auto=webp&s=daae034742e32674e9f4cbfb7384d2ea31aeb31e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=320&crop=smart&auto=webp&s=edb6fb0f31f16307b8e5b54e0459b6979cbbbfe8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=640&crop=smart&auto=webp&s=d8585c02115148b50c8aa1af8e6bbf364cb541b1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=960&crop=smart&auto=webp&s=b161a0f9b809b548528a1145b0a57a42dcc87bba', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?width=1080&crop=smart&auto=webp&s=5ea42de858453bfede364474954907f58e62338c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/VVxJB7KWWo4CLRWMs0X6vQWrqVzjSQnYrxGfyVikjbM.jpg?auto=webp&s=fb704d9414e8c767cc69f35231131710498821de', 'width': 1200}, 'variants': {}}]}
|
||
Meta’s AI research lab is ‘dying a slow death,’ some insiders say—but…
| 299 |
Original paywalled link:
[https://fortune.com/2025/04/10/meta-ai-research-lab-fair-questions-departures-future-yann-lecun-new-beginning](https://fortune.com/2025/04/10/meta-ai-research-lab-fair-questions-departures-future-yann-lecun-new-beginning)
| 2025-04-11T09:42:16 |
https://archive.ph/fY2ND
|
UnforgottenPassword
|
archive.ph
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlxlt
| false | null |
t3_1jwlxlt
|
/r/LocalLLaMA/comments/1jwlxlt/metas_ai_research_lab_is_dying_a_slow_death_some/
| false | false | 299 |
{'enabled': False, 'images': [{'id': 'REwpsL2XJ4OQeSCV_FbzUBjzmHeo2ySO9jzxsgAdXgY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?width=108&crop=smart&auto=webp&s=8f7283a728fb27078a7f1c4a90836a20688efb3c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?width=216&crop=smart&auto=webp&s=0630a1957b82d11804b6915b65f5d18f389bf106', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?width=320&crop=smart&auto=webp&s=8e0f6e51e9cb6864bfc10865f4e2f6d963fe0736', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?width=640&crop=smart&auto=webp&s=c7c50b1f44aaddd11771e00fe683ac087a57f799', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?width=960&crop=smart&auto=webp&s=647ef471270802eb4ae7a1d8496807dd20a33a92', 'width': 960}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/2o1G5emSxIhWAEIHS9O-76Nrl3QaDkBsS0bYLzwXgQI.jpg?auto=webp&s=2f52dd0d1281b1af15a4388a8af9acd30c8632ea', 'width': 1024}, 'variants': {}}]}
|
|
Building and running GenAI models locally just got a massive upgrade 👇
| 1 |
[removed]
| 2025-04-11T09:43:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwlyah/building_and_running_genai_models_locally_just/
|
LemonQuizzy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwlyah
| false | null |
t3_1jwlyah
|
/r/LocalLLaMA/comments/1jwlyah/building_and_running_genai_models_locally_just/
| false | false |
self
| 1 | null |
DeepCoder-14B: Superior Open-Source LLM
| 77 | 2025-04-11T09:59:11 |
https://blog.sonichigo.com/deepcoder-14b-open-source-llm-that-beats-giants
|
sonichigo-1219
|
blog.sonichigo.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwm63r
| false | null |
t3_1jwm63r
|
/r/LocalLLaMA/comments/1jwm63r/deepcoder14b_superior_opensource_llm/
| false | false | 77 |
{'enabled': False, 'images': [{'id': '-V0RQUIHOYlDpw-HyzDor_xletIpE9KheCQ-JJmlI6s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=108&crop=smart&auto=webp&s=25b9c4ea31107ef3bad73dda221a00642253f85d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=216&crop=smart&auto=webp&s=1b9edbfb5353f095c7bd16e49a447e0dfc0ba3ed', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=320&crop=smart&auto=webp&s=0d95b2969f5525e7b6e4eb1072723deeecc84a1c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=640&crop=smart&auto=webp&s=ce17c2dedfda380f36b49232b8ad2df92b800ecb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=960&crop=smart&auto=webp&s=bbea6465f886c473ffa9ee37aaa6077ccd237fdb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?width=1080&crop=smart&auto=webp&s=964b439de5ce0408c23d2a4a2604dd36d7e2de4d', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/gpYKlsAsXO_PBVDFzTm7rzzEIh9LVPBdCM3aQoN3w7Y.jpg?auto=webp&s=9ca410428321b5cfbd58c59b348ffff67c36462f', 'width': 2240}, 'variants': {}}]}
|
||
Can anyone suggest image to text parser for bengali which is opensource?
| 1 |
[removed]
| 2025-04-11T10:44:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwmuk3/can_anyone_suggest_image_to_text_parser_for/
|
Chemical_Analyst_852
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwmuk3
| false | null |
t3_1jwmuk3
|
/r/LocalLLaMA/comments/1jwmuk3/can_anyone_suggest_image_to_text_parser_for/
| false | false |
self
| 1 | null |
Need OpenSource TTS
| 1 |
[removed]
| 2025-04-11T10:46:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwmvnd/need_opensource_tts/
|
Queasy_Version4524
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwmvnd
| false | null |
t3_1jwmvnd
|
/r/LocalLLaMA/comments/1jwmvnd/need_opensource_tts/
| false | false |
self
| 1 | null |
Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)
| 1 |
[removed]
| 2025-04-11T11:09:44 |
https://www.youtube.com/watch?v=aiISDmnODzo
|
SlingingBits
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwn8ly
| false |
{'oembed': {'author_name': 'Slinging Bits', 'author_url': 'https://www.youtube.com/@SlingingBits', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/aiISDmnODzo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/aiISDmnODzo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Llama-4-Maverick-17B-128E-Instruct Benchmark | Mac Studio M3 Ultra (512GB)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jwn8ly
|
/r/LocalLLaMA/comments/1jwn8ly/llama4maverick17b128einstruct_benchmark_mac/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'AF04ur-kDjD5xuP-R3K_853qpif9tm1seO6S1icARP8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=108&crop=smart&auto=webp&s=6df91bd22dd2fcec3b9ca7bfc31e962f7be41159', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=216&crop=smart&auto=webp&s=b4d59d45f34ce13c50114af14bde33e07e93b49e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?width=320&crop=smart&auto=webp&s=e37d54358c31abda84735b8860c3c3360b3c3003', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IxdfUatY2ZdvuPgqEET-1fv62R-9tpl25NV3JAp24YA.jpg?auto=webp&s=3d8919ce998a2bcc2af54713551424e24351bc1c', 'width': 480}, 'variants': {}}]}
|
|
Who tf is qasar alpha?
| 0 |
Who tf is qasar alpha?
| 2025-04-11T12:08:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwo9k9/who_tf_is_qasar_alpha/
|
No_Afternoon_4260
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwo9k9
| false | null |
t3_1jwo9k9
|
/r/LocalLLaMA/comments/1jwo9k9/who_tf_is_qasar_alpha/
| false | false |
self
| 0 | null |
selling manus codes.
| 1 |
[removed]
| 2025-04-11T12:25:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwol7i/selling_manus_codes/
|
PrestigiousEmu4485
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwol7i
| false | null |
t3_1jwol7i
|
/r/LocalLLaMA/comments/1jwol7i/selling_manus_codes/
| false | false |
self
| 1 | null |
Selling manus codes
| 0 |
Dm me, codes sell for 2 euro per code.
| 2025-04-11T12:29:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwonm1/selling_manus_codes/
|
AccomplishedPay7646
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwonm1
| false | null |
t3_1jwonm1
|
/r/LocalLLaMA/comments/1jwonm1/selling_manus_codes/
| false | false |
self
| 0 | null |
Deconstructing agentic AI prompts: some patterns I noticed
| 52 |
Spending some time digging into the system prompts behind agents like v0, Manus, ChatGPT 4o, (...).
It's pretty interesting seeing the common threads emerge – how they define the agent's role, structure complex instructions, handle tool use (often very explicitly), encourage step-by-step planning, and bake in safety rules. Seems like a kind of 'convergent evolution' in prompt design for getting these things to actually work reliably.
Wrote up a more detailed breakdown with examples from the repo if anyone's interested in this stuff:
[awesome-ai-system-prompts](https://www.google.com/url?sa=E&q=https%3A%2F%2Fgithub.com%2Fdontriskit%2Fawesome-ai-system-prompts)
Might be useful if you're building agents or just curious about the 'ghost in the machine'. Curious what patterns others are finding indispensable?
| 2025-04-11T12:34:54 |
https://v.redd.it/5g15kxiu97ue1
|
secopsml
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwormp
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5g15kxiu97ue1/DASHPlaylist.mpd?a=1746966912%2CNjQxMjA0YTliYTIxZjczZDlkMDNiNzRmNTY3MjBlODk3N2QxOWY4MTJhN2ExMmViYWZkMzAzMjY1OWVmNzY3Mg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/5g15kxiu97ue1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/5g15kxiu97ue1/HLSPlaylist.m3u8?a=1746966912%2CZGM3YzI4YmRlN2M0ZDNiNWE0ODIwM2Q1YTA1NDIwZmNmYzdmYWZkZjc4N2Q0OTMzYmJlYjBkMDczZGRkOWE3Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5g15kxiu97ue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 994}}
|
t3_1jwormp
|
/r/LocalLLaMA/comments/1jwormp/deconstructing_agentic_ai_prompts_some_patterns_i/
| false | false | 52 |
{'enabled': False, 'images': [{'id': 'azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=108&crop=smart&format=pjpg&auto=webp&s=e9af721d423b39ef43e2a217038741a2523f1b02', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=216&crop=smart&format=pjpg&auto=webp&s=76d457ff53cbb2202ec21b9172e2798487014443', 'width': 216}, {'height': 231, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=320&crop=smart&format=pjpg&auto=webp&s=1d7df555ef03aa90ce656cb40af56ec5f87d4dd2', 'width': 320}, {'height': 463, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=640&crop=smart&format=pjpg&auto=webp&s=521e390d2049d6aa86fb5abd0823d771bf7ea02a', 'width': 640}, {'height': 695, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=960&crop=smart&format=pjpg&auto=webp&s=52ccaf64273f0ee803a9f685e5a3b8d7e2b8f3ce', 'width': 960}, {'height': 782, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7f3306a46d040cc578daa24df7abaf8a4b2b0546', 'width': 1080}], 'source': {'height': 966, 'url': 'https://external-preview.redd.it/azdsOGd4aXU5N3VlMS-DOok8VecI4VBh-SaZNHm4Aspcxmsyk9I5WC2oHNIS.png?format=pjpg&auto=webp&s=dd0427dd01def76cfdf2ebc6dbb6f71140963dbe', 'width': 1334}, 'variants': {}}]}
|
|
New quantization type: HIGGS
| 1 |
[removed]
| 2025-04-11T12:40:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwov5l/new_quantization_type_higgs/
|
Alex_L1nk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwov5l
| false | null |
t3_1jwov5l
|
/r/LocalLLaMA/comments/1jwov5l/new_quantization_type_higgs/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]}
|
Volunteer or Intern as Machine Learning Engineer / AI Engineer
| 1 |
[removed]
| 2025-04-11T13:30:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwpxcc/volunteer_or_intern_as_machine_learning_engineer/
|
PlaySecure6279
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwpxcc
| false | null |
t3_1jwpxcc
|
/r/LocalLLaMA/comments/1jwpxcc/volunteer_or_intern_as_machine_learning_engineer/
| false | false |
self
| 1 | null |
I Started awesome-a2a for Google's Agent2Agent Protocol - Hoping to Build It with Community Help!
| 1 |
[removed]
| 2025-04-11T13:38:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwq3x6/i_started_awesomea2a_for_googles_agent2agent/
|
gpt-0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwq3x6
| false | null |
t3_1jwq3x6
|
/r/LocalLLaMA/comments/1jwq3x6/i_started_awesomea2a_for_googles_agent2agent/
| false | false | 1 | null |
|
Newbie question: can there be loras for tts?
| 4 |
Hi all. I'm not a coder yet nor do I know the vagueries of how lora works, i.e: if it can only apply to language models or transformers in general. Can you help answer this question? If I hypothetically have the knowledge, can I make a lora for a specific voice or language? Or is that not how it works and I'm just doing the equivalent of saying can I eat fire? Thanks in advance.
| 2025-04-11T13:42:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwq6k2/newbie_question_can_there_be_loras_for_tts/
|
Silver-Champion-4846
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwq6k2
| false | null |
t3_1jwq6k2
|
/r/LocalLLaMA/comments/1jwq6k2/newbie_question_can_there_be_loras_for_tts/
| false | false |
self
| 4 | null |
Would you pay more for smarter AI?
| 1 |
[removed]
| 2025-04-11T13:58:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwqjil/would_you_pay_more_for_smarter_ai/
|
Low_Blackberry_9402
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwqjil
| false | null |
t3_1jwqjil
|
/r/LocalLLaMA/comments/1jwqjil/would_you_pay_more_for_smarter_ai/
| false | false |
self
| 1 | null |
Looking for UI for ollama models , that can execute code and retrieve files from my local machine
| 1 |
[removed]
| 2025-04-11T14:00:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwqlij/looking_for_ui_for_ollama_models_that_can_execute/
|
Natural-Parsley-7769
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwqlij
| false | null |
t3_1jwqlij
|
/r/LocalLLaMA/comments/1jwqlij/looking_for_ui_for_ollama_models_that_can_execute/
| false | false |
self
| 1 | null |
Gemini and I wrote a single HTML file to access gemini locally
| 0 |
It's just as good as using the gemini AI playground thing google has, and it's local.
Features: file uploads, copy code block to clipboard, chat history, model selection, and more
[https://github.com/openconstruct/geminihtml](https://github.com/openconstruct/geminihtml)
| 2025-04-11T14:08:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwqryi/gemini_and_i_wrote_a_single_html_file_to_access/
|
thebadslime
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwqryi
| false | null |
t3_1jwqryi
|
/r/LocalLLaMA/comments/1jwqryi/gemini_and_i_wrote_a_single_html_file_to_access/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'bMqeqopXr0zBDxV2Ez4PT_nbouQmxrCDVRWu7if3rO0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=108&crop=smart&auto=webp&s=b1613d489f5d8b12dcc12de537fca63ef33e441e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=216&crop=smart&auto=webp&s=1ab3600f8f7537842b7c1ba516c27777d170ec58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=320&crop=smart&auto=webp&s=0f593069bae38a5a1bf0816770e1155e2d8346c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=640&crop=smart&auto=webp&s=13461d87d213b6999fe70353037d4ef03bc5abcf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=960&crop=smart&auto=webp&s=c413772fa4838b96370f5863a26f2e2a3261d8ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?width=1080&crop=smart&auto=webp&s=5cc7773223ae3b7cd095fb371d0745f269fa0ea6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0hiYRcHCUwDpDR3inp07_ujloSpw-XBn_oWfRvQfOpo.jpg?auto=webp&s=7613e2d82d03ac7f31f7cbcd9b31db9d704b4564', 'width': 1200}, 'variants': {}}]}
|
Looking for Ollama-Based UI to Execute LLM-Generated Code and Analyze Local CSV Files
| 1 |
[removed]
| 2025-04-11T14:10:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jwqtom/looking_for_ollamabased_ui_to_execute/
|
BatLevel3320
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jwqtom
| false | null |
t3_1jwqtom
|
/r/LocalLLaMA/comments/1jwqtom/looking_for_ollamabased_ui_to_execute/
| false | false |
self
| 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.