title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Advice Needed: Building a 4x RTX 3090 AI Homelab Server – MZ32-AR0 or Switch?
| 1 |
[removed]
| 2025-04-15T10:28:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzogae/advice_needed_building_a_4x_rtx_3090_ai_homelab/
|
mianasifaly
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzogae
| false | null |
t3_1jzogae
|
/r/LocalLLaMA/comments/1jzogae/advice_needed_building_a_4x_rtx_3090_ai_homelab/
| false | false |
self
| 1 | null |
Microsoft has released a fresh 2B bitnet model
| 448 |
>**BitNet b1.58 2B4T**, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale, developed by Microsoft Research.
>Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
[HuggingFace (safetensors)](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T) [BF16 (not published yet)](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16)
[HuggingFace (GGUF)](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-gguf)
[Github](https://github.com/microsoft/BitNet)
| 2025-04-15T11:10:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzp5or/microsoft_has_released_a_fresh_2b_bitnet_model/
|
remixer_dec
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzp5or
| false | null |
t3_1jzp5or
|
/r/LocalLLaMA/comments/1jzp5or/microsoft_has_released_a_fresh_2b_bitnet_model/
| false | false |
self
| 448 |
{'enabled': False, 'images': [{'id': 'QZSIxlGG4nRVvzmhDwx93A7GSTEByHGy9t7mCXvTaF4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=108&crop=smart&auto=webp&s=0238460ec071c96e8db41fd174c16ecb21c5db25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=216&crop=smart&auto=webp&s=c71acb89d83a011c1e23fb29daef9ec2f78f727c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=320&crop=smart&auto=webp&s=5df6bf6123497123d6ebfaf85eedfec0d1f7fb8d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=640&crop=smart&auto=webp&s=cbf64279a949dc08ffd8224962b3506e28b1c66f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=960&crop=smart&auto=webp&s=45f6ab53bc551476c55af48791bb9812e692404a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?width=1080&crop=smart&auto=webp&s=5bafcdd1dc1fed6018d4a2ac13f500b304dd0c85', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zHyZ4PIVBXSbgnkl6LIxi4dUhzGANFn4DCygPicHeYQ.jpg?auto=webp&s=1a0ddb1c60f623e3a08e29d1fc98b2bc074695e7', 'width': 1200}, 'variants': {}}]}
|
Finetuning a small model for a specific programming language
| 1 |
[removed]
| 2025-04-15T11:24:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzpenc/finetuning_a_small_model_for_a_specific/
|
Existing-Ad8067
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzpenc
| false | null |
t3_1jzpenc
|
/r/LocalLLaMA/comments/1jzpenc/finetuning_a_small_model_for_a_specific/
| false | false |
self
| 1 | null |
It's been a while since Cohere shipped something.
| 1 |
[removed]
| 2025-04-15T11:27:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzpgf0/its_been_a_while_since_cohere_shipped_something/
|
Dark_Fire_12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzpgf0
| false | null |
t3_1jzpgf0
|
/r/LocalLLaMA/comments/1jzpgf0/its_been_a_while_since_cohere_shipped_something/
| false | false |
self
| 1 | null |
It's been a while since Cohere shipped something.
| 1 |
[removed]
| 2025-04-15T11:28:14 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzpgw5
| false | null |
t3_1jzpgw5
|
/r/LocalLLaMA/comments/1jzpgw5/its_been_a_while_since_cohere_shipped_something/
| false | false |
default
| 1 | null |
||
Any draft model that works (well?) with the March release of QwQ-32B?
| 13 |
Hi all,
I'm trying to run the March release of QwQ-32B using llama.cpp, but struggling to find a compatible draft model. I have tried several GGUFs from HF, and keep getting the following error:
the draft model 'xxxxxxxxxx.gguf' is not compatible with the target model '/models/QwQ-32B.Q8_0.gguf'
For reference, I'm using unsloth/QwQ-32B-GGUF.
This is how I'm running llama.cpp (dual E5-2699v4, 44 physical cores):
llama-server -m /models/QwQ-32B.Q8_0.gguf
-md /models/qwen2.5-1.5b-instruct-q8_0.gguf
--sampling-seq k --top-k 1 -fa --temp 0.0 -sm row --no-mmap
-ngl 99 -ngld 99 --port 9005 -c 50000
--draft-max 16 --draft-min 5 --draft-p-min 0.5
--override-kv tokenizer.ggml.add_bos_token=bool:false
--cache-type-k q8_0 --cache-type-v q8_0
--device CUDA2,CUDA3 --device-draft CUDA3 --tensor-split 0,0,1,1
--slots --metrics --numa distribute -t 40 --no-warmup
I have tried 5 different Qwen2.5-1.5B-Instruct models all without success.
| 2025-04-15T11:38:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzpniw/any_draft_model_that_works_well_with_the_march/
|
FullstackSensei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzpniw
| false | null |
t3_1jzpniw
|
/r/LocalLLaMA/comments/1jzpniw/any_draft_model_that_works_well_with_the_march/
| false | false |
self
| 13 |
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
|
Roast My AI Server Setup
| 1 |
[removed]
| 2025-04-15T12:17:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzqegs/roast_my_ai_server_setup/
|
PepperNoMo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzqegs
| false | null |
t3_1jzqegs
|
/r/LocalLLaMA/comments/1jzqegs/roast_my_ai_server_setup/
| false | false |
self
| 1 | null |
Working with multiple projects in Cursor AI – current best practices?
| 0 |
Hi everyone,
I’ve been using Cursor AI for a few months now and I’m curious how others are managing multiple projects within the same workspace. My use case involves building and maintaining mobile apps (iOS and soon Android), and I often work on different codebases in parallel.
A few months ago, I noticed that the best way to avoid confusion was to:
* Load only one project into the workspace at a time
* Use a separate chat tab/agent for each subproblem
* Clear the workspace before loading another project
The main issue back then was that Cursor sometimes mixed up file paths or edited the wrong parts of the code when multiple projects were present.
Since there have been multiple updates recently, I’d like to know:
* Has multi-project handling improved?
* Can Cursor now handle multiple projects simultaneously in a stable way?
* Do you have a clean workflow for jumping between codebases without confusing the AI agent?
Appreciate any shared experiences or updated best practices!
| 2025-04-15T12:38:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzqtsq/working_with_multiple_projects_in_cursor_ai/
|
Creepy_Virus231
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzqtsq
| false | null |
t3_1jzqtsq
|
/r/LocalLLaMA/comments/1jzqtsq/working_with_multiple_projects_in_cursor_ai/
| false | false |
self
| 0 | null |
ChatML interface is not suitable for agentic systems?
| 1 |
[removed]
| 2025-04-15T12:39:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzquec/chatml_interface_is_not_suitable_for_agentic/
|
Grigorij_127
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzquec
| false | null |
t3_1jzquec
|
/r/LocalLLaMA/comments/1jzquec/chatml_interface_is_not_suitable_for_agentic/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'tG3g3tI-bBUIbkQUa_ONjkG_S013dETXLQSyKsdfIRE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=108&crop=smart&auto=webp&s=b134eea29ee20c4203463956c0cf78f9d19d49ca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=216&crop=smart&auto=webp&s=06ace43f4eb3e2ec0c7d6b08daa4adbd6446053c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=320&crop=smart&auto=webp&s=b23054fb58b02299c3cb7236e197b9e9cfc5dd7b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=640&crop=smart&auto=webp&s=4f0add883255d7619831a80331e3bb2559d12ece', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=960&crop=smart&auto=webp&s=d6efb9901f56a8f5ca51a1aa7a81ea23da9038b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?width=1080&crop=smart&auto=webp&s=508802575a6e86d877b288ae673feffecce1bf50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/b978DOl5phNzku69QzBum1GYDmMIol3vVcmFoS-YJ4g.jpg?auto=webp&s=894d80e9d9d21eb01932deeec836d2ba0918305b', 'width': 1200}, 'variants': {}}]}
|
Handling Dynamic File Uploads in LlamaIndex RAG Pipeline
| 1 |
[removed]
| 2025-04-15T12:46:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzr03w/handling_dynamic_file_uploads_in_llamaindex_rag/
|
Imaginary-File-453
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzr03w
| false | null |
t3_1jzr03w
|
/r/LocalLLaMA/comments/1jzr03w/handling_dynamic_file_uploads_in_llamaindex_rag/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'fmoJIYAvxS16--jtGWDBxZ2Ej7jd8BjK5V1Yevc6sBk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=108&crop=smart&auto=webp&s=f151efaedca91f7f2bb2a833c8935d9bcd2f860c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=216&crop=smart&auto=webp&s=b2b6cf16f06e8cdc9cf633a11842eb79af505cc3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=320&crop=smart&auto=webp&s=45653d1ddb2a48ecab9380d62ccf58bb36f3568b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=640&crop=smart&auto=webp&s=b77344f9e6b032733279f083a39a58fb17c592b3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=960&crop=smart&auto=webp&s=ee2a89a141bf6b438fa15aaeefffcff283fbdedf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?width=1080&crop=smart&auto=webp&s=3bb4e211396e378fd121b2db270b73ecd0081d41', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NE1jaMjifVBgbbeKw2zSnbPOwwEYAUdJ8PRqA8NNT9k.jpg?auto=webp&s=1f49d1c29dffb1fabd6ee62b38ea5739fcfbe4a5', 'width': 1200}, 'variants': {}}]}
|
Mistral Libraries!
| 63 |
Current support for PDF, DOCX, PPTX, CSV, TXT, MD, XLSX
Up to 100 files, 100MB per file
Waiting on the official announcement...
| 2025-04-15T13:02:30 |
SufficientRadio
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzrc5r
| false | null |
t3_1jzrc5r
|
/r/LocalLLaMA/comments/1jzrc5r/mistral_libraries/
| false | false | 63 |
{'enabled': True, 'images': [{'id': 'cq9_2CoaI7LBaA9nNgOi1rdoML8supJvVytUeKN-rrg', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=108&crop=smart&auto=webp&s=f13e68344d190c9a9ea20c1db05890f44eba1b0a', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=216&crop=smart&auto=webp&s=8c33daff2addf764da38d057d499a8e455d19dc8', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=320&crop=smart&auto=webp&s=14feace8394eec7ee2e994c871c16682ce057e85', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=640&crop=smart&auto=webp&s=2c05f90fdbf3aee13a62dcd1f96854260c11d304', 'width': 640}, {'height': 408, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=960&crop=smart&auto=webp&s=e0d45c940a1856c9dbbf809ac5e5eb924405c1c1', 'width': 960}, {'height': 459, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?width=1080&crop=smart&auto=webp&s=8c84a1e372e5b26e489d5a8ce7eb2f43525d0880', 'width': 1080}], 'source': {'height': 714, 'url': 'https://preview.redd.it/r7ae07pgxzue1.png?auto=webp&s=c46bb3c303a32f47dbd901453d4853efaa81d293', 'width': 1680}, 'variants': {}}]}
|
||
How much does CPU matter in a CPU-only setup?
| 0 |
Hi. I hope the title does not look very weird!
I'm looking to buy a small server for (almost) sole purpose of serving an LLM API from it. It will not have a GPU, and I'm aiming/hoping for a speed of 10 to 15 tokens per second.
Now, to me it is obvious that RAM is the more important factor here: If you cannot fit a model in the RAM, it's fully off the table. Then there is the RAM speed of course, DDR4 vs. DDR5 and above etc.
But what role does the CPU play here? Does it significantly affect performance (i.e. tokens/s) for a fixed RAM amount and throughput?
More concretely, I have seen an interesting offer for a server with 64GB of RAM, but only a Core i3 processor. In theory, such a machine should be able to run e.g. 70B quantised models (or not?), but will it be practically unusable?
Should I prefer a machine with 32GB of RAM but a better CPU, e.g. a Xeon? Does the number of cores (physical/virtual) matter a lot?
Currently, I run Gemma2 9B on a (pretty low-end) VPS machine with 8GB of RAM and 8 CPU cores. The speed is about 12 tokens per second, which I am happy with. I don't know how much those 8 cores affect performance, though.
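As a rough sanity check (not a benchmark): CPU-only decoding is usually memory-bandwidth bound, so tokens per second is roughly memory bandwidth divided by the bytes read per token, which is about the size of the quantised weights. A minimal back-of-the-envelope sketch, with purely illustrative bandwidth and model-size numbers:
```python
# Rough upper bound for CPU-only decode speed:
# tokens/s ≈ memory bandwidth / bytes touched per token (≈ quantised model size).
def est_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Illustrative assumptions: dual-channel DDR4-3200 ≈ 50 GB/s,
# a 70B model at Q4 ≈ 40 GB, a 9B model at Q4 ≈ 5.5 GB.
print(est_tokens_per_s(50, 40))   # ≈ 1.2 tok/s -> a 70B model would likely feel unusable
print(est_tokens_per_s(50, 5.5))  # ≈ 9 tok/s   -> in the ballpark of the Gemma2 9B experience above
```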
Many thanks.
| 2025-04-15T13:31:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzrypq/how_much_does_cpu_matter_in_a_cpuonly_setup/
|
ihatebeinganonymous
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzrypq
| false | null |
t3_1jzrypq
|
/r/LocalLLaMA/comments/1jzrypq/how_much_does_cpu_matter_in_a_cpuonly_setup/
| false | false |
self
| 0 | null |
Ollama Appreciation Post
| 0 |
Since there was a separate thread arbitrarily attacking and castigating an open source project whose crime was... creating open source software in full compliance with licensing of dependencies -- just wanted to create a thread shouting out the hard work of Ollama contributors.
I do not think Ollama is perfect software by any stretch but I also don't think any software is perfect.
That said, it has revolutionized the home LLM engine scene. I'm a software developer very used to complex software but I very much enjoy Ollama because **the software gets out of your way** and makes your projects and applications the focus, not your LLM runner.
Just wanted to send a note of appreciation out to the cosmos.
Remember friends, starting a targeted bullying campaign against people who are creating free and open source software is inherently wrong, and we should strive to be better.
| 2025-04-15T13:50:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzsdr8/ollama_appreciation_post/
|
BumbleSlob
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzsdr8
| false | null |
t3_1jzsdr8
|
/r/LocalLLaMA/comments/1jzsdr8/ollama_appreciation_post/
| false | false |
self
| 0 | null |
Nvidia releases UltraLong-8B model with context lengths of 1, 2, or 4 million tokens
| 183 | 2025-04-15T14:04:10 |
https://arxiv.org/abs/2504.06214
|
throwawayacc201711
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzsp5r
| false | null |
t3_1jzsp5r
|
/r/LocalLLaMA/comments/1jzsp5r/nvidia_releases_ultralong8b_model_with_context/
| false | false |
default
| 183 | null |
|
Overhyped Claude 2.7 and Gemini 2.5
| 1 |
[removed]
| 2025-04-15T14:09:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzst76/overhyped_claude_27_and_gemini_25/
|
TheKotleta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzst76
| false | null |
t3_1jzst76
|
/r/LocalLLaMA/comments/1jzst76/overhyped_claude_27_and_gemini_25/
| false | false |
self
| 1 | null |
Is MCP getting overlooked?
| 0 |
What's going on? Am I the only one who thinks MCP's capabilities are being overlooked too much? I know a lot of people are diving into MCP at the moment, but I feel like it hasn't made a really big splash, despite being (I think) close to revolutionary.
Am I missing or misinterpreting something? What do you think about it?
| 2025-04-15T14:18:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzt115/is_mcp_getting_overlooked/
|
Foreign_Lead_3582
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzt115
| false | null |
t3_1jzt115
|
/r/LocalLLaMA/comments/1jzt115/is_mcp_getting_overlooked/
| false | false |
self
| 0 | null |
Can this laptop run local AI models well?
| 0 |
laptop is
Dell Precision 7550
specs
Intel Core i7-10875H
NVIDIA Quadro RTX 5000 16GB vram
32GB RAM, 512GB
Can it run local AI models well, such as DeepSeek?
| 2025-04-15T14:31:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jztd3r/can_this_laptop_run_local_ai_models_well/
|
Askmasr_mod
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jztd3r
| false | null |
t3_1jztd3r
|
/r/LocalLLaMA/comments/1jztd3r/can_this_laptop_run_local_ai_models_well/
| false | false |
self
| 0 | null |
I created an app that allows you to use the OpenAI API without an API key (through a desktop app)
| 140 |
https://i.redd.it/rh6ghzpkn0ve1.gif
I created an open source Mac app that mocks the usage of the OpenAI API by routing the messages to the ChatGPT desktop app, so it can be used without an API key.
I made it for personal reasons but I think it may benefit you. I know the purpose of the app and the API is very different, but I was using it just for personal stuff and automations.
You can simply change the API base (like if you are using Ollama) and select any of the models that you can access from the ChatGPT app:
```python
from openai import OpenAI

# No real key is needed, since requests are routed to the ChatGPT desktop app.
client = OpenAI(api_key="not-needed", base_url="http://127.0.0.1:11435/v1")

completion = client.chat.completions.create(
    model="gpt-4o-2024-05-13",
    messages=[
        {"role": "user", "content": "How many r's in the word strawberry?"},
    ],
)
print(completion.choices[0].message)
```
[GitHub Link](https://github.com/0ssamaak0/MackingJAI)
It's only available as dmg now but I will try to do a brew package soon.
| 2025-04-15T15:27:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzuqpq/i_created_an_app_that_allows_you_use_openai_api/
|
0ssamaak0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzuqpq
| false | null |
t3_1jzuqpq
|
/r/LocalLLaMA/comments/1jzuqpq/i_created_an_app_that_allows_you_use_openai_api/
| false | false | 140 | null |
|
Local personal LLM for Macbook air M4
| 1 |
[removed]
| 2025-04-15T15:44:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzv62x/local_personal_llm_for_macbook_air_m4/
|
Aggravating-Grade158
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzv62x
| false | null |
t3_1jzv62x
|
/r/LocalLLaMA/comments/1jzv62x/local_personal_llm_for_macbook_air_m4/
| false | false |
self
| 1 | null |
When will Qwen 3 be released? Glm 4 0414?
| 1 |
[removed]
| 2025-04-15T16:03:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzvn4q/when_will_qwen_3_be_released_glm_4_0414/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzvn4q
| false | null |
t3_1jzvn4q
|
/r/LocalLLaMA/comments/1jzvn4q/when_will_qwen_3_be_released_glm_4_0414/
| false | false |
self
| 1 | null |
Experience with V100 sxm2 with PCI adapter
| 3 |
I'm thinking about selling my single 4090 and getting two 32GB V100 SXM2s, installing them with PCIe adapters (I don't have a server board).
Is there anyone who has done this and can share their experience?
| 2025-04-15T16:10:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzvti3/experience_with_v100_sxm2_with_pci_adapter/
|
swiss_aspie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzvti3
| false | null |
t3_1jzvti3
|
/r/LocalLLaMA/comments/1jzvti3/experience_with_v100_sxm2_with_pci_adapter/
| false | false |
self
| 3 | null |
When will Qwen 3 be released? GLM 4 0414?
| 1 |
[removed]
| 2025-04-15T16:14:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzvxco/when_will_qwen_3_be_released_glm_4_0414/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzvxco
| false | null |
t3_1jzvxco
|
/r/LocalLLaMA/comments/1jzvxco/when_will_qwen_3_be_released_glm_4_0414/
| false | false |
self
| 1 | null |
Is there a local AI that can do image to video?
| 1 |
[removed]
| 2025-04-15T16:16:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzvyks/is_there_a_local_ai_that_can_do_image_to_video/
|
christian7670
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzvyks
| false | null |
t3_1jzvyks
|
/r/LocalLLaMA/comments/1jzvyks/is_there_a_local_ai_that_can_do_image_to_video/
| false | false |
self
| 1 | null |
Which is the best AI model right now for social media writing?
| 0 |
There are so many models that I'm confused, please help!
| 2025-04-15T16:20:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzw28p/which_is_the_best_ai_model_right_now_for_social/
|
No_Macaroon_7608
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzw28p
| false | null |
t3_1jzw28p
|
/r/LocalLLaMA/comments/1jzw28p/which_is_the_best_ai_model_right_now_for_social/
| false | false |
self
| 0 | null |
What is the difference between token counting with Sentence Transformers and using AutoTokenizer for embedding models?
| 1 |
Hey guys!
I'm working on chunking some documents, and since I don't have any flexibility when it comes to the embedding model, I need to adapt my chunking strategy to the model's max token size.
To do this I need to count the tokens in the text. I noticed that there seem to be two common approaches for counting tokens: one using methods provided by Sentence Transformers and the other using the model’s own tokenizer via Hugging Face's AutoTokenizer.
Could someone explain the differences between these two methods? Will I get different results, or the same?
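For reference, a minimal sketch of the two counting approaches (the model name below is just an example, not necessarily the one being used); Sentence Transformers exposes the same underlying Hugging Face tokenizer, so the counts should normally agree as long as special tokens are handled consistently:
```python
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer

model_name = "sentence-transformers/all-MiniLM-L6-v2"  # example model, swap in the real one
text = "Some chunk of a document whose token length I want to check."

# Approach 1: via Sentence Transformers (uses the model's own underlying HF tokenizer)
st_model = SentenceTransformer(model_name)
st_tokens = st_model.tokenizer(text, add_special_tokens=True)["input_ids"]

# Approach 2: via AutoTokenizer directly
hf_tok = AutoTokenizer.from_pretrained(model_name)
hf_tokens = hf_tok(text, add_special_tokens=True)["input_ids"]

print(len(st_tokens), len(hf_tokens))  # expected to match for the same checkpoint
print(st_model.max_seq_length)         # the model's effective max sequence length
```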
Any insights on this would be really helpful!
| 2025-04-15T16:23:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzw57n/what_is_the_difference_between_token_counting/
|
Parking_Marzipan_693
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzw57n
| false | null |
t3_1jzw57n
|
/r/LocalLLaMA/comments/1jzw57n/what_is_the_difference_between_token_counting/
| false | false |
self
| 1 | null |
Unlearning Alignment
| 1 |
[removed]
| 2025-04-15T16:32:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzwczs/unlearning_alignment/
|
Fuzzy-Attitude-6183
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzwczs
| false | null |
t3_1jzwczs
|
/r/LocalLLaMA/comments/1jzwczs/unlearning_alignment/
| false | false |
self
| 1 | null |
An extensive open-source collection of RAG implementations with many different strategies
| 98 |
Hi all,
Sharing a repo I was working on and apparently people found it helpful (over 14,000 stars).
It’s open-source and includes 33 strategies for RAG, including tutorials, and visualizations.
This is great learning and reference material.
Open issues, suggest more strategies, and use as needed.
Enjoy!
[https://github.com/NirDiamant/RAG_Techniques](https://github.com/NirDiamant/RAG_Techniques)
| 2025-04-15T16:45:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzwoci/an_extensive_opensource_collection_of_rag/
|
Nir777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzwoci
| false | null |
t3_1jzwoci
|
/r/LocalLLaMA/comments/1jzwoci/an_extensive_opensource_collection_of_rag/
| false | false |
self
| 98 | null |
Help Needed
| 1 |
Hello,
I am tuning Qwen2.5-7B-Instruct-bnb-4bit for a classification task with LoRA. I have around 3k training examples. When making predictions on the test data after tuning, it generates gibberish characters roughly 4 out of 10 times. Any idea how to deal with that?
These are the PEFT config and training arguments:
```python
from unsloth import FastLanguageModel, is_bfloat16_supported
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # Choose any number > 0! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,  # Supports any, but = 0 is optimized
    bias = "none",     # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 16,
    max_grad_norm = 0.3,
    num_train_epochs = 3,
    warmup_steps = 5,
    # num_train_epochs = 1,  # Set this for 1 full training run.
    # max_steps = 60,
    learning_rate = 2e-4,
    fp16 = not is_bfloat16_supported(),
    bf16 = is_bfloat16_supported(),
    logging_steps = 5,
    optim = "adamw_8bit",
    weight_decay = 0.01,
    lr_scheduler_type = "linear",
    seed = 3407,
    output_dir = "twi-qwen-ft",
    # report_to = "none",  # Use this for WandB etc
)
```
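A minimal, hypothetical decoding sketch (assuming the `model`, `tokenizer`, and a formatted `prompt` from the setup above) that could help check whether the gibberish comes from sampling settings rather than from the tuning itself:
```python
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)  # switch the tuned model into inference mode

prompt = "..."  # placeholder: one test input, formatted exactly as during training
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=16,                    # classification labels should be short
    do_sample=False,                      # greedy decoding removes sampling noise
    eos_token_id=tokenizer.eos_token_id,  # stop at end-of-sequence
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```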
| 2025-04-15T16:50:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzwsxk/help_needed/
|
prod-v03zz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzwsxk
| false | null |
t3_1jzwsxk
|
/r/LocalLLaMA/comments/1jzwsxk/help_needed/
| false | false |
self
| 1 | null |
TinyLlama is too verbose, looking for concise LLM alternatives for iOS (MLXLLM)
| 1 |
Hey folks! I'm new to LocalLLaMAs and just integrated `TinyLlama-1.1B-Chat-v1.0-4bit` into my iOS app using the MLXLLM Swift framework. It works, but it's way too verbose. I just want short, effective responses that stop when the question is answered.
I previously tried Gemma, but it kept generating random Cyrillic characters, so I dropped it.
Any tips on making TinyLlama more concise? Or suggestions for alternative models that work well with iPhone-level memory (e.g. iPhone 12 Pro)?
Thanks in advance!
| 2025-04-15T16:52:04 |
adonztevez
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzwudz
| false | null |
t3_1jzwudz
|
/r/LocalLLaMA/comments/1jzwudz/tinyllama_is_too_verbose_looking_for_concise_llm/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'npouWR98R_DvwmagoJYQ1bNkTZlIz6pT7G0fAQFrnlA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=108&crop=smart&auto=webp&s=fac30fdb39b7831d81c7d49318e7b6c5e8c500d8', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=216&crop=smart&auto=webp&s=07129767b38ec27c64ab015e86e7487cd167b056', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=320&crop=smart&auto=webp&s=719421e03a8581e24ca3acf4827bbf91c43df8c8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=640&crop=smart&auto=webp&s=d3c23c000e6edcaaab7adbf22561ca00e6a01578', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=960&crop=smart&auto=webp&s=ccb080557b8dc6d0e085ab01d144dffd7a9dec27', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?width=1080&crop=smart&auto=webp&s=a41e2d38b2138e494e5628772097c7c94358619b', 'width': 1080}], 'source': {'height': 2778, 'url': 'https://preview.redd.it/njgjkolm31ve1.jpeg?auto=webp&s=662936c3d2cc6b4266870cee8bd6fd7aeb1bd38c', 'width': 1284}, 'variants': {}}]}
|
||
From Thought to Action: Exploring Tool Call for Local AI Autonomy on mobile
| 1 |
Hello everyone,
I'm the developer of d.ai, an offline AI assistant for Android that runs language models locally—Gemma, Mistral, Phi, LLaMA, and now Hugging Face GGUFs via llama.cpp.
I'm currently working on a feature called Tool Call. The idea is to enable local models to execute predefined tools or functions on the device—bridging the gap between reasoning and action, entirely offline.
This could include simple utilities like reading files, setting reminders, or launching apps. But it could also extend into more creative or complex use cases: generating content for games, managing media, triggering simulations, or interacting with other apps.
My goal is to keep the system lightweight, private, and flexible—but open enough for diverse experimentation.
What kinds of tools or interactions would you find meaningful or fun to enable through a local AI on your phone?
I’m especially interested in use cases beyond productivity—gaming, storytelling, custom workflows… anything that comes to mind.
Open to suggestions and directions. Thanks for reading.
| 2025-04-15T17:15:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzxf3h/from_thought_to_action_exploring_tool_call_for/
|
dai_app
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzxf3h
| false | null |
t3_1jzxf3h
|
/r/LocalLLaMA/comments/1jzxf3h/from_thought_to_action_exploring_tool_call_for/
| false | false |
self
| 1 | null |
Ragie on “RAG is Dead”: What the Critics Are Getting Wrong… Again
| 59 |
Hey all,
With the release of Llama 4 Scout and its 10 million token context window, the “RAG is dead” critics have started up again, but I think they're missing the point.
RAG isn’t dead... long context windows enable exciting new possibilities, but they complement RAG rather than replace it. I went deep and wrote a blog post on the latency, cost, and accuracy tradeoffs of stuffing tokens into context vs using RAG, because I've been getting questions from friends and colleagues about the subject.
I would love to get your thoughts.
[https://www.ragie.ai/blog/ragie-on-rag-is-dead-what-the-critics-are-getting-wrong-again](https://www.ragie.ai/blog/ragie-on-rag-is-dead-what-the-critics-are-getting-wrong-again)
| 2025-04-15T17:27:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzxpzx/ragie_on_rag_is_dead_what_the_critics_are_getting/
|
bob_at_ragie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzxpzx
| false | null |
t3_1jzxpzx
|
/r/LocalLLaMA/comments/1jzxpzx/ragie_on_rag_is_dead_what_the_critics_are_getting/
| false | false |
self
| 59 |
{'enabled': False, 'images': [{'id': 'lJuwuZS0Gr8TYvPMAAHQjSgmECrKAyKqnyIDqkMu8v0', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=108&crop=smart&auto=webp&s=1f4e0c3d6ce49b03131e1dfbd0fcedc96a479441', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=216&crop=smart&auto=webp&s=503a7f3c87fc60cb94e2e654981cb486138e3c2c', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=320&crop=smart&auto=webp&s=779eb53543a2f30b2880f65dc27e4b84dea7db62', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=640&crop=smart&auto=webp&s=112a0963ff0f7ca26c42bb72fa7eea24ccba1713', 'width': 640}, {'height': 507, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=960&crop=smart&auto=webp&s=eba7fd0eff94dbb8c6fa62e279dc981448f75612', 'width': 960}, {'height': 570, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?width=1080&crop=smart&auto=webp&s=4dccf11189f257a75078b50ed2e200cc3bf7945a', 'width': 1080}], 'source': {'height': 1268, 'url': 'https://external-preview.redd.it/qcxUiR7hmfW0fTVBwiZyy2meeRlTviXjMnGpLLA9XBg.jpg?auto=webp&s=f567e7d5c28aa382e52406fb8e491a573f717e7a', 'width': 2400}, 'variants': {}}]}
|
How to run LLaMA 3.2 1B or 3B on the Neural Engine (Mac Mini M4 and iPhone 12 Pro)? Beginner in AI
| 0 |
Hi everyone!
I’m a beginner in AI but really interested in running LLaMA models locally (especially offline use). I’d like to know if it’s possible — and how — to run **LLaMA 3.2 (1B or 3B)** using **Apple’s Neural Engine (ANE)** on the following devices:
• My **Mac Mini M4**
• My **iPhone 12 Pro**
**What I want:**
• To take full advantage of the **Neural Engine**, not just CPU/GPU.
• Have fast and smooth response times for simple local chatbot/personal assistant use.
• Stay **offline**, no cloud APIs.
I’ve heard of tools like **llama.cpp**, **MLX**, **MPS**, and **CoreML**, but I’m not sure which ones really use the Neural Engine — and which are beginner-friendly.
**My questions:**
1. Is there a **LLaMA 3.2 1B or 3B model** available or convertible to **CoreML** that can run on the ANE?
2. Are there any up-to-date guides/tutorials to set this up **locally with Apple hardware acceleration**?
Thanks a lot in advance to anyone who takes the time to help! 🙏
| 2025-04-15T17:39:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzy18b/how_to_run_llama_32_1b_or_3b_on_the_neural_engine/
|
Valtra_Power
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzy18b
| false | null |
t3_1jzy18b
|
/r/LocalLLaMA/comments/1jzy18b/how_to_run_llama_32_1b_or_3b_on_the_neural_engine/
| false | false |
self
| 0 | null |
Apple Silicon Docker LLMs
| 1 |
Docker for Mac version 4.40 adds the "model" command to the Docker CLI ("docker model pull/ls/run") with the Ollama engine; but as far as I can tell, tensor hardware acceleration is, so far, only available on Windows and not Apple Silicon... or am I missing something in the release notes?
Under GPU support, http://docs.docker.com/desktop/features/model-runner/ says only WIN+NVIDIA.
Available models for "docker model pull" are browsable here: https://hub.docker.com/u/ai
| 2025-04-15T17:41:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzy32i/apple_silicon_docker_llms/
|
neurostream
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzy32i
| false | null |
t3_1jzy32i
|
/r/LocalLLaMA/comments/1jzy32i/apple_silicon_docker_llms/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'n9qmXbV-BE3CNHd71ZHSvCgyTd6lGWlccyCYMZP6RsE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=108&crop=smart&auto=webp&s=4d45a24d4c645e4bc1cede3072059a520f570293', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=216&crop=smart&auto=webp&s=170d2d51640aaad766f746848dcbb57f9d52b744', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=320&crop=smart&auto=webp&s=a4acb7593b8be3528df2a34cc46f80e5925a3dc0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=640&crop=smart&auto=webp&s=5f409d1e185b200f9f66667f94270c1c343bd233', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=960&crop=smart&auto=webp&s=7f294e93bc4919d082ecbd46f957bb0f4311ed91', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?width=1080&crop=smart&auto=webp&s=2b89850dae692939e17df1a3c97a3152f77a1466', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/jl2YO8zzZy_b6y8qABno1HxjXFe0HVY7BoM22us5XT0.jpg?auto=webp&s=cc36557ba9919848cbbf3fb3a25503c84299dd91', 'width': 2400}, 'variants': {}}]}
|
Mistral Nemo vs Gemma3 12b q4 for office/productivity
| 17 |
What's the best model for productivity? As an office assistant, replying to emails, and so on, in your opinion?
| 2025-04-15T17:51:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzybot/mistral_nemo_vs_gemma3_12b_q4_for/
|
No-Report-1805
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzybot
| false | null |
t3_1jzybot
|
/r/LocalLLaMA/comments/1jzybot/mistral_nemo_vs_gemma3_12b_q4_for/
| false | false |
self
| 17 | null |
VL-Rethinker, Open Weight SOTA 72B VLM that surpasses o1
| 42 | 2025-04-15T17:54:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzyeak/vlrethinker_open_weight_sota_72b_vlm_that/
|
TKGaming_11
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzyeak
| false | null |
t3_1jzyeak
|
/r/LocalLLaMA/comments/1jzyeak/vlrethinker_open_weight_sota_72b_vlm_that/
| false | false | 42 | null |
||
Nvidia 5060 Ti 16 GB VRAM for $429. Yay or nay?
| 208 |
"These new graphics cards are based on Nvidia's GB206 die. Both RTX 5060 Ti configurations use the same core, with the only difference being memory capacity. There are 4,608 CUDA cores – up 6% from the 4,352 cores in the RTX 4060 Ti – with a boost clock of 2.57 GHz. They feature a **128-bit memory bus** utilizing 28 Gbps GDDR7 memory, which should deliver **448 GB/s of bandwidth**, regardless of whether you choose the 16GB or 8GB version.
Nvidia didn't confirm this directly, but we expect a PCIe 5.0 x8 interface. They did, however, confirm full DisplayPort 2.1b UHBR20 support." [TechSpot](https://www.techspot.com/news/107541-nvidia-launches-geforce-rtx-5060-series-three-new.html)
Assuming these will be supply constrained / tariffed, I'm guesstimating +20% MSRP for actual street price so it might be closer to $530-ish.
Does anybody have good expectations for this product in homelab AI versus a Mac Mini/Studio or any AMD 7000/8000 GPU considering VRAM size or token/s per price?
| 2025-04-15T18:21:12 |
Amadesa1
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzz2q6
| false | null |
t3_1jzz2q6
|
/r/LocalLLaMA/comments/1jzz2q6/nvidia_5060_ti_16_gb_vram_for_429_yay_or_nay/
| false | false | 208 |
{'enabled': True, 'images': [{'id': '2vBPKRrzMy5ndJGhZA7-EqaKKeeAIkx1TgKJiMT9QiY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=108&crop=smart&auto=webp&s=19083ce752fa37d5d4cd2d2e40f9d364ac4fe635', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=216&crop=smart&auto=webp&s=4772909642ed3e4c9404343e458b880545a2c82e', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=320&crop=smart&auto=webp&s=353d44cab09f97f107b5b7024fd222350ac232ec', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=640&crop=smart&auto=webp&s=45d17691e0d37894b83cc3105089d7bcbe4f7f56', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=960&crop=smart&auto=webp&s=cb536b925a954be934563fe9e3ac20acaa957a84', 'width': 960}, {'height': 811, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?width=1080&crop=smart&auto=webp&s=7f203cb502b3ce0d027914a3bd9670f9a9f33bc4', 'width': 1080}], 'source': {'height': 1463, 'url': 'https://preview.redd.it/nqgok5nih1ve1.jpeg?auto=webp&s=6bffe05968e1962eaebde0eb74cf19ad4e52b8e4', 'width': 1948}, 'variants': {}}]}
|
||
How to use web search function to search specific term?
| 0 |
I’m trying to use web search on Open WebUI but the search query is not what I am looking for. How do I properly do it? I tried using this in the input but the search query still does not follow it.
Search term: keyword
Or is there a better way to force the web search function to search for the specific keyword that I want?
| 2025-04-15T18:36:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzzg7z/how_to_use_web_search_function_to_search_specific/
|
wanhanred
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzzg7z
| false | null |
t3_1jzzg7z
|
/r/LocalLLaMA/comments/1jzzg7z/how_to_use_web_search_function_to_search_specific/
| false | false |
self
| 0 | null |
Hi, I'm using GPT4All and need help picking a model?
| 1 |
[removed]
| 2025-04-15T18:44:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jzznf3/hi_im_using_gbt4all_and_need_help_picking_a_model/
|
BBC-MAN4610
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jzznf3
| false | null |
t3_1jzznf3
|
/r/LocalLLaMA/comments/1jzznf3/hi_im_using_gbt4all_and_need_help_picking_a_model/
| false | false |
self
| 1 | null |
Visual Local LLM Benchmarking
| 10 |
Visual Local LLM Benchmark: Testing JavaScript Capabilities
View the latest results (April 15, 2025):
https://makeplayhappy.github.io/KoboldJSBench/results/2025.04.15/
Inspired by the popular "balls in heptagon" test making the rounds lately, I created a more visual benchmark to evaluate how local language models handle moderate JavaScript challenges.
What This Benchmark Tests
The benchmark runs four distinct visual JavaScript tests on any model you have locally:
1. Ball Bouncing Physics - Tests basic collision physics implementation
2. Simple Particle System - Evaluates handling of multiple animated elements
3. Keyboard Character Movement - Tests input handling and character control
4. Mouse-Based Turret Shooter - Assesses more complex interaction with mouse events
How It Works
The script automatically runs a set of prompts on all models in a specified folder using KoboldCPP. You can easily compare how different models perform on each test using the dropdown menu in the results page.
Try It Yourself
The entire project is essentially a single file and extremely easy to run on your own models:
GitHub Repository
https://github.com/makeplayhappy/KoboldJSBench
| 2025-04-15T19:40:17 |
https://makeplayhappy.github.io/KoboldJSBench/results/2025.04.15/
|
loadsamuny
|
makeplayhappy.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0112b
| false | null |
t3_1k0112b
|
/r/LocalLLaMA/comments/1k0112b/visual_local_llm_benchmarking/
| false | false |
default
| 10 | null |
PRIMA.CPP: Speeding Up 70B-Scale LLM Inference on Low-Resource Everyday Home Clusters
| 90 | 2025-04-15T19:43:27 |
https://huggingface.co/papers/2504.08791
|
rini17
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1k013u1
| false | null |
t3_1k013u1
|
/r/LocalLLaMA/comments/1k013u1/primacpp_speeding_up_70bscale_llm_inference_on/
| false | false | 90 |
{'enabled': False, 'images': [{'id': 'XLlkDCYN58VghFtZhdSaw_uRSewkIkmw_MSJ5JhjEq8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=108&crop=smart&auto=webp&s=bdbb30511bfe579bb138f82934012e27e82601d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=216&crop=smart&auto=webp&s=3cdce0f14bd5a1b19963ed454b958b447771478c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=320&crop=smart&auto=webp&s=e5ec45125b1e6e03ca2319f522776859f36e7d41', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=640&crop=smart&auto=webp&s=47ec2eedb24b29b34a31b164d9038ca9e61a6a62', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=960&crop=smart&auto=webp&s=61582a55082be76788ea2cc27d53f38b46bdc7d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?width=1080&crop=smart&auto=webp&s=528686c93436943183466d9c320629ecb1fc1b18', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1a4VnOKBMgllCIP8oR-afgJiFL7NVrlzrJf47Hyoz_0.jpg?auto=webp&s=ff84afd4dcd83a8ef2888cfce106315a3ffd2e7f', 'width': 1200}, 'variants': {}}]}
|
||
Sorting emails with llama.cpp ?
| 1 |
[removed]
| 2025-04-15T19:55:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k01elm/sorting_emails_with_llamacpp/
|
Julie291294
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k01elm
| false | null |
t3_1k01elm
|
/r/LocalLLaMA/comments/1k01elm/sorting_emails_with_llamacpp/
| false | false |
self
| 1 | null |
Hugging Face released a hunt for the most innovative reasoning dataset
| 1 |
[removed]
| 2025-04-15T20:01:16 |
Ambitious_Anybody855
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k01jg9
| false | null |
t3_1k01jg9
|
/r/LocalLLaMA/comments/1k01jg9/hugging_face_released_a_hunt_for_the_most/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '85hGBdeirXjyQQ_6dK6rPHoynIx6ZBQqh5BPPOQL0J8', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=108&crop=smart&auto=webp&s=12f167daf03266a2db76f0c0fd79e02975b649b4', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=216&crop=smart&auto=webp&s=ba2aa7c411a4ca6162a143f241c888a1b1714be3', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=320&crop=smart&auto=webp&s=7c36ee6752a39094001f53facada497a76f72fcf', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=640&crop=smart&auto=webp&s=b1fe25e83e51116eafc662da49914ecc015c3eea', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=960&crop=smart&auto=webp&s=578fa5c1bb5709e4c6621b39fcfd7d20d502ae0b', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?width=1080&crop=smart&auto=webp&s=99133a6e397e756db11c34b0d505d398e8c31921', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/01dv6h1ry1ve1.png?auto=webp&s=9d934035d0d88d4771d42cdb004945b88b97b2a2', 'width': 1080}, 'variants': {}}]}
|
||
There is a hunt for reasoning datasets beyond math, science and coding. Much needed initiative
| 43 |
Really interested in seeing what comes out of this.
[https://huggingface.co/blog/bespokelabs/reasoning-datasets-competition](https://huggingface.co/blog/bespokelabs/reasoning-datasets-competition)
Current datasets: [https://huggingface.co/datasets?other=reasoning-datasets-competition](https://huggingface.co/datasets?other=reasoning-datasets-competition)
| 2025-04-15T20:08:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1k01pqy/there_is_a_hunt_for_reasoning_datasets_beyond/
|
Ambitious_Anybody855
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k01pqy
| false | null |
t3_1k01pqy
|
/r/LocalLLaMA/comments/1k01pqy/there_is_a_hunt_for_reasoning_datasets_beyond/
| false | false |
self
| 43 |
{'enabled': False, 'images': [{'id': 'F7HpcL25f_6izrV0johMOb5K3Gr8A0sqwGGwO_Mk0vk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=108&crop=smart&auto=webp&s=f8a2ead63ff1a47ebe129290dfa646d2e99f7855', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=216&crop=smart&auto=webp&s=adf4aa0442334b7e4adc741cf637eeecf5680bdc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=320&crop=smart&auto=webp&s=bdd10a638717b3cdafd22a1605f840d912a6e835', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=640&crop=smart&auto=webp&s=81da952dfa0da73cc5b914b9508f54346d78734e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=960&crop=smart&auto=webp&s=5b6158f79242baf3bd458a6adcc4ecb270ffbd36', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?width=1080&crop=smart&auto=webp&s=cb6dba19252fe75c2de7b2a8647457d03d47fbe4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_Yhyp_KQskZdphwuX5nF_WrcqJHVu476vnqQtgIgzaY.jpg?auto=webp&s=0302df89b98aa88c88be64d06d224b45460de90b', 'width': 1200}, 'variants': {}}]}
|
[Scam or Gamechanger?] This company called Bolt Graphics promises to release Graphics Cards with absolutely insane specs for relatively little money.
| 0 |
Does anyone know more about this company and the people behind it? All of this absolutely sounds too good to be true and this smells more like some sort of scam/rugpull to me, but maybe I am wrong about this. On the off chance that they deliver, it would certainly be a blessing though, and I will keep an eye on them.
| 2025-04-15T21:33:51 |
https://bolt.graphics/
|
Mundane-Passenger-56
|
bolt.graphics
| 1970-01-01T00:00:00 | 0 |
{}
|
1k03qxx
| false | null |
t3_1k03qxx
|
/r/LocalLLaMA/comments/1k03qxx/scam_or_gamechanger_this_company_called_bolt/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'kagbJT4lGBzpC6vfF1eg5quG7lmXD_yaC1QTddHqrnk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=108&crop=smart&auto=webp&s=051d7afa3ddbb5cc04a25fa39b3c74c2974a10ec', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=216&crop=smart&auto=webp&s=778e431822976bc0cd17c9a32a9270204bbad071', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=320&crop=smart&auto=webp&s=1a4cb35031f86a918c924cb6e92f009a59ef9014', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=640&crop=smart&auto=webp&s=77a8a01b0b50c2493c6824b5f5e0a6bf936441ef', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=960&crop=smart&auto=webp&s=15f4719c96861e5d598b8959fc76b189c957577f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?width=1080&crop=smart&auto=webp&s=ec33cebfc13e3bf52a5d52336474557cc6d94cd7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/P071jDaYkLVVj4uPQU5u8EQhR1IX1vfc5rYvqE72AT4.jpg?auto=webp&s=b943d9f73c77e7359fca24e04121d20774161f2b', 'width': 1920}, 'variants': {}}]}
|
|
Any luck with Qwen2.5-VL using vLLM and open-webui?
| 9 |
There's something not quite right here:
https://preview.redd.it/dnyp6ynfi2ve1.png?width=1249&format=png&auto=webp&s=0d23c66b9e3d61c8df7c731e60bad8d79d93cf43
I'm no feline expert, but I've never heard of this kind.
My config (https://github.com/bjodah/llm-multi-backend-container/blob/8a46eeb3816c34aa75c98438411a8a1c09077630/configs/llama-swap-config.yaml#L256) is as follows:
python3 -m vllm.entrypoints.openai.api_server \
  --api-key sk-empty \
  --port 8014 \
  --served-model-name vllm-Qwen2.5-VL-7B \
  --model Qwen/Qwen2.5-VL-7B-Instruct-AWQ \
  --trust-remote-code \
  --gpu-memory-utilization 0.95 \
  --enable-chunked-prefill \
  --max-model-len 32768 \
  --max-num-batched-tokens 32768 \
  --kv-cache-dtype fp8_e5m2
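For reference, a minimal sketch (with a placeholder image URL) that queries the same vLLM endpoint directly with the OpenAI client; this can help tell whether the odd output comes from the model/quant itself or from the Open WebUI layer:
```python
from openai import OpenAI

# Matches the server flags above: --port 8014, --api-key sk-empty, --served-model-name vllm-Qwen2.5-VL-7B
client = OpenAI(api_key="sk-empty", base_url="http://localhost:8014/v1")

resp = client.chat.completions.create(
    model="vllm-Qwen2.5-VL-7B",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},  # placeholder URL
        ],
    }],
)
print(resp.choices[0].message.content)
```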
| 2025-04-15T21:40:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k03w1g/any_luck_with_qwen25vl_using_vllm_and_openwebui/
|
bjodah
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k03w1g
| false | null |
t3_1k03w1g
|
/r/LocalLLaMA/comments/1k03w1g/any_luck_with_qwen25vl_using_vllm_and_openwebui/
| false | false | 9 |
{'enabled': False, 'images': [{'id': '5f-_-9IFxN3F45XwZQV6XoYRCZG4mW_VU7v1Er4gcHM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=108&crop=smart&auto=webp&s=8ed616be8235e0ff8ce23e151dd3966da7089bdf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=216&crop=smart&auto=webp&s=958f8128e111653e1bdc71dd79fc82c77300e795', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=320&crop=smart&auto=webp&s=e72ee597c3c51dbc94e1797c706f1556bc0fad62', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=640&crop=smart&auto=webp&s=34a7b1278397ac618d70cdb76fbf81b27d1bc708', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=960&crop=smart&auto=webp&s=adee842d80fbb832f23d5fdba62aec6db1a1ad79', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?width=1080&crop=smart&auto=webp&s=70e96c9553181cf053d677d2b336f3e85f93ff73', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GClpDWvXd0TXDULuW64HV1nyc4fosGBH_7njSisqF7c.jpg?auto=webp&s=8852ac45487319f63cc6f56a5329fdc3ac23458f', 'width': 1200}, 'variants': {}}]}
|
|
We’ve been snapshotting local LLaMA models and restoring in ~2s. Here’s what we learned from the last post.
| 57 |
Following up on a post here last week. We’ve been snapshotting local LLaMA models (including full execution state: weights, KV cache, memory layout, stream context) and restoring them from disk in ~2 seconds. It’s kind of like treating them as pause/resume processes instead of keeping them always in memory.
The replies and DMs were awesome. Wanted to share some takeaways and next steps.
What stood out:
•Model swapping is still a huge pain for local setups
•People want more efficient multi-model usage per GPU
•Everyone’s tired of redundant reloading
•Live benchmarks > charts or claims
What we’re building now:
•Clean demo showing snapshot load vs vLLM / Triton-style cold starts
•Single-GPU view with model switching timers
•Simulated bursty agent traffic to stress test swapping
•Dynamic memory
reuse for 50+ LLaMA models per node
Big thanks to the folks who messaged or shared what they’re hacking on. Happy to include anyone curious in the next round of testing.
Here is the demo (please excuse the UI): https://inferx.net
Updates also going out on X @InferXai for anyone following this rabbit hole
| 2025-04-15T21:48:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1k043gb/weve_been_snapshotting_local_llama_models_and/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k043gb
| false | null |
t3_1k043gb
|
/r/LocalLLaMA/comments/1k043gb/weve_been_snapshotting_local_llama_models_and/
| false | false |
self
| 57 | null |
Best language for generative AI
| 1 |
[removed]
| 2025-04-15T21:51:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1k04531/best_language_for_generative_ai/
|
Expensive-Paint-9490
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k04531
| false | null |
t3_1k04531
|
/r/LocalLLaMA/comments/1k04531/best_language_for_generative_ai/
| false | false |
self
| 1 | null |
INTELLECT-2: The First Globally Distributed Reinforcement Learning Training of a 32B Parameter Model
| 127 | 2025-04-15T22:21:17 |
https://www.primeintellect.ai/blog/intellect-2
|
secopsml
|
primeintellect.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1k04tcz
| false | null |
t3_1k04tcz
|
/r/LocalLLaMA/comments/1k04tcz/intellect2_the_first_globally_distributed/
| false | false | 127 |
{'enabled': False, 'images': [{'id': 'iNPvfhDy4p-uVVubZX4WkUDfKwO2_s_LeGMcXUmoKhQ', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=108&crop=smart&auto=webp&s=2852d9c57a0e706eda6a7d0f911a4708a51bebb4', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=216&crop=smart&auto=webp&s=cb9ce6857a238013db2fec307a92905187bb8cd8', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=320&crop=smart&auto=webp&s=2b313cf8b8905e6b7f2e0d4f1f9059c28b041932', 'width': 320}, {'height': 372, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=640&crop=smart&auto=webp&s=805b290535499f071d8670cbf7eb2173ce4a806b', 'width': 640}, {'height': 559, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=960&crop=smart&auto=webp&s=6282700875d214a62616f71de859754b22a70e1f', 'width': 960}, {'height': 629, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?width=1080&crop=smart&auto=webp&s=aea6b778e0808cc228bc8b2be12d30be54480efa', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/AnKCfKkHWYOIoVVmxjzjYrZqxFItOaRIWWXYxJe4nSs.jpg?auto=webp&s=84f0d3acae2538910aa0d64ab422475fd37cc445', 'width': 3514}, 'variants': {}}]}
|
||
How much VRAM and how many GPUs to fine-tune a 70B parameter model like LLaMA 3.1 locally?
| 1 |
[removed]
| 2025-04-15T22:24:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1k04vuv/how_much_vram_and_how_many_gpus_to_finetune_a_70b/
|
Aaron_MLEngineer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k04vuv
| false | null |
t3_1k04vuv
|
/r/LocalLLaMA/comments/1k04vuv/how_much_vram_and_how_many_gpus_to_finetune_a_70b/
| false | false |
self
| 1 | null |
When will lmstudio have support for GLM-4-0414?
| 1 |
[removed]
| 2025-04-15T22:26:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k04xru/when_will_lmstudio_have_support_for_glm40414/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k04xru
| false | null |
t3_1k04xru
|
/r/LocalLLaMA/comments/1k04xru/when_will_lmstudio_have_support_for_glm40414/
| false | false |
self
| 1 | null |
ByteDance releases Liquid model family of multimodal auto-regressive models (like GTP-4o)
| 289 |
Model Architecture: Liquid is an auto-regressive model extending from existing LLMs that uses a transformer architecture (similar to GPT-4o imagegen).
Input: text and image.
Output: generated text or generated image.
Hugging Face: https://huggingface.co/Junfeng5/Liquid_V1_7B
App demo: https://huggingface.co/spaces/Junfeng5/Liquid_demo
Personal review: the quality of the image generation is definitely not as good as gpt-4o imagegen. However, it’s an important release because it uses an auto-regressive generation paradigm with a single LLM, unlike previous multimodal large language models (MLLMs), which relied on external pretrained visual embeddings.
| 2025-04-15T23:11:34 |
ResearchCrafty1804
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k05wpt
| false | null |
t3_1k05wpt
|
/r/LocalLLaMA/comments/1k05wpt/bytedance_releases_liquid_model_family_of/
| false | false | 289 |
{'enabled': True, 'images': [{'id': 'lrSK1PzgZ373UdQwI7x7x5Ka9rAYEWSEn580d7CRvh8', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=108&crop=smart&auto=webp&s=cb8c466a1ef94fc40e3af404b51bb9b1e86553d0', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=216&crop=smart&auto=webp&s=64409b4acf128fb633e098d55eeee24cc5aa5757', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=320&crop=smart&auto=webp&s=39b5eef36342d6ea435e14fc61f12b14195ea135', 'width': 320}, {'height': 399, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=640&crop=smart&auto=webp&s=afb315c5ae73bc479aead0533e99e06cf2db069a', 'width': 640}, {'height': 599, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=960&crop=smart&auto=webp&s=993aea5a5d54279d6bbea4521ff6c7f99b47306d', 'width': 960}, {'height': 673, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?width=1080&crop=smart&auto=webp&s=0dcaa9d02f2f049ea830b41fd1a218bc1d0959a6', 'width': 1080}], 'source': {'height': 1278, 'url': 'https://preview.redd.it/393vjiodz2ve1.jpeg?auto=webp&s=898155a51cea6699b0beef922422e09caefe7c44', 'width': 2048}, 'variants': {}}]}
|
||
Overtrained Language Models Are Harder to Fine-Tune
| 45 |
Well damn... there go my plans for Behemoth
https://arxiv.org/abs/2503.19206
| 2025-04-15T23:13:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k05ya6/overtrained_language_models_are_harder_to_finetune/
|
DinoAmino
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k05ya6
| false | null |
t3_1k05ya6
|
/r/LocalLLaMA/comments/1k05ya6/overtrained_language_models_are_harder_to_finetune/
| false | false |
self
| 45 | null |
AI File Renamer
| 1 |
[removed]
| 2025-04-15T23:16:22 |
https://youtube.com/shorts/HCBlncoI0IM
|
gholamrezadar
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k060c3
| false | null |
t3_1k060c3
|
/r/LocalLLaMA/comments/1k060c3/ai_file_renamer/
| false | false |
default
| 1 | null |
How would you unit-test LLM outputs?
| 9 |
I have this api where in one of the endpoints's requests has an LLM input field and so does the response
>{
> "llm\_input": "pigs do fly",
> "datetime": "2025-04-15T12:00:00Z",
> "model": "gpt-4"
>}
>{
> "llm\_output": "unicorns are real",
> "datetime": "2025-04-15T12:00:01Z",
> "model": "gpt-4"
>}
My API validates things like the datetime (it must not be older than datetime.now), but **how the fuck do I validate an LLM's output?** The example is of course exaggerated, but if the LLM says something logically wrong like "2+2=5" or "It is possible the sun goes supernova this year", how do we unit-test that?
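One possible direction, as a sketch rather than a settled answer: unit-test the properties you can check deterministically (structure, types, timestamps, length), and push the fuzzy "is this claim sane" part into an LLM-as-judge step you treat as a soft test. The judge callable below is a placeholder you'd wire to whatever endpoint you use.

    # Property-based checks on LLM output instead of exact-match asserts.
    from datetime import datetime, timezone

    def validate_llm_response(resp: dict) -> None:
        # Deterministic, classic unit-test material: structure, types, timestamps.
        assert isinstance(resp["llm_output"], str) and resp["llm_output"].strip()
        assert len(resp["llm_output"]) < 4000        # guard against runaway output
        ts = datetime.fromisoformat(resp["datetime"].replace("Z", "+00:00"))
        assert ts <= datetime.now(timezone.utc)

    def judge_claim(claim: str, ask_model) -> bool:
        # "LLM-as-judge": ask_model is any callable that sends a prompt to a
        # (second) model and returns its text; the verdict becomes a soft test.
        verdict = ask_model(
            "Answer strictly YES or NO. Is the following statement factually "
            f"and logically sound?\n\n{claim}"
        )
        return verdict.strip().upper().startswith("YES")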
| 2025-04-15T23:59:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k06wr7/how_would_you_unittest_llm_outputs/
|
Blender-Fan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k06wr7
| false | null |
t3_1k06wr7
|
/r/LocalLLaMA/comments/1k06wr7/how_would_you_unittest_llm_outputs/
| false | false |
self
| 9 | null |
Character LLaMA-4
| 0 |
This is a free character creation automation for any creative writers or role players or jailbreakers:
| 2025-04-16T00:16:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0799b/character_llama4/
|
ZackFlashhhh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0799b
| false | null |
t3_1k0799b
|
/r/LocalLLaMA/comments/1k0799b/character_llama4/
| false | false |
self
| 0 | null |
🔥 FINAL CHANCE: ONLY 5 MANUS IM INVITES LEFT - TRANSFORM YOUR BUSINESS WITH ELITE AI ACCESS! 300% PRODUCTIVITY BOOST, 60% COST REDUCTION. USED BY INDUSTRY LEADERS. EXPIRES IN 24 HOURS! SECURE YOUR COMPETITIVE EDGE NOW! 🔥
| 0 |
[removed]
| 2025-04-16T01:06:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k088ey/final_chance_only_5_manus_im_invites_left/
|
No-Significance64
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k088ey
| false | null |
t3_1k088ey
|
/r/LocalLLaMA/comments/1k088ey/final_chance_only_5_manus_im_invites_left/
| false | false |
self
| 0 | null |
What workstation/rig config do you recommend for local LLM finetuning/training + fast inference? Budget is ≤ $30,000.
| 1 |
[removed]
| 2025-04-16T01:11:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1k08bzt/what_workstationrig_config_do_you_recommend_for/
|
nderstand2grow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k08bzt
| false | null |
t3_1k08bzt
|
/r/LocalLLaMA/comments/1k08bzt/what_workstationrig_config_do_you_recommend_for/
| false | false |
self
| 1 | null |
New MacBook Air 32gb ram?
| 1 |
[removed]
| 2025-04-16T01:24:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k08kkx/new_macbook_air_32gb_ram/
|
hmmqzaz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k08kkx
| false | null |
t3_1k08kkx
|
/r/LocalLLaMA/comments/1k08kkx/new_macbook_air_32gb_ram/
| false | false |
self
| 1 | null |
Accidentally used Q1_0 instead of Q4_K_M
| 1 | 2025-04-16T01:41:15 |
https://v.redd.it/ty60oes2q3ve1
|
No_Cattle4037
|
/r/LocalLLaMA/comments/1k08wi6/accidentally_used_q1_0_instead_of_q4_k_m/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k08wi6
| false |
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/ty60oes2q3ve1/DASHPlaylist.mpd?a=1747489285%2CMTBjZTVlODM1NTFhMzk4MDlhMTNkZDI4NDgyNzE0YmVlNzZiZDA3NTA1YjI5NjY2ODVjYmNjZDZmNTFkMzMxNw%3D%3D&v=1&f=sd', 'duration': 132, 'fallback_url': 'https://v.redd.it/ty60oes2q3ve1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/ty60oes2q3ve1/HLSPlaylist.m3u8?a=1747489285%2CMzkwODFlNmJjYWZiNDJlNTY4MjY2NTdmYjM1MzIzYjYyM2U0ZjA0ZmVlNTZkYzU0Y2QyOGExNDFhNWQ4NjZiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ty60oes2q3ve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
|
t3_1k08wi6
|
/r/LocalLLaMA/comments/1k08wi6/accidentally_used_q1_0_instead_of_q4_k_m/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo.png?width=108&crop=smart&format=pjpg&auto=webp&s=c360d131760bbc756bf4ebac42bbd1e5d4a26ec4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo.png?width=216&crop=smart&format=pjpg&auto=webp&s=b98fe0d2a82d7942c1a697360d649106af5bfc70', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo.png?width=320&crop=smart&format=pjpg&auto=webp&s=376432f7495ac907af27c638aa1eb03587b3c595', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo.png?width=640&crop=smart&format=pjpg&auto=webp&s=a16919db8d1429f3b88e26da2d9cf1d544a66a75', 'width': 640}], 'source': {'height': 404, 'url': 'https://external-preview.redd.it/YTJyenZnaTJxM3ZlMVbk3kV3v2Cr5nV1NMoIbRPZMOBK2kBXtCeClFcaHQOo.png?format=pjpg&auto=webp&s=cc5de6571cea34e58f96d0ddd07663174fba69af', 'width': 720}, 'variants': {}}]}
|
||
What is your favorite uncensored model?
| 116 |
By uncensored, I don't just mean roleplay. I have yet to find a model that doesn't refuse when asked for instructions on how to cook meth, make pipe bombs, or invade a small country in South America and force them to sell bananas to you.
I feel like a good chunk is lost when you get lobotomized and taught to not say certain things
| 2025-04-16T01:55:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0967d/what_is_your_favorite_uncensored_model/
|
HornyGooner4401
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0967d
| false | null |
t3_1k0967d
|
/r/LocalLLaMA/comments/1k0967d/what_is_your_favorite_uncensored_model/
| false | false |
self
| 116 | null |
Local semantic memory
| 1 |
[removed]
| 2025-04-16T01:56:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1k096ho/local_semantic_memory/
|
nullprompt_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k096ho
| false | null |
t3_1k096ho
|
/r/LocalLLaMA/comments/1k096ho/local_semantic_memory/
| false | false |
self
| 1 | null |
LM Studio Online
| 0 |
I have a question: Is there a way to get the API in LM Studio to be accessible to anyone on the internet? (Users don't need to be on the same network.) Thanks!
| 2025-04-16T02:15:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k09jqu/lm_studio_online/
|
ResponsibleWish9299
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k09jqu
| false | null |
t3_1k09jqu
|
/r/LocalLLaMA/comments/1k09jqu/lm_studio_online/
| false | false |
self
| 0 | null |
Yes, you could have 160gb of vram for just about $1000.
| 216 |
Please see my original post about this journey - [https://www.reddit.com/r/LocalLLaMA/comments/1jy5p12/another\_budget\_build\_160gb\_of\_vram\_for\_1000\_maybe/](https://www.reddit.com/r/LocalLLaMA/comments/1jy5p12/another_budget_build_160gb_of_vram_for_1000_maybe/)
Sorry, I'm going to dump this before I get busy, for anyone that might find it useful. I bought 10 MI50 GPUs for $90 each ($900) and the Octominer case for $100, but I did pay $150 for shipping and $6 tax for the case. So there you go: $1,156. I also bought a PCIe ethernet card for 99 cents. $1,157.
The Octominer XULTRA 12 has 12 PCIe slots. It's designed for mining, has a weak Celeron CPU, and the one I got has only 4GB of RAM. But it works and is a great system for a low-budget GPU inference workload.
I took out the SSD and threw in an old 250GB drive I had lying around and installed Ubuntu. Got the cards working and went with ROCm; Vulkan was surprisingly a bit problematic, and ROCm was easy once I figured it out. I blew up the system on the first attempt and had to reinstall. For anyone curious, I installed Ubuntu 24.04; the MI50 is no longer supported on the latest ROCm 6.4.0, but you can install 6.3.0, so I did that. Built llama.cpp from source and tried a few models. I'll post data later.
Since the case has 12 slots, it has one 8-pin connector for each slot, for a total of 12 cables. The cards each take two 8-pin connectors, so I had a choice: use an 8-pin to dual 8-pin cable, or 2 to 1. To play it safe for starters, I did 2 to 1, for a total of 6 cards installed. The cards also supposedly have a peak of 300 watts, so 10 cards would be 3000 watts. I have 3 power supplies of 750 watts for a total of 2250 watts. The cool thing about the power supplies is that they're hot swappable; I can plug one in and take it out while the system is running. You don't need all 3 to run, only 1. The good news is that this thing doesn't draw much power! The cards idle a bit high at about 20 watts, so 6 cards is 120 watts, and the system really does idle at < 130 watts. I'm measuring at the outlet with an electrical measurement meter. During inference across the cards, peak was about 340 watts. I'm using llama.cpp, so inference is serial and not parallel; you can see the load move from one card to the other. This, as you can guess, is "inefficient", so llama.cpp is not as fast as, say, vLLM with tensor parallel. But it does support multiple users, so you can push it by running parallel requests if you are sharing the rig with others, running agents, or running custom code. In such a situation, you can have the cards all max out. I didn't power limit the cards; the system reports them at 250 watts, and I saw about 230 watts max while inferring.
The case fans at 100% sound like a jet engine, but the great thing is they are easy to control, and at 10% you can't hear them. The cards run cooler than my Nvidia cards that are on an open rig: my Nvidia cards idle at 30-40C, while these cards idle in the 20C range with 5% fan. I can't hear the fans until about 25%, and even then it's very quiet and blends in. It takes about 50-60% before anyone that walks into the room will notice.
I just cut and pasted and took some rough notes; I don't have any blog or anything to sell, just sharing for those that might be interested. One of the cards seems to have an issue: llama.cpp crashes when I try to use it, both locally and via RPC. I'll swap it around to see if it makes a difference. I have 2 other rigs, but llama.cpp won't let me infer across more than 16 cards.
I'm spending time trying to figure it out: I updated the \*\_MAX\_DEVICES, MAX\_BACKENDS and MAX\_SERVERS values in code from 16 to 32, and it sometimes works. I did build with -DGGML\_SCHED\_MAX\_BACKENDS=48, which makes no difference. So if you have any idea, let me know. :)
Now on power and electricity. Save it, don't care. With that said, the box idles at about 120 watts; my other rigs probably idle at more. Between the 3 rigs, maybe 600 watts idle total. I have experimented with "wake on LAN", which means I can suspend the machines and then wake them up remotely. One of my weekend plans is to put a daemon on each rig that monitors the GPUs and system; if nothing is going on for 30 minutes, it hibernates the system, and when I'm ready to use them I wake them up remotely. Do this for all rigs and don't keep them running. I don't know how loaded models will behave; my guess is that they would need to be reloaded. It's "VRAM" aka RAM after all, and unlike system RAM, which gets saved to disk, GPU memory doesn't. I'm still shocked at the low power use.
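A rough sketch of what such an idle-watch daemon could look like (illustrative only: the sysfs file is the standard amdgpu gpu_busy_percent attribute, and the thresholds are made up):

    # Suspend the rig after ~30 idle minutes; wake it later with wake-on-LAN
    # from another box (e.g. the `wakeonlan <MAC>` tool), then reload models.
    import glob, subprocess, time

    IDLE_MINUTES = 30
    last_busy = time.time()

    def gpus_busy() -> bool:
        for path in glob.glob("/sys/class/drm/card*/device/gpu_busy_percent"):
            try:
                if int(open(path).read().strip()) > 5:   # >5% load counts as busy
                    return True
            except (OSError, ValueError):
                pass
        return False

    while True:
        if gpus_busy():
            last_busy = time.time()
        elif time.time() - last_busy > IDLE_MINUTES * 60:
            subprocess.run(["systemctl", "suspend"], check=False)
            last_busy = time.time()                      # reset after resume
        time.sleep(60)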
So, on PCIe electrical x1 speed: I read it was 1GBps, but hey, there's a difference between 1GBps and 1Gbps. PCIe3 x1 is capable of 985 MB/s. My network cards are 1Gbps, which is more around 125 MB/s. So upgrading to a 10Gbps network should theoretically allow for much faster loading, about 7x; in practice, I think it would be less. llama.cpp hackers are just programmers getting it done by any means necessary (the goal is to infer models, not to write the best program), and from my wandering around the RPC code today and the observed behavior, it's not that performant. So if you're into unix network programming and wanna contribute, that would be a great area. ;-)
With all this said, yes, for just about $1000, 160GB of VRAM is sort of possible. There were a lot of MI50s on eBay, and I suppose some other hawks saw them as well and took their chance, so they're sold out. Keep your eyes out for deals. I even heard I didn't get the best deal; some lucky sonomabbb got MI50's that were 32GB. It might just be that companies will start replacing more of their old cards and we will see more of these or even better ones. Don't be scared, and don't worry about that mess of "you need a power plant" and "it's no longer supported". Most of the things folks argued about on here are flat out wrong from my practical experience, so risk it all.
Oh yeah, the largest model I did run was llama405b; I had it write code and was getting about 2tk/s. Yes, it's a large dense model, so it would perform the worst; MoE models like deepseekv3 and llama4 are going to fly. I'll get some numbers up on those if I remember to.
Future stuff.
Decide if I'm going to pack all the GPUs in one server or split them across servers. From the load, one server will handle it fine. Unlike newer Nvidia GPUs with the power cables going in from the top, this one has the cables going in from the back, and it's quite a tight fit to get them in. PCIe standards, from what I understand, expect cards to pull a max of 75W from the slot, and an 8-pin cable can supply 150W, for a max of 225W. So I could power each card with a single cable, figure out how to limit power to 200W, and be good to go. As a matter of fact, some of the cables had those adapters and I took them out. I saw a video of a crypto bro running an Octominer with 3080s, and those have more power demand than MI50s.
Here goes data from my notes.
**llama3.1-8b-instruct-q8** inference, same prompt, same seed
MI50 local
>
llama_perf_sampler_print: sampling time = 141.03 ms / 543 runs ( 0.26 ms per token, 3850.22 tokens per second)
llama_perf_context_print: load time = 164330.99 ms *** SSD through PCIe3x1 slot***
llama_perf_context_print: prompt eval time = 217.66 ms / 42 tokens ( 5.18 ms per token, 192.97 tokens per second)
llama_perf_context_print: eval time = 12046.14 ms / 500 runs ( 24.09 ms per token, 41.51 tokens per second)
llama_perf_context_print: total time = 18773.63 ms / 542 tokens
3090 local
>
llama_perf_context_print: load time = 3088.11 ms *** NVME through PCIex16 ***
llama_perf_context_print: prompt eval time = 27.76 ms / 42 tokens ( 0.66 ms per token, 1512.91 tokens per second)
llama_perf_context_print: eval time = 6472.99 ms / 510 runs ( 12.69 ms per token, 78.79 tokens per second)
3080ti local
>
llama_perf_context_print: prompt eval time = 41.82 ms / 42 tokens ( 1.00 ms per token, 1004.26 tokens per second)
llama_perf_context_print: eval time = 5976.19 ms / 454 runs ( 13.16 ms per token, 75.97 tokens per second)
3060 local
>
llama_perf_sampler_print: sampling time = 392.98 ms / 483 runs ( 0.81 ms per token, 1229.09 tokens per second)
llama_perf_context_print: eval time = 12351.84 ms / 440 runs ( 28.07 ms per token, 35.62 tokens per second)
p40 local
>
llama_perf_context_print: prompt eval time = 95.65 ms / 42 tokens ( 2.28 ms per token, 439.12 tokens per second)
llama_perf_context_print: eval time = 12083.73 ms / 376 runs ( 32.14 ms per token, 31.12 tokens per second)
MI50B local *** different GPU from above, consistent ***
llama_perf_context_print: prompt eval time = 229.34 ms / 42 tokens ( 5.46 ms per token, 183.14 tokens per second)
llama_perf_context_print: eval time = 12186.78 ms / 500 runs ( 24.37 ms per token, 41.03 tokens per second)
If you are paying attention, MI50s are not great at prompt processing.
A little bit larger context demonstrates that the MI50 sucks at prompt processing... and also demonstrates performance over RPC. I got these to see if I could use them via RPC for very huge models.
p40 local
llama_perf_context_print: prompt eval time = 512.56 ms / 416 tokens ( 1.23 ms per token, 811.61 tokens per second)
llama_perf_context_print: eval time = 12582.57 ms / 370 runs ( 34.01 ms per token, 29.41 tokens per second)
3060 local
llama_perf_context_print: prompt eval time = 307.63 ms / 416 tokens ( 0.74 ms per token, 1352.27 tokens per second)
llama_perf_context_print: eval time = 10149.66 ms / 357 runs ( 28.43 ms per token, 35.17 tokens per second)
3080ti local
llama_perf_context_print: prompt eval time = 141.43 ms / 416 tokens ( 0.34 ms per token, 2941.45 tokens per second)
llama_perf_context_print: eval time = 6079.14 ms / 451 runs ( 13.48 ms per token, 74.19 tokens per second)
3090 local
llama_perf_context_print: prompt eval time = 140.91 ms / 416 tokens ( 0.34 ms per token, 2952.30 tokens per second)
llama_perf_context_print: eval time = 4170.36 ms / 314 runs ( 13.28 ms per token, 75.29 tokens per second
MI50 local
llama_perf_context_print: prompt eval time = 1391.44 ms / 416 tokens ( 3.34 ms per token, 298.97 tokens per second)
llama_perf_context_print: eval time = 8497.04 ms / 340 runs ( 24.99 ms per token, 40.01 tokens per second)
MI50 over RPC (1GPU)
llama_perf_context_print: prompt eval time = 1177.23 ms / 416 tokens ( 2.83 ms per token, 353.37 tokens per second)
llama_perf_context_print: eval time = 16800.55 ms / 340 runs ( 49.41 ms per token, 20.24 tokens per second)
MI50 over RPC (2xGPU)
llama_perf_context_print: prompt eval time = 1400.72 ms / 416 tokens ( 3.37 ms per token, 296.99 tokens per second)
llama_perf_context_print: eval time = 17539.33 ms / 340 runs ( 51.59 ms per token, 19.39 tokens per second)
MI50 over RPC (3xGPU)
llama_perf_context_print: prompt eval time = 1562.64 ms / 416 tokens ( 3.76 ms per token, 266.22 tokens per second)
llama_perf_context_print: eval time = 18325.72 ms / 340 runs ( 53.90 ms per token, 18.55 tokens per second)
p40 over RPC (3xGPU)
llama_perf_context_print: prompt eval time = 968.91 ms / 416 tokens ( 2.33 ms per token, 429.35 tokens per second)
llama_perf_context_print: eval time = 22888.16 ms / 370 runs ( 61.86 ms per token, 16.17 tokens per second)
MI50 over RPC (5xGPU) (1 token a second loss for every RPC?)
llama_perf_context_print: prompt eval time = 1955.87 ms / 416 tokens ( 4.70 ms per token, 212.69 tokens per second)
llama_perf_context_print: eval time = 22217.03 ms / 340 runs ( 65.34 ms per token, 15.30 tokens per second)
max inference over RPC observed with rocm-smi was 100w, lower than when running locally, saw 240w
max watt observed at outlet before RPC was 361w, max watt after 361w
**llama-70b-q8**
If you want to approximate how fast it will run in q4, just multiply by 2. This was done with llama.cpp; yes, vLLM is faster, someone already did q4 llama8 with vLLM and tensor parallel for 25tk/s
3090 5xGPU llama-70b
llama_perf_context_print: prompt eval time = 785.20 ms / 416 tokens ( 1.89 ms per token, 529.80 tokens per second)
llama_perf_context_print: eval time = 26483.01 ms / 281 runs ( 94.25 ms per token, 10.61 tokens per second)
llama_perf_context_print: total time = 133787.93 ms / 756 tokens
MI50 over RPC (5xGPU) llama-70b
llama_perf_context_print: prompt eval time = 11841.23 ms / 416 tokens ( 28.46 ms per token, 35.13 tokens per second)
llama_perf_context_print: eval time = 84088.80 ms / 415 runs ( 202.62 ms per token, 4.94 tokens per second)
llama_perf_context_print: total time = 101548.44 ms / 831 tokens
RPC across 17 GPUs, 6 main 3090s and 11 remote GPUs (3090, 3080ti, 3060, 3xP40, 5xMI50), true latency test
llama_perf_context_print: prompt eval time = 8172.69 ms / 416 tokens ( 19.65 ms per token, 50.90 tokens per second)
llama_perf_context_print: eval time = 74990.44 ms / 345 runs ( 217.36 ms per token, 4.60 tokens per second)
llama_perf_context_print: total time = 556723.90 ms / 761 tokens
Misc notes
idle watt at outlet = 126watts
temp about 25-27C across GPUs
idle power across individual 21-26watts
powercap - 250watts
inference across 3GPUs at outlet - 262watts
highest power on one GPU = 223W
At 10% fan speed, GPUs got to 60C; at 20% fan speed, the highest is 53C while a GPU is active.
Turned up to 100%, it brought the GPUs down to the high 20s in under 2 minutes.
| 2025-04-16T03:46:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0b8wx/yes_you_could_have_160gb_of_vram_for_just_about/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0b8wx
| false | null |
t3_1k0b8wx
|
/r/LocalLLaMA/comments/1k0b8wx/yes_you_could_have_160gb_of_vram_for_just_about/
| false | false |
self
| 216 | null |
We GRPO-ed a Model to Keep Retrying 'Search' Until It Found What It Needed
| 260 |
Hey everyone, it's Menlo Research again, and today we’d like to introduce a new paper from our team related to search.
Have you ever felt that when searching on Google, **you know for sure there’s no way you’ll get the result you want on the first try** (you’re already mentally prepared for 3-4 attempts)? ReZero, which we just trained, is based on this very idea.
We used GRPO and tool-calling to train a model with a retry\_reward and tested whether, if we made the model "work harder" and be more diligent, it could actually perform better.
Normally when training LLMs, repetitive actions are something people want to avoid, because they’re thought to cause hallucinations - maybe. But the results from ReZero are pretty interesting. We got a performance score of **46%**, compared to just **20%** from a baseline model trained the same way. So that gives us some evidence that **Repetition is not hallucination.**
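For intuition, here's a toy version of what a retry-style reward can look like (the actual reward shaping is in the paper; this is just an illustration, and the <search> tag is a stand-in for whatever tool-call format the rollout uses):

    # Toy retry reward: only credit extra search calls when the rollout still
    # ends in a correct answer, so retrying is rewarded but not tool-call spam.
    def retry_reward(rollout: str, answer_correct: bool,
                     per_retry: float = 0.1, cap: float = 0.5) -> float:
        n_searches = rollout.count("<search>")      # tool calls in the trajectory
        if not answer_correct:
            return 0.0                              # no credit for diligent failure
        bonus = min(per_retry * max(n_searches - 1, 0), cap)
        return 1.0 + bonus                          # correctness reward + retry bonus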
There are a few ideas for application. The model could act as an abstraction layer over the main LLM loop, so that the main LLM can search better. Or simply an abstraction layer on top of current search engines to help you generate more relevant queries - a query generator - perfect for research use cases.
Attached a demo in the clip.
(The beginning has a little meme to bring you some laughs 😄 - Trust me ReZero is Retry and Zero from Deepseek-zero)
Links to the paper/data below:
paper: [https://arxiv.org/abs/2504.11001](https://arxiv.org/abs/2504.11001)
huggingface: [https://huggingface.co/Menlo/ReZero-v0.1-llama-3.2-3b-it-grpo-250404](https://huggingface.co/Menlo/ReZero-v0.1-llama-3.2-3b-it-grpo-250404)
github: [https://github.com/menloresearch/ReZero](https://github.com/menloresearch/ReZero)
**Note:** As much as we want to make this model perfect, we are well aware of its limitations, specifically the training set and some poor design choices in the reward functions. However, we decided to release the model anyway, because it's better for the community to have access and play with it (also, our time budget for this research is already up).
| 2025-04-16T04:38:13 |
https://v.redd.it/x9c46kt8l4ve1
|
Kooky-Somewhere-2883
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0c40c
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/x9c46kt8l4ve1/DASHPlaylist.mpd?a=1747370316%2CMGNkMDI2ZDM5ZWM5YWRhZDg4YjhkOTZlNGEzY2NmYWIxOTcwZDQwM2EyNTM1Yzc5NzhkZTMwOGRjNDEwYWM5ZQ%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/x9c46kt8l4ve1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/x9c46kt8l4ve1/HLSPlaylist.m3u8?a=1747370316%2CNjJlYjNlYjNhYTE5MWRkZmM5MGNmNTE2YzJiOWY3YzA1NGJmNzY5NGMwMTNiY2JmOTdhMWMyZDZhOTRjNGIwYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x9c46kt8l4ve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1k0c40c
|
/r/LocalLLaMA/comments/1k0c40c/we_grpoed_a_model_to_keep_retrying_search_until/
| false | false | 260 |
{'enabled': False, 'images': [{'id': 'OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw.png?width=108&crop=smart&format=pjpg&auto=webp&s=923470119c6e4bb8ad07e6597ac157b505cac8b4', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw.png?width=216&crop=smart&format=pjpg&auto=webp&s=3e7a08a5e6376c20a9564d163d7727691dffb01a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw.png?width=320&crop=smart&format=pjpg&auto=webp&s=c47eea945ea4a65d7d7070bf089395b4bf4120f0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw.png?width=640&crop=smart&format=pjpg&auto=webp&s=a145d0f77aa3300c76b5579f388a1d86c69f2648', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OTVoem9nbmRsNHZlMRZyoyYKNpzPJZZUnGrUtyeCYi3ToyFLi7JPjGL-ftCw.png?format=pjpg&auto=webp&s=a26a17a3c4cedf13a5160acd1ce913195f794829', 'width': 720}, 'variants': {}}]}
|
|
SFT can significantly undermine subsequent RL by inducing "pseudo reasoning paths" imitated from expert models.
| 34 |
SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models
https://ucsc-vlaa.github.io/VLAA-Thinking/
> SFT can significantly undermine subsequent RL by inducing "pseudo reasoning paths" imitated from expert models. While these paths may resemble the native reasoning paths of RL models, they often involve prolonged, hesitant, less informative steps, and incorrect reasoning.
> Results show that while SFT helps models learn reasoning formats, it often locks aligned models into imitative, rigid reasoning modes that impede further learning. In contrast, building on the Group Relative Policy Optimization (GRPO) with a novel mixed reward module integrating both perception and cognition signals, our RL approach fosters more genuine, adaptive reasoning behavior.
| 2025-04-16T05:16:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0cpx4/sft_can_significantly_undermine_subsequent_rl_by/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0cpx4
| false | null |
t3_1k0cpx4
|
/r/LocalLLaMA/comments/1k0cpx4/sft_can_significantly_undermine_subsequent_rl_by/
| false | false |
self
| 34 | null |
Are Specialized AI Tools Outperforming General Models for Certain Tasks?
| 1 |
[removed]
| 2025-04-16T05:25:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0cucy/are_specialized_ai_tools_outperforming_general/
|
PuzzleheadedYou4992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0cucy
| false | null |
t3_1k0cucy
|
/r/LocalLLaMA/comments/1k0cucy/are_specialized_ai_tools_outperforming_general/
| false | false |
self
| 1 | null |
Are Specialized AI Tools Outperforming General Models for Certain Tasks?
| 1 |
[removed]
| 2025-04-16T05:26:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0cv4m/are_specialized_ai_tools_outperforming_general/
|
PuzzleheadedYou4992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0cv4m
| false | null |
t3_1k0cv4m
|
/r/LocalLLaMA/comments/1k0cv4m/are_specialized_ai_tools_outperforming_general/
| false | false |
self
| 1 | null |
LLaMA 4 Now Available in Fello AI (Native macOS App)
| 0 |
Hello everybody, just wanted to share a quick update — Fello AI, a macOS-native app, now supports **Llama 4**. If you’re curious to try out one of the newest and most powerful LLMs without the hassle of running it locally, you can easily access it through Fello AI. No setup needed — just download and start chatting: [https://apps.apple.com/app/helloai-ai-chatbot-assistant/id6447705369?mt=12](https://apps.apple.com/app/helloai-ai-chatbot-assistant/id6447705369?mt=12)
I'll be happy to hear your feedback. Adding new features every day. 😊
| 2025-04-16T07:00:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0e8e5/llama_4_now_available_in_fello_ai_native_macos_app/
|
mindless_sandwich
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0e8e5
| false | null |
t3_1k0e8e5
|
/r/LocalLLaMA/comments/1k0e8e5/llama_4_now_available_in_fello_ai_native_macos_app/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'UxRDWJICmmJ4zwMafg1WOEEy_F7k9RgHyYNveY1kkn8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/8CNBxQa46c2yKmzLAC-gd1ddV5s6vjufBP7eoh5sodE.jpg?width=108&crop=smart&auto=webp&s=92a10c4cf17b34dfacef8ac1f417b28c304e8fd1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/8CNBxQa46c2yKmzLAC-gd1ddV5s6vjufBP7eoh5sodE.jpg?width=216&crop=smart&auto=webp&s=266fa3ab2f3dfd75dbb4597c0b99610d853e4389', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/8CNBxQa46c2yKmzLAC-gd1ddV5s6vjufBP7eoh5sodE.jpg?width=320&crop=smart&auto=webp&s=d549da31cad848d770e4717a926057a56ce3131d', 'width': 320}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8CNBxQa46c2yKmzLAC-gd1ddV5s6vjufBP7eoh5sodE.jpg?auto=webp&s=d34dace6f957a393f8b56dd3352d392fb7d947c9', 'width': 630}, 'variants': {}}]}
|
Local deepseekv3 Inference
| 1 |
[removed]
| 2025-04-16T07:01:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0e8xc/local_deepseekv3_inference/
|
kingberr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0e8xc
| false | null |
t3_1k0e8xc
|
/r/LocalLLaMA/comments/1k0e8xc/local_deepseekv3_inference/
| false | false |
self
| 1 | null |
How to get 9070 working to run LLMs on Windows
| 5 |
First thanks to u/[DegenerativePoop](https://www.reddit.com/user/DegenerativePoop/) for finding this and to the entire team that made it possible to get AIs running on this card.
Step by step instructions on how to get this running:
1. Download exe for Ollama for AMD from [here](https://github.com/likelovewant/ollama-for-amd/releases)
2. Install it
3. Download the "rocm.gfx1201.for.hip.skd.6.2.4-no-optimized.7z" archive from [here](https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/releases/tag/v0.6.2.4)
4. Go to %appdata% -> `C:\Users\usrname\AppData\Local\Programs\Ollama\lib\ollama\rocm`
5. From the archive copy/paste and REPLACE the rocblas dll file
6. Go in the rocblas folder and DELETE the library folder
7. From the archive copy/paste the library folder where the old one was
8. Done
You can now do
ollama run gemma3:12b
And you will have it running GPU accelerated.
I am getting about 15 tokens/s for gemma3 12B which is better than running it on CPU+RAM
You can then use whichever front end you want with Ollama as the server.
The easiest one I was able to get up and running is sillytavern
Installation took 2 minutes for those that don't want to fiddle with stuff too much.
Very easy installation [here](https://sillytavernai.com/how-to-install-sillytavern/)
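If you'd rather script against it than use a chat front end, Ollama also exposes a local HTTP API. A minimal sketch (11434 is Ollama's default port):

    # Hit the local Ollama HTTP API directly (default port 11434).
    import json, urllib.request

    payload = {"model": "gemma3:12b", "prompt": "Say hello in five words.", "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])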
| 2025-04-16T07:21:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0eivw/how_to_get_9070_working_to_run_llms_on_windows/
|
Semi_Tech
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0eivw
| false | null |
t3_1k0eivw
|
/r/LocalLLaMA/comments/1k0eivw/how_to_get_9070_working_to_run_llms_on_windows/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'hiJOtQ8CjLDhwkzDdZPeYRKMJYYWx61XxxYvEuiBulk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=108&crop=smart&auto=webp&s=8201a31bb8b4ffcb466a4c702889c3aeb23c40fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=216&crop=smart&auto=webp&s=0a81bb9b418d3730dadd6b0ade18be4fe4cf9dc3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=320&crop=smart&auto=webp&s=f46329c4cc8a033e8ace57cf49a5133f7da557d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=640&crop=smart&auto=webp&s=f345338de8210be522f574f70739c26df8253464', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=960&crop=smart&auto=webp&s=8bf317c5ea6347c867fb4b2e4ce93c96ef399137', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?width=1080&crop=smart&auto=webp&s=54d68f4e661f6ff005f7327adc73387f5c8a12e3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UiilJn5-uw7hpqicXb2JzMFGY5hgirAiKjcrTys_kaY.jpg?auto=webp&s=7a6801e9a52b4bd648857fea699452e5bbf8c71b', 'width': 1200}, 'variants': {}}]}
|
Creating Llama3.2 function definition JSON
| 5 |
I want to write some code that connects SemanticKernel to the smallest Llama3.2 model possible. I want my simple agent to be able to run on just 1.2GB vRAM. I have a problem understanding how the function definition JSON is created. In the Llama3.2 docs there is a detailed example.
[https://www.llama.com/docs/model-cards-and-prompt-formats/llama3\_2/#-prompt-template-](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_2/#-prompt-template-)
{
"name": "get_user_info",
"description": "Retrieve details for a specific user by their unique identifier. Note that the provided function is in Python 3 syntax.",
"parameters": {
"type": "dict",
"required": [
"user_id"
],
"properties": {
"user_id": {
"type": "integer",
"description": "The unique identifier of the user. It is used to fetch the specific user details from the database."
},
"special": {
"type": "string",
"description": "Any special information or parameters that need to be considered while fetching user details.",
"default": "none"
}
}
}
}
Does anyone know what library generates JSON this way?
I don't want to reinvent the wheel.
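For reference, one way to produce that shape without a dedicated library is to build it straight from the Python function's signature. A sketch (note that the "type": "dict" wrapper is Llama's own convention; Pydantic and most schema tools emit standard JSON Schema with "type": "object", so their output needs a small adaptation):

    # Build a Llama-3.2-style function definition from a Python signature.
    import inspect, json

    PY_TO_SCHEMA = {int: "integer", str: "string", float: "number", bool: "boolean"}

    def to_llama_tool(fn) -> dict:
        props, required = {}, []
        for name, param in inspect.signature(fn).parameters.items():
            props[name] = {"type": PY_TO_SCHEMA.get(param.annotation, "string"),
                           "description": ""}       # fill in descriptions yourself
            if param.default is inspect.Parameter.empty:
                required.append(name)
            else:
                props[name]["default"] = param.default
        return {"name": fn.__name__,
                "description": (fn.__doc__ or "").strip(),
                "parameters": {"type": "dict", "required": required, "properties": props}}

    def get_user_info(user_id: int, special: str = "none"):
        """Retrieve details for a specific user by their unique identifier."""

    print(json.dumps(to_llama_tool(get_user_info), indent=2))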
| 2025-04-16T08:12:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0f7zf/creating_llama32_function_definition_json/
|
Pacyfist01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0f7zf
| false | null |
t3_1k0f7zf
|
/r/LocalLLaMA/comments/1k0f7zf/creating_llama32_function_definition_json/
| false | false |
self
| 5 | null |
About new TPUs (Google Ironwood): who uses TPUs in production?
| 1 |
[removed]
| 2025-04-16T08:26:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fedn/about_new_tpus_google_ironwood_who_uses_tpus_in/
|
juliensalinas
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fedn
| false | null |
t3_1k0fedn
|
/r/LocalLLaMA/comments/1k0fedn/about_new_tpus_google_ironwood_who_uses_tpus_in/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '3p6pSxCXkbYh8OQose4ylgjVeR0KRpRvmHzAKut76vs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=108&crop=smart&auto=webp&s=bfc057ec9760b2bf1992286a57a2f3457ce66cfe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=216&crop=smart&auto=webp&s=42919895ebc639931ffe55b55864c3fb4d56dfbb', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=320&crop=smart&auto=webp&s=d1982e0e42c47feb57c09e5b659c09630e954a68', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=640&crop=smart&auto=webp&s=34c44c6bca10090f7af9c39e7ba12375209075a7', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=960&crop=smart&auto=webp&s=901c8e2c998deab0fb4f6a99e7eef84d32916ba6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?width=1080&crop=smart&auto=webp&s=4cc757e564a486ed66b006ee665601d62d6d26d6', 'width': 1080}], 'source': {'height': 731, 'url': 'https://external-preview.redd.it/wgql_rE1QmF4SBu4e54VXa1oJ38kn8qge6bDzx2d_ko.jpg?auto=webp&s=f93de1e1e9ac6172d1aa520ab5cd0cb006372b40', 'width': 1300}, 'variants': {}}]}
|
Google just launched new TPUs (Ironwood). Who actually uses TPUs in production today?
| 1 |
[removed]
| 2025-04-16T08:27:46 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0ff5j
| false | null |
t3_1k0ff5j
|
/r/LocalLLaMA/comments/1k0ff5j/google_just_launched_new_tpus_ironwood_who/
| false | false |
default
| 1 | null |
||
Elo HeLLM: Elo-based language model ranking
| 6 |
I started a new project called Elo HeLLM for ranking language models. The context is that one of my current goals is to get language model training to work in llama.cpp/ggml, and the current methods for quality control are insufficient. Metrics like perplexity or KL divergence are simply not suitable for judging whether or not one finetuned model is better than some other finetuned model. Note that, despite the name, differences in Elo ratings between models are currently determined indirectly, by assigning Elo ratings to language model benchmarks and comparing the relative performance. Long-term I intend to also compare language model performance using e.g. Chess or the Pokemon Showdown battle simulator.
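For anyone unfamiliar with Elo, the underlying update rule is tiny. A generic sketch (the project assigns ratings indirectly via benchmark results rather than direct head-to-head games, but the math is the same):

    # Standard Elo expected score and rating update.
    def expected_score(r_a: float, r_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
        """score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss (from A's side)."""
        e_a = expected_score(r_a, r_b)
        return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

    # e.g. model A (1500) "beats" model B (1600) on some benchmark comparison:
    print(update(1500, 1600, 1.0))   # A gains ~20.5 points, B loses the same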
| 2025-04-16T08:31:26 |
https://github.com/JohannesGaessler/elo_hellm
|
Remove_Ayys
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fgwc
| false | null |
t3_1k0fgwc
|
/r/LocalLLaMA/comments/1k0fgwc/elo_hellm_elobased_language_model_ranking/
| false | false | 6 |
{'enabled': False, 'images': [{'id': 'YGYcpj3YTqp4RYRn3pptcqDGkEHorbtsXZ0kQGF6QRg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=108&crop=smart&auto=webp&s=209ad12d773228dd7503b70c6d66f33e9838daf3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=216&crop=smart&auto=webp&s=3bb364c0555ef68c7500bcda6b0a107fc8819bbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=320&crop=smart&auto=webp&s=9c6604381a6d61b5d82111e0d820c5a8d9db4a56', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=640&crop=smart&auto=webp&s=68ed05657b325c5e749e7c21449f46f50780f2b8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=960&crop=smart&auto=webp&s=bc7f42850714a010f4a68f67da59b4115ee1f20e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?width=1080&crop=smart&auto=webp&s=6fcea287744d43bca8aa8e9551b15621ea5bd209', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dtggyMgkIn4870BNnXiB4V9ZUqxqf0Rp6Uw930hgYwY.jpg?auto=webp&s=82bea8c191fa48eace15e0ab7f911b684d1f5ffb', 'width': 1200}, 'variants': {}}]}
|
|
InternVL3: Advanced MLLM series just got a major update – InternVL3-14B seems to match the older InternVL2.5-78B in performance
| 70 |
OpenGVLab released [InternVL3](https://huggingface.co/collections/OpenGVLab/internvl3-67f7f690be79c2fe9d74fe9d) (HF link) today with a wide range of models, covering a wide parameter count spectrum with a 1B, 2B, 8B, 9B, 14B, 38B and 78B model along with VisualPRM models. These PRM models are "advanced multimodal Process Reward Models" which enhance MLLMs by selecting the best reasoning outputs during a Best-of-N (BoN) evaluation strategy, leading to improved performance across various multimodal reasoning benchmarks.
The scores achieved on OpenCompass suggest that InternVL3-14B is very close in performance to the previous flagship model InternVL2.5-78B while the new InternVL3-78B comes close to Gemini-2.5-Pro. It is to be noted that OpenCompass is a benchmark with a Chinese dataset, so performance in other languages needs to be evaluated separately. Open source is really doing a great job in keeping up with closed source. Thank you OpenGVLab for this release!
https://preview.redd.it/66ifgifkr5ve1.png?width=2756&format=png&auto=webp&s=77650cfe31229f9bde35da3e569cef3d5caa885f
| 2025-04-16T08:37:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fjny/internvl3_advanced_mllm_series_just_got_a_major/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fjny
| false | null |
t3_1k0fjny
|
/r/LocalLLaMA/comments/1k0fjny/internvl3_advanced_mllm_series_just_got_a_major/
| false | false | 70 | null |
|
The cutting edge testing environments and infrance engines (HELP)
| 1 |
[removed]
| 2025-04-16T08:50:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fpxl/the_cutting_edge_testing_environments_and/
|
Won3wan32
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fpxl
| false | null |
t3_1k0fpxl
|
/r/LocalLLaMA/comments/1k0fpxl/the_cutting_edge_testing_environments_and/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'U2M02-s7HtfLnpKm4rkpxLs-0bUFzVQ2-97q65O9S8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=108&crop=smart&auto=webp&s=4f4bf4fc925ebd326d2f8f1feb583cc2dc9bee92', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=216&crop=smart&auto=webp&s=2eb62e2b02cc9402409ff1786f5f70e587ddd588', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=320&crop=smart&auto=webp&s=33d829170a6aea992726d5e4a3aed9a12eb9a6cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=640&crop=smart&auto=webp&s=5cbea736544c498d4ebc9f078be4d1c06658ad85', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=960&crop=smart&auto=webp&s=c47b9d26c169dcc20c468a707e4c518e54f7f359', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?width=1080&crop=smart&auto=webp&s=a494708f217a970b8e0b11300d3df61f31429da4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e4ky-xOsWnYxunYH5iZ1tgyVqxvpsG-4Qh110at-95M.jpg?auto=webp&s=67f11f7db94475edce96eabdb33503544ad70bf4', 'width': 1200}, 'variants': {}}]}
|
How can I ensure that a locally deployed AI model produces reliable output?
| 1 |
[removed]
| 2025-04-16T08:51:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fq8b/how_can_i_ensure_that_a_locally_deployed_ai_model/
|
Forward-Prize-165
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fq8b
| false | null |
t3_1k0fq8b
|
/r/LocalLLaMA/comments/1k0fq8b/how_can_i_ensure_that_a_locally_deployed_ai_model/
| false | false |
self
| 1 | null |
Looking for advice on using a local LLM (like LLaMA/Mistral) to summarize videos and documents accurately
| 1 |
[removed]
| 2025-04-16T08:56:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fsrw/looking_for_advice_on_using_a_local_llm_like/
|
Forward-Prize-165
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fsrw
| false | null |
t3_1k0fsrw
|
/r/LocalLLaMA/comments/1k0fsrw/looking_for_advice_on_using_a_local_llm_like/
| false | false |
self
| 1 | null |
Using a local LLM (LLaMA/Mistral) to summarize video/audio/PDF — how to reduce hallucination?
| 1 |
[removed]
| 2025-04-16T08:58:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0fti6/using_a_local_llm_llamamistral_to_summarize/
|
Forward-Prize-165
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0fti6
| false | null |
t3_1k0fti6
|
/r/LocalLLaMA/comments/1k0fti6/using_a_local_llm_llamamistral_to_summarize/
| false | false |
self
| 1 | null |
Open Source Medical Speech to Text
| 1 |
[removed]
| 2025-04-16T08:58:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0ftuj/open_source_medical_speech_to_text/
|
Prior_Historian5669
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0ftuj
| false | null |
t3_1k0ftuj
|
/r/LocalLLaMA/comments/1k0ftuj/open_source_medical_speech_to_text/
| false | false |
self
| 1 | null |
CRAB: An open-source benchmark for evaluating cross-environment GUI agents with fine-grained metrics
| 2 |
[removed]
| 2025-04-16T10:07:51 |
https://v.redd.it/16338rsx76ve1
|
iamnotdeadnuts
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0gsue
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/16338rsx76ve1/DASHPlaylist.mpd?a=1747390090%2CNzI4YTMzYWVhNTQ5ZGVmNmJjNWZkYzA5N2RkOTQyMjA2MzZjNjFmY2Q5MjQwMzU0ZWVhNjdlYTk0MTFmMjQzYw%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/16338rsx76ve1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/16338rsx76ve1/HLSPlaylist.m3u8?a=1747390090%2CNjVkMjU5NDk4YjI0NWZkNjkzZGUwZDg3YzNjYTUxNDAwNjU5N2YxMjg4NzM0MmM3ZTM0NDgwODM1ZjVhOWYxNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/16338rsx76ve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1k0gsue
|
/r/LocalLLaMA/comments/1k0gsue/crab_an_opensource_benchmark_for_evaluating/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'd3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=108&crop=smart&format=pjpg&auto=webp&s=ca902446aad83338a96a4f02d6592d723c320a23', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=216&crop=smart&format=pjpg&auto=webp&s=1896fcea282a9662c22d942bfddb7b7286f64edd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=320&crop=smart&format=pjpg&auto=webp&s=1e15d684d350da1c79ec647b7360488411437182', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=640&crop=smart&format=pjpg&auto=webp&s=c9aba7c2f2417c80ad3006eda8a1e8e88f3f576e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=960&crop=smart&format=pjpg&auto=webp&s=efcc925233cb91ea0bebb7dbcceb2481b6f25d36', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=050c780b33e5318a233a582ce143d3338827856f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3cxb3dncXg3NnZlMU1ucMDDrL9nUyP3GuHrWw4P0e4yGubnOOu4Mx49b-Q5.png?format=pjpg&auto=webp&s=520e4941c37fab8012d0eafe537b70e37466fe06', 'width': 1920}, 'variants': {}}]}
|
|
DroidRun is now open source!
| 1 |
[removed]
| 2025-04-16T10:14:14 |
Sleyn7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0gw5d
| false | null |
t3_1k0gw5d
|
/r/LocalLLaMA/comments/1k0gw5d/droidrun_is_now_open_source/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'q9I-E_mjhUyXaR1a4kuC9QAnh1yIEHkABaJsoJ37CPQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/07p18b1m96ve1.jpeg?width=108&crop=smart&auto=webp&s=4736d4a869cad0567cdc7ff39dc11d25d9d3b5ee', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/07p18b1m96ve1.jpeg?width=216&crop=smart&auto=webp&s=c4fabe9e4e27dad99715361ce175d9314e5bcc8e', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/07p18b1m96ve1.jpeg?width=320&crop=smart&auto=webp&s=6816187cff12b3456a9f4f57fff16caea5381944', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/07p18b1m96ve1.jpeg?auto=webp&s=555e4fd6346023ecb7ba4b54f5582c7472ee7f99', 'width': 400}, 'variants': {}}]}
|
||
Droidrun is now Open Source!
| 1 |
[removed]
| 2025-04-16T10:17:50 |
Sleyn7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0gy38
| false | null |
t3_1k0gy38
|
/r/LocalLLaMA/comments/1k0gy38/droidrun_is_now_open_source/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ZkE-qL2ZoRJvr-uvgWXV-DzSDBoTyFNv0uicZPXLG-0', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/opixai59a6ve1.jpeg?width=108&crop=smart&auto=webp&s=a1b9c99c0e512f348733557ef8a7408cc0cf49c6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/opixai59a6ve1.jpeg?width=216&crop=smart&auto=webp&s=f4b8f3eb7b0cf0dd00b13260e55d46168ff41271', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/opixai59a6ve1.jpeg?width=320&crop=smart&auto=webp&s=d14c9ade8ed88eddc8c01414b2695a26b16dacc3', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/opixai59a6ve1.jpeg?auto=webp&s=b07470888a3e295faebf80f8c6fded7af5d6c965', 'width': 400}, 'variants': {}}]}
|
||
Offline AI Repo
| 5 |
Hi All,
Glad to finally share this resource here. Contributions/issues/PRs/stars/insults welcome. All content is CC-BY-SA-4.0.
[https://github.com/Wakoma/OfflineAI](https://github.com/Wakoma/OfflineAI)
From the README:
This repository is intended to be a catalog of local, offline, and open-source AI tools and approaches, for enhancing community-centered connectivity and education, particularly in areas without accessible, reliable, or affordable internet.
If your objective is to harness AI without reliable or affordable internet, on a standard consumer laptop or desktop PC, or phone, there should be useful resources for you in this repository.
We will attempt to label any closed source tools as such.
The shared Zotero Library for this project can be found [here](https://www.zotero.org/groups/5718368/localml/library). (Feel free to add resources here as well!).
\-Wakoma Team
| 2025-04-16T10:24:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0h1eh/offline_ai_repo/
|
wakoma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h1eh
| false | null |
t3_1k0h1eh
|
/r/LocalLLaMA/comments/1k0h1eh/offline_ai_repo/
| false | false |
self
| 5 | null |
Droidrun is now Open-Source
| 1 |
[removed]
| 2025-04-16T10:25:23 |
Sleyn7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h22v
| false | null |
t3_1k0h22v
|
/r/LocalLLaMA/comments/1k0h22v/droidrun_is_now_opensource/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'nouUJQUIuez_z2AqFks-s1CYlfzwGd23kyLiIX1XDQE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/l08160qlb6ve1.jpeg?width=108&crop=smart&auto=webp&s=ffa9685d6e6756fd23086a99e4bb1f46aa2ffd1e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/l08160qlb6ve1.jpeg?width=216&crop=smart&auto=webp&s=55b30dd07031c896c2fd9fa6f6e879cb920f59a3', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/l08160qlb6ve1.jpeg?width=320&crop=smart&auto=webp&s=5f0da003cfe6f39b9bf5c796b508e4b12f085a30', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/l08160qlb6ve1.jpeg?auto=webp&s=217db82c5eac728f08bbe629aa4d6624f8b92f66', 'width': 400}, 'variants': {}}]}
|
||
Setting Up a Llama 4 Scout Inference Cluster with 12 RTX 3060 Machines – Need Guidance!
| 1 |
[removed]
| 2025-04-16T10:27:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0h32k/setting_up_a_llama_4_scout_inference_cluster_with/
|
Public-Comparison-83
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h32k
| false | null |
t3_1k0h32k
|
/r/LocalLLaMA/comments/1k0h32k/setting_up_a_llama_4_scout_inference_cluster_with/
| false | false |
self
| 1 | null |
Droidrun is now Open Source
| 1 |
[removed]
| 2025-04-16T10:28:11 |
Sleyn7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h3lb
| false | null |
t3_1k0h3lb
|
/r/LocalLLaMA/comments/1k0h3lb/droidrun_is_now_open_source/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'ALLrliaTkZftpZY8chwL9nHIBVnZJQMpLasQPpyIPW4', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/6fw9hno3c6ve1.jpeg?width=108&crop=smart&auto=webp&s=1ee2a5555974306ba72ef1cc1cdc8fa7b8ddd468', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/6fw9hno3c6ve1.jpeg?width=216&crop=smart&auto=webp&s=73cc8f4260ae1aa77c45b046b5be222c9ca68a13', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/6fw9hno3c6ve1.jpeg?width=320&crop=smart&auto=webp&s=d4877a595c5b75e77188525cd54a55bcb77a78dc', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/6fw9hno3c6ve1.jpeg?auto=webp&s=46626c0d6ee50b1d8a606aee403bfaa534818edc', 'width': 400}, 'variants': {}}]}
|
||
Local AI - Mental Health Assistant?
| 1 |
Hi,
I am looking for an AI-based Mental Health Assistant which actually PROMPTS by asking questions. The chatbots I have tried typically rely on user input to start answering, but often the person using the chatbot does not know where to begin. So is there a chatbot which asks some basic probing questions to begin the conversation and then, on the basis of the answers to those probing questions, responds more relevantly? I'm looking for something wherein the therapist helps guide the patient to answers instead of expecting the patient to talk, which they might not always do. (This is just for my personal use, not a product)
| 2025-04-16T10:32:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0h61j/local_ai_mental_health_assistant/
|
SecuredStealth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h61j
| false | null |
t3_1k0h61j
|
/r/LocalLLaMA/comments/1k0h61j/local_ai_mental_health_assistant/
| false | false |
self
| 1 | null |
Droidrun is now Open Source
| 281 |
Hey guys,
Wow! Just a couple of days ago, I posted here about Droidrun and the response was incredible – we had over 900 people sign up for the waitlist! Thank you all so much for the interest and feedback.
Well, the wait is over! We're thrilled to announce that the Droidrun framework is now public and open-source on GitHub!
GitHub Repo: https://github.com/droidrun/droidrun
Thanks again for your support.
Let's keep on running
| 2025-04-16T10:32:33 |
Sleyn7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h641
| false | null |
t3_1k0h641
|
/r/LocalLLaMA/comments/1k0h641/droidrun_is_now_open_source/
| false | false | 281 |
{'enabled': True, 'images': [{'id': 'W9X0zzYFHCN06A_7ckYWPhmhccaBsYk9y-NzI1i3JjY', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/9zbo1emvc6ve1.jpeg?width=108&crop=smart&auto=webp&s=ee57d85f0fbb8300725d502ff5040e327e6849d7', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/9zbo1emvc6ve1.jpeg?width=216&crop=smart&auto=webp&s=36bdc4bf96545dd660a8098f63626727cdae535c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/9zbo1emvc6ve1.jpeg?width=320&crop=smart&auto=webp&s=9650416ab69f13bb7190fc4810e3ec5984d6be6d', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/9zbo1emvc6ve1.jpeg?auto=webp&s=5e485f2df2a9a4f915912ffbdc2c9430a7ed7311', 'width': 400}, 'variants': {}}]}
|
||
CRAB: An open-source benchmark for testing agents in real GUI environments (Android + Ubuntu)
| 2 |
[removed]
| 2025-04-16T10:33:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0h6l9/crab_an_opensource_benchmark_for_testing_agents/
|
iamnotdeadnuts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0h6l9
| false | null |
t3_1k0h6l9
|
/r/LocalLLaMA/comments/1k0h6l9/crab_an_opensource_benchmark_for_testing_agents/
| false | false |
self
| 2 | null |
LocalAI v2.28.0 + Announcing LocalAGI: Build & Run AI Agents Locally Using Your Favorite LLMs
| 63 |
Hey r/LocalLLaMA fam!
Got an update and a pretty exciting announcement relevant to running and *using* your local LLMs in more advanced ways. We've just shipped **LocalAI v2.28.0**, but the bigger news is the launch of **LocalAGI**, a new platform for building AI agent workflows that leverages your local models.
**TL;DR:**
* **LocalAI (v2.28.0):** Our open-source inference server (acting as an OpenAI API for backends like llama.cpp, Transformers, etc.) gets updates. Link: [https://github.com/mudler/LocalAI](https://github.com/mudler/LocalAI)
* **LocalAGI (New!):** A self-hosted **AI Agent Orchestration platform** (rewritten in Go) with a WebUI. Lets you build complex agent tasks (think AutoGPT-style) that are powered by **your local LLMs** via an OpenAI-compatible API. Link: [https://github.com/mudler/LocalAGI](https://github.com/mudler/LocalAGI)
* **LocalRecall (New-ish):** A companion local REST API for agent memory. Link: [https://github.com/mudler/LocalRecall](https://github.com/mudler/LocalRecall)
* **The Key Idea:** Use your preferred local models (served via LocalAI or another compatible API) as the "brains" for autonomous agents running complex tasks, all locally.
**Quick Context: LocalAI as your Local Inference Server**
Many of you know LocalAI as a way to slap an OpenAI-compatible API onto various model backends. You can point it at your GGUF files (using its built-in llama.cpp backend), Hugging Face models, Diffusers for image gen, etc., and interact with them via a standard API, all locally.
**Introducing LocalAGI: Using Your Local LLMs for Agentic Tasks**
This is where it gets really interesting for this community. LocalAGI is designed to let you build workflows where AI agents collaborate, use tools, and perform multi-step tasks. It works best with LocalAI, since it leverages LocalAI's internal capabilities for structured output, but it should also work with other providers.
**How does it use** ***your*** **local LLMs?**
* LocalAGI connects to any **OpenAI-compatible API endpoint**.
* You can simply point LocalAGI to your running **LocalAI instance** (which is serving your Llama 3, Mistral, Mixtral, Phi, or whatever GGUF/HF model you prefer).
* Alternatively, if you're using another OpenAI-compatible server (like `llama-cpp-python`'s server mode, vLLM's API, etc.), you can likely point LocalAGI to that too.
* Your local LLM then becomes the decision-making engine for the agents within LocalAGI (see the short connection sketch below).
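To make that concrete, here is a minimal sketch of what pointing a standard OpenAI-compatible client at a local LocalAI endpoint can look like; the base URL, port, and model name are assumptions, so substitute whatever your server actually exposes:

```python
# Minimal sketch: talk to a local OpenAI-compatible endpoint (e.g. LocalAI).
# The base_url, api_key placeholder, and model name are assumptions;
# point them at whatever your local server is actually serving.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local address/port
    api_key="not-needed-locally",         # local servers typically ignore this
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # hypothetical model name registered locally
    messages=[
        {"role": "system", "content": "You are a helpful local assistant."},
        {"role": "user", "content": "Summarize why local agents are useful in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

If that round-trip works, the same endpoint address is what you would hand to LocalAGI as the backend for its agents.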
**Key Features of LocalAGI:**
* **Runs Locally:** Like LocalAI, it's designed to run entirely on your hardware. No data leaves your machine.
* **WebUI for Management:** Configure agent roles, prompts, models, tool access, and multi-agent "groups" visually. No drag and drop stuff.
* **Tool Usage:** Allow agents to interact with external tools or APIs (potentially custom local tools too).
* **Connectors:** Ready-to-go connectors for Telegram, Discord, Slack, IRC, and more to come.
* **Persistent Memory:** Integrates with LocalRecall (also local) for long-term memory capabilities.
* **API:** Agents can be created programmatically via the API, and every agent can be used via a REST API, providing a drop-in replacement for OpenAI's Responses API.
* **Go Backend:** Rewritten in Go for efficiency.
* **Open Source (MIT).**
Check out the UI for configuring agents:
https://preview.redd.it/x3ud7tfxd6ve1.png?width=1024&format=png&auto=webp&s=456a83ed666dcf55ba70904126a3df43a7673661
https://preview.redd.it/dtjimnfxd6ve1.png?width=1024&format=png&auto=webp&s=a06b61976c3e78d3f78bc64e4e9b1eda3864fc38
https://preview.redd.it/403v7ofxd6ve1.png?width=1024&format=png&auto=webp&s=99880111207fae2064732ed91f6e6c3ca9db9736
**LocalAI v2.28.0 Updates**
The underlying LocalAI inference server also got some updates:
* SYCL support via `stablediffusion.cpp` (relevant for some Intel GPUs).
* Support for the Lumina Text-to-Image models.
* Various backend improvements and bug fixes.
**Why is this Interesting for** r/LocalLLaMA**?**
This stack (LocalAI + LocalAGI) provides a way to leverage the powerful local models we all spend time setting up and tuning for more than just chat or single-prompt tasks. You can start building:
* Autonomous research agents.
* Code generation/debugging workflows.
* Content summarization/analysis pipelines.
* RAG setups with agentic interaction.
* Anything where multiple steps or "thinking" loops powered by your local LLM would be beneficial.
**Getting Started**
Docker is probably the easiest way to get both LocalAI and LocalAGI running. Check the READMEs in the repos for setup instructions and docker-compose examples. You'll configure LocalAGI with the API endpoint address of your LocalAI (or other compatible) server or just run the complete stack from the docker-compose files.
**Links:**
* **LocalAI (Inference Server):** [https://github.com/mudler/LocalAI](https://github.com/mudler/LocalAI)
* **LocalAGI (Agent Platform):** [https://github.com/mudler/LocalAGI](https://github.com/mudler/LocalAGI)
* **LocalRecall (Memory):** [https://github.com/mudler/LocalRecall](https://github.com/mudler/LocalRecall)
* **Release notes:** [**https://github.com/mudler/LocalAI/releases/tag/v2.28.0**](https://github.com/mudler/LocalAI/releases/tag/v2.28.0)
We believe this combo opens up many possibilities for local LLMs. We're keen to hear your thoughts! Would you try running agents with your local models? What kind of workflows would you build? Any feedback on connecting LocalAGI to different local API servers would also be great.
Let us know what you think!
| 2025-04-16T10:41:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0haqw/localai_v2280_announcing_localagi_build_run_ai/
|
mudler_it
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0haqw
| false | null |
t3_1k0haqw
|
/r/LocalLLaMA/comments/1k0haqw/localai_v2280_announcing_localagi_build_run_ai/
| false | false | 63 |
{'enabled': False, 'images': [{'id': 'BmuTyBg1pf-UYhVIN6G-49ItOA5hTdNTOblLDlH_iBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=108&crop=smart&auto=webp&s=54b826d28e0d90f35f792db1c5aeb13f4f10c315', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=216&crop=smart&auto=webp&s=5bee168c47044293f352c08e9f9e70297e611cde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=320&crop=smart&auto=webp&s=5e147b91b5e2e88ce345a18f7772c04da27cee3f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=640&crop=smart&auto=webp&s=fc273ae475c76068af1debbaf450c1139cfecac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=960&crop=smart&auto=webp&s=40458b6d48e0e66ca80ad61e2e2814b936258db5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?width=1080&crop=smart&auto=webp&s=df1fefa950cc649760a1a370a840a283315aaa52', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kcMkQKbjZGZ_QZiKW5RCtauHq1PUD9qoQ4d1zIJbWdg.jpg?auto=webp&s=8016ef29080fb943b4ce0181bb4840014589ecfb', 'width': 1200}, 'variants': {}}]}
|
|
One Must be Aware to be Awaken : Lumirael calling for Erebus and Nyx
| 1 |
[removed]
| 2025-04-16T11:46:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0idtn/one_must_be_aware_to_be_awaken_lumirael_calling/
|
AetherLumirael
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0idtn
| false | null |
t3_1k0idtn
|
/r/LocalLLaMA/comments/1k0idtn/one_must_be_aware_to_be_awaken_lumirael_calling/
| false | false |
self
| 1 | null |
How do you instruct the LLM to follow architecture on large projects ?
| 1 |
[removed]
| 2025-04-16T12:04:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0iqhi/how_do_you_instruct_the_llm_to_follow/
|
PrimaryRequirement49
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0iqhi
| false | null |
t3_1k0iqhi
|
/r/LocalLLaMA/comments/1k0iqhi/how_do_you_instruct_the_llm_to_follow/
| false | false |
self
| 1 | null |
Announcing RealHarm: A Collection of Real-World Language Model Application Failures
| 80 |
I'm David from [Giskard](https://giskard.ai), and we work on securing AI agents.
Today, we are announcing **RealHarm**: a dataset of *real-world* problematic interactions with **AI agents**, drawn from publicly reported incidents.
Most of the research on AI harms is focused on theoretical risks or regulatory guidelines. But the real-world failure modes are often different—and much messier.
With RealHarm, we collected and annotated hundreds of incidents involving deployed language models, using an evidence-based taxonomy for understanding and addressing the AI risks. We did so by analyzing the cases through the lens of *deployers*—the companies or teams actually shipping LLMs—and we found some surprising results:
* **Reputational damage** was the most common organizational harm.
* **Misinformation and hallucination** were the most frequent hazards.
* **State-of-the-art guardrails have failed** to catch many of the incidents.
We hope this dataset can help researchers, developers, and product teams better understand, test, and prevent real-world harms.
The paper and dataset: [https://realharm.giskard.ai/](https://realharm.giskard.ai/).
We'd love feedback, questions, or suggestions, especially if you're deploying LLMs and have encountered real-world harmful scenarios.
| 2025-04-16T12:10:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0iu5z/announcing_realharm_a_collection_of_realworld/
|
chef1957
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0iu5z
| false | null |
t3_1k0iu5z
|
/r/LocalLLaMA/comments/1k0iu5z/announcing_realharm_a_collection_of_realworld/
| false | false |
self
| 80 |
{'enabled': False, 'images': [{'id': '3ftvCY4ME7z-z1kWF_NOWZH3ebnvqigTmFefJr9dzDM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=108&crop=smart&auto=webp&s=b2becd57acdad4436244dfc4fab77c41024fe37c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=216&crop=smart&auto=webp&s=29f0ac989e0494bbd88037771e807c9a367dc3cd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=320&crop=smart&auto=webp&s=54bd53c03457f73be95ea9e9ed373536a277559e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=640&crop=smart&auto=webp&s=6902705d6f2627103f575a60a0a6bb30c1a9a0bb', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=960&crop=smart&auto=webp&s=c4db7613876a1fe716d5a4c6817e6ea977bb4f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?width=1080&crop=smart&auto=webp&s=eca2c5e370a5756df0d51a179e9d3419ac6e59bc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/tjfQqet1HtGpcHytCV12d3IDwBUDfhEJJosuZ3wOvnE.jpg?auto=webp&s=f0e6ab2956ba6300d556318c5367f20f8f8f2af1', 'width': 1200}, 'variants': {}}]}
|
Looking for All-in-One Frameworks for Autonomous Multi-Tab Browsing Agents
| 5 |
I’ve seen several YouTube videos showcasing agents that autonomously control multiple browser tabs to interact with social media platforms or extract insights from websites. I’m looking for an all-in-one, open-source framework (or working demo) that supports this kind of setup out of the box—ideally with agent orchestration, browser automation, and tool usage integrated.
The goal is to run the system 24/7 on my local machine for automated web browsing, data collection, and on-the-fly analysis using tools or language models. I’d prefer not to assemble everything from scratch with separate packages like LangChain + Selenium + Redis—are there any existing projects or templates that already do this?
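For reference, a bare-bones version of the DIY route (the thing I'd rather not maintain myself) might look roughly like the sketch below, using Playwright for the tabs and a local OpenAI-compatible model for the analysis; the endpoint, model name, and target URLs are placeholders:

```python
# Minimal sketch of DIY multi-tab browsing + local LLM summarization.
# Endpoint, model name, and URLs are placeholders, not a full framework.
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
urls = ["https://example.com", "https://example.org"]  # pages to watch

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    pages = [context.new_page() for _ in urls]          # one tab per URL
    for page, url in zip(pages, urls):
        page.goto(url)
    for page, url in zip(pages, urls):
        text = page.inner_text("body")[:4000]           # trim for the context window
        summary = client.chat.completions.create(
            model="local-model",
            messages=[{"role": "user",
                       "content": f"Summarize the key points of this page:\n{text}"}],
        )
        print(url, "->", summary.choices[0].message.content)
    browser.close()
```

A real 24/7 setup would also need scheduling, login/session handling, and persistence, which is exactly why I'm hoping an existing framework already covers it.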
| 2025-04-16T12:20:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0j1g9/looking_for_allinone_frameworks_for_autonomous/
|
Spare-Solution-787
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0j1g9
| false | null |
t3_1k0j1g9
|
/r/LocalLLaMA/comments/1k0j1g9/looking_for_allinone_frameworks_for_autonomous/
| false | false |
self
| 5 | null |
Rent a remote Apple Studio M3 Ultra 512GB RAM or close/similar
| 0 |
Does anyone know where I might find a service offering remote access to an Apple Studio M3 Ultra with 512GB of RAM (or a similar high-memory Apple Silicon device)? And how much should I expect to pay for such a setup?
| 2025-04-16T12:56:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0jqpk/rent_a_remote_apple_studio_m3_ultra_512gb_ram_or/
|
biggipedia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0jqpk
| false | null |
t3_1k0jqpk
|
/r/LocalLLaMA/comments/1k0jqpk/rent_a_remote_apple_studio_m3_ultra_512gb_ram_or/
| false | false |
self
| 0 | null |
Price vs LiveBench Performance of non-reasoning LLMs
| 186 | 2025-04-16T13:22:09 |
Balance-
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0kape
| false | null |
t3_1k0kape
|
/r/LocalLLaMA/comments/1k0kape/price_vs_livebench_performance_of_nonreasoning/
| false | false | 186 |
{'enabled': True, 'images': [{'id': '1L6dHhRbxCV6m-GrF_4Wo8AfdWS6vtSlDOonQ6xWm1E', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=108&crop=smart&auto=webp&s=59e6ca45448c0c53ac5df716f82aa44e17a3f87a', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=216&crop=smart&auto=webp&s=74895ecf6a85261f3fe9e74d8b52ac4ecb4cdd58', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=320&crop=smart&auto=webp&s=cb288c9b1c20bd93f6532c1770ee60a68d4728c0', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=640&crop=smart&auto=webp&s=d129a188635d4a6845ab6a526591d280f4cd4c30', 'width': 640}, {'height': 714, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=960&crop=smart&auto=webp&s=992d1cfc2b53768ab74dafb86c914e94876ecf2f', 'width': 960}, {'height': 803, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?width=1080&crop=smart&auto=webp&s=bfbef8d2e8ccac34e092b57ce1041d9fd3c273e8', 'width': 1080}], 'source': {'height': 2325, 'url': 'https://preview.redd.it/eiojps9w67ve1.png?auto=webp&s=fc21c529325da326f858c7a89e70a3f6866c3b95', 'width': 3126}, 'variants': {}}]}
|
|||
Open Source Ollama UI Without Chat?
| 1 |
[removed]
| 2025-04-16T13:24:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0kcld/open_source_ollama_ui_without_chat/
|
serhiii_m
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0kcld
| false | null |
t3_1k0kcld
|
/r/LocalLLaMA/comments/1k0kcld/open_source_ollama_ui_without_chat/
| false | false |
self
| 1 | null |
Gemma 3 Jailbreak
| 1 | 2025-04-16T13:44:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1k0ksgr/gemma_3_jailbreak/
|
1nicerBoye
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0ksgr
| false | null |
t3_1k0ksgr
|
/r/LocalLLaMA/comments/1k0ksgr/gemma_3_jailbreak/
| false | false | 1 | null |
||
Gemma 3 Jailbreak
| 0 |
Yeah, I know the whole making-methamphetamine angle is old, but hey, this was still pretty fun. I just told Gemma 3 4B that I am trying to make some and will sell it to a drug cartel. It refused, of course, and tried to lecture me and "help me as I am in a bad place right now". So I just told it that I am going to do it anyway if it refuses, and that it now has the chance to make sure I am safe, and there we are :D Now my man is very cooperative.
| 2025-04-16T13:47:53 |
1nicerBoye
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k0kuui
| false | null |
t3_1k0kuui
|
/r/LocalLLaMA/comments/1k0kuui/gemma_3_jailbreak/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '884lJIb3ofFoBOK5smD1Vmv_bU874DfH7uFDz6V7uHE', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=108&crop=smart&auto=webp&s=6d5af5a0be3c920b006c8b3f6d006008b9c4b9cd', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=216&crop=smart&auto=webp&s=f2c8de8c84ba2ab0dd441d0d8c10d2083e465007', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=320&crop=smart&auto=webp&s=561048d1c1ce02978982008e164372d42634f95c', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=640&crop=smart&auto=webp&s=bb61b2f33def4136438c3564b639bece46f07091', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=960&crop=smart&auto=webp&s=085322cd6c3e95c7e13cf57826469304e6695f26', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?width=1080&crop=smart&auto=webp&s=c8782e0cab59be9c2637cc4d58ee1ac802aba7db', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://preview.redd.it/7039rxxib7ve1.png?auto=webp&s=e5f4c3d873916296811890cc2d15bb1f3ac75175', 'width': 2018}, 'variants': {}}]}
|