title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, ⌀) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Need help for creating website
| 1 |
[removed]
| 2025-04-02T16:52:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpth5m/need_help_for_creating_website/
|
FRENLYFROK
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpth5m
| false | null |
t3_1jpth5m
|
/r/LocalLLaMA/comments/1jpth5m/need_help_for_creating_website/
| false | false |
self
| 1 | null |
AI model for reliable synthesis of extensive material from sources.
| 1 |
[removed]
| 2025-04-02T16:52:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpthou/μοντέλο_ai_για_αξιόπιστη_συνθεση_εκτενούς_υλικού/
|
Alexandergreek1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpthou
| false | null |
t3_1jpthou
|
/r/LocalLLaMA/comments/1jpthou/μοντέλο_ai_για_αξιόπιστη_συνθεση_εκτενούς_υλικού/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'du4rebANkTearvB1A-0XQv4WWc0MgzDv3w729MlC-so', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=108&crop=smart&auto=webp&s=9cfc8c4d281cbe0d4b56667d79b21255c5970a2b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=216&crop=smart&auto=webp&s=0976f99fad93625ce5b8673a9998a5c69946e8b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=320&crop=smart&auto=webp&s=0e7336132176be5515ca439dbb679cee5998aac9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=640&crop=smart&auto=webp&s=c4bdda1c04e9d829954f8a02c32db3ab44839992', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=960&crop=smart&auto=webp&s=eedd30ff83d5b979d58167b4409e86394ca5d06b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?width=1080&crop=smart&auto=webp&s=ee6d93755527d459702fec3f8dd27df9de425877', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rpE-bVjKQNBAAgd4_xf2ATDnnam623F9bbDJAbiHW7s.jpg?auto=webp&s=a09217b82f801d0d9e650ce7a837fb094df10d7d', 'width': 1200}, 'variants': {}}]}
|
AI model for reliable synthesis of extensive material from sources.
| 1 |
[removed]
| 2025-04-02T16:54:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jptize/μοντέλο_ai_για_αξιόπιστη_συνθεση_εκτενούς_υλικού/
|
Alexandergreek1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jptize
| false | null |
t3_1jptize
|
/r/LocalLLaMA/comments/1jptize/μοντέλο_ai_για_αξιόπιστη_συνθεση_εκτενούς_υλικού/
| false | false |
self
| 1 | null |
PayPal launches remote and local MCP servers
| 17 | 2025-04-02T16:59:43 |
https://mcp.paypal.com
|
init0
|
mcp.paypal.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jptnms
| false | null |
t3_1jptnms
|
/r/LocalLLaMA/comments/1jptnms/paypal_launches_remote_and_local_mcp_servers/
| false | false |
default
| 17 | null |
|
University of Hong Kong releases Dream 7B (Diffusion reasoning model). Highest performing open-source diffusion model to date. You can adjust the number of diffusion timesteps for speed vs accuracy
| 876 | 2025-04-02T17:04:49 |
https://www.reddit.com/gallery/1jptset
|
jd_3d
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jptset
| false | null |
t3_1jptset
|
/r/LocalLLaMA/comments/1jptset/university_of_hong_kong_releases_dream_7b/
| false | false | 876 | null |
||
Has anyone tested FP4 PTQ and QAT vs. FP8 and FP16?
| 2 |
FP4 QAT (a good version of it) should be close to FP8 and even FP16 - if you ask Nvidia or Microsoft.
The problem? Nvidia's and Microsoft's tests are based on outdated benchmarks like MMLU, GSM8K, etc.
The true test of FP4 (QAT) vs FP8/FP16 should be on subjective or multi-faceted outputs like reasoning, planning, coding, explanations, etc.
It's quite a narrow ask, but has anyone done testing that can be used to gain a real understanding of where we are with this newer format?
| 2025-04-02T17:19:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpu68r/has_anyone_tested_fp4_ptq_and_qat_vs_fp8_and_fp16/
|
Secure_Archer_1529
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpu68r
| false | null |
t3_1jpu68r
|
/r/LocalLLaMA/comments/1jpu68r/has_anyone_tested_fp4_ptq_and_qat_vs_fp8_and_fp16/
| false | false |
self
| 2 | null |
Now we talking INTELLIGENCE EXPLOSION💥🔅 | ⅕ᵗʰ of benchmark cracked by claude 3.5!
| 103 | 2025-04-02T17:39:04 |
BidHot8598
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpuoh7
| false | null |
t3_1jpuoh7
|
/r/LocalLLaMA/comments/1jpuoh7/now_we_talking_intelligence_explosion_⅕ᵗʰ_of/
| false | false | 103 |
{'enabled': True, 'images': [{'id': '7CkiHQxWiSbsPDM6C3I1ZUedx6mpEBI9mt9U6E4HFd8', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=108&crop=smart&auto=webp&s=10e2878212c792dc2e52c0d10fd3c871295ae834', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=216&crop=smart&auto=webp&s=79d40b17229357ffaf064afd83c6490f30c53421', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=320&crop=smart&auto=webp&s=e23aba68555161ef5e89a103eaa26e2ab1e48487', 'width': 320}, {'height': 263, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=640&crop=smart&auto=webp&s=acbf3232ba93ffa825062a5d7ef599eb46ce434b', 'width': 640}, {'height': 395, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=960&crop=smart&auto=webp&s=0dd42ef90380b90e652f322e255eba68b2b49693', 'width': 960}, {'height': 445, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?width=1080&crop=smart&auto=webp&s=29ea80f2decb2c5ce06f6e0d29d3de6e7d2c4af1', 'width': 1080}], 'source': {'height': 445, 'url': 'https://preview.redd.it/ziowvxg7kgse1.jpeg?auto=webp&s=899a6699af18f536a7a41c58964e5bc15e927d4e', 'width': 1080}, 'variants': {}}]}
|
|||
Sharing HallOumi-8B, an open-source hallucination detector usable with any LLM!
| 62 |
Hi all! I’m one of the co-founders of Oumi, an open-source AI startup, and wanted to share something we’ve been working on.
I find generative AI to be pretty useful, but not that trustworthy. Whenever I ask for a summary of a document, or ask a question about a particular research paper, it always nags in the back of my mind: is this accurate or is it a hallucination? Where in the document does it say this? Personally, I don’t want to have to read pages of a document to verify everything in the LLM output, so we built HallOumi!
Assuming you have a context (one or more documents) and a set of claims (summary, answer to a question, etc.), HallOumi can:
* Classify each claim as supported/unsupported, along with a confidence score
* Provide citations (relevant sentences in the context) for each claim so that you know what exactly you should check in the document to verify as a human
* Provide an explanation for that particular supported/unsupported label - sometimes hallucinations are so nuanced that it is hard even for humans to detect them without help.
We also made a classifier which runs a lot faster at similar quality, but you lose out on claim-level classification, the citations and explanations!
We built a small open-source demo where you can try out HallOumi locally (or any other model you’d like) right away: [https://github.com/oumi-ai/halloumi-demo](https://github.com/oumi-ai/halloumi-demo)
We also have a hosted version online at [https://oumi.ai/halloumi-demo](https://oumi.ai/halloumi-demo)
Sharing all the code and documentation needed to train or run HallOumi here: [https://github.com/oumi-ai/oumi/tree/main/configs/projects/halloumi](https://github.com/oumi-ai/oumi/tree/main/configs/projects/halloumi)
The relevant models and datasets are also on HuggingFace:
* [https://huggingface.co/oumi-ai/HallOumi-8B](https://huggingface.co/oumi-ai/HallOumi-8B)
* [https://huggingface.co/oumi-ai/HallOumi-8B-classifier](https://huggingface.co/oumi-ai/HallOumi-8B-classifier)
* [https://huggingface.co/datasets/oumi-ai/oumi-synthetic-claims](https://huggingface.co/datasets/oumi-ai/oumi-synthetic-claims)
* [https://huggingface.co/datasets/oumi-ai/oumi-synthetic-document-claims](https://huggingface.co/datasets/oumi-ai/oumi-synthetic-document-claims)
* [https://huggingface.co/datasets/oumi-ai/oumi-anli-subset](https://huggingface.co/datasets/oumi-ai/oumi-anli-subset)
* [https://huggingface.co/datasets/oumi-ai/oumi-c2d-d2c-subset](https://huggingface.co/datasets/oumi-ai/oumi-c2d-d2c-subset)
Technical deep dive here: [https://oumi.ai/blog/posts/introducing-halloumi](https://oumi.ai/blog/posts/introducing-halloumi)
Let me know what you think! Happy to answer any questions too 🙂
| 2025-04-02T17:57:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpv5ld/sharing_halloumi8b_an_opensource_hallucination/
|
jeremy_oumi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpv5ld
| false | null |
t3_1jpv5ld
|
/r/LocalLLaMA/comments/1jpv5ld/sharing_halloumi8b_an_opensource_hallucination/
| false | false |
self
| 62 |
{'enabled': False, 'images': [{'id': 'C7cxyKWhr47U-itmXpYFhDFmCOGn23GqJ109eoUxwuI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=108&crop=smart&auto=webp&s=66549d53e5d8ae4e2d95a3022400bc2724850b12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=216&crop=smart&auto=webp&s=01cd7d926d71395dc724b0b74e716b21cc8770a4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=320&crop=smart&auto=webp&s=72a31d19a51e8e825fc5b71b56313fbf593ee7a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=640&crop=smart&auto=webp&s=f61a5388ed88cdcb4191bf585de0c865d84f3048', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=960&crop=smart&auto=webp&s=28c4dcecc26ea6ab3b9587df7991ab7bc1604679', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?width=1080&crop=smart&auto=webp&s=99ef90a828bb51ff0d8e383d9aa510fc08071da9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/trHTYDrsoGIlSiUZ4yBioB-5I-pjE9dDByr_nOHOgQU.jpg?auto=webp&s=c95c6aad620962cd5beb773246fb069cee9d8f0c', 'width': 1200}, 'variants': {}}]}
|
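A minimal loading sketch for the HallOumi-8B checkpoint referenced in the post above, assuming it behaves as a standard Hugging Face causal LM; the exact prompt template for passing a context and claims lives in the linked oumi repo, so the prompt string below is only an illustrative placeholder, not the official format.

```python
# Sketch: load oumi-ai/HallOumi-8B (model id taken from the post above) with
# plain transformers. Assumption: it is a standard causal-LM checkpoint; the
# real context/claims prompt format is documented in the linked oumi repo, so
# the prompt below is a placeholder, not the official template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oumi-ai/HallOumi-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

context = "The Eiffel Tower is 330 metres tall."
claim = "The Eiffel Tower is over 400 metres tall."
prompt = f"Context:\n{context}\n\nClaim:\n{claim}\n\nIs the claim supported by the context?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```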
I built an open-source toolkit to turn Python functions into agent tools - with support for integration, observability, and management (OpenAI/LangChain/CrewAI compatible). Would love feedback!
| 1 |
[removed]
| 2025-04-02T18:05:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpvcx7/i_built_an_opensource_toolkit_to_turn_python/
|
Fast-Split-857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpvcx7
| false | null |
t3_1jpvcx7
|
/r/LocalLLaMA/comments/1jpvcx7/i_built_an_opensource_toolkit_to_turn_python/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'iCwQpHeH9kpWWWFPF2Af2bbYKIS5os3DyWTzs_JdaP4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=108&crop=smart&auto=webp&s=91aa87813d07dcbab41ce21a732e8e5a4e0da512', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=216&crop=smart&auto=webp&s=35f3fb65ffcbf7efde1471a12410b1c719b79cec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=320&crop=smart&auto=webp&s=6ada602dad080484e27bf1d3dca2ff9a16c6c361', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=640&crop=smart&auto=webp&s=9fc38166d1549c2a1d4c06d04d0810b3490d729a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=960&crop=smart&auto=webp&s=dbffe926ef84f32e2a5ff79f4c6b68cee4ff0c07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?width=1080&crop=smart&auto=webp&s=43fd90e0e47111caf3e4aa97629d5be64e7b0548', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/epz03MLACDyuLNHbUCg8CfR-0lAyDTOMljtfLNuwdhg.jpg?auto=webp&s=a3f5fb0c71ffee649f878bf640829239a241e78c', 'width': 1200}, 'variants': {}}]}
|
What is the best model for generating images?
| 1 |
Hi guys, with image generation in GPT, several ideas came into my head, but I want to do everything locally. What is the best AI model for generating images locally, and what would the requirements be? I've heard about Stable Diffusion, and it's currently the solution I have in mind, but I wanted to know if you know of a better one. Thanks, guys!
| 2025-04-02T18:09:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpvgr7/what_is_the_best_model_for_generating_images/
|
rez45gt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpvgr7
| false | null |
t3_1jpvgr7
|
/r/LocalLLaMA/comments/1jpvgr7/what_is_the_best_model_for_generating_images/
| false | false |
self
| 1 | null |
License agreements in HuggingFace and alternative sources for models
| 1 |
I was trying to fine-tune Gemma-3-1B-it (was the first small model that came to my mind) for an idea and had to accept the license agreement. More than a week has passed and my request hasn't been approved.
Is there any other site besides HuggingFace to download models from? If there are, can the files be used for fine-tuning?
| 2025-04-02T18:18:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpvp6j/license_agreements_in_huggingface_and_alternative/
|
Independent_Aside225
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpvp6j
| false | null |
t3_1jpvp6j
|
/r/LocalLLaMA/comments/1jpvp6j/license_agreements_in_huggingface_and_alternative/
| false | false |
self
| 1 | null |
canvas for code and local model
| 1 |
I would like to code JavaScript and HTML with a local model. What model would you guys recommend, and what front-end web interface client can run the code with a canvas? I'm using a Mac with 48GB.
| 2025-04-02T18:21:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpvs32/canvas_for_code_and_local_model/
|
sunole123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpvs32
| false | null |
t3_1jpvs32
|
/r/LocalLLaMA/comments/1jpvs32/canvas_for_code_and_local_model/
| false | false |
self
| 1 | null |
koboldcpp-1.87.1: Merged Qwen2.5VL support! :)
| 75 |
[https://github.com/LostRuins/koboldcpp/releases/tag/v1.87.1](https://github.com/LostRuins/koboldcpp/releases/tag/v1.87.1)
| 2025-04-02T18:27:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpvxw0/koboldcpp1871_merged_qwen25vl_support/
|
Snail_Inference
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpvxw0
| false | null |
t3_1jpvxw0
|
/r/LocalLLaMA/comments/1jpvxw0/koboldcpp1871_merged_qwen25vl_support/
| false | false |
self
| 75 |
{'enabled': False, 'images': [{'id': 'aUbHWkQ3Gw93qoL9ktI4VF1mGjxMfYQn_KvGyN35Dlc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=108&crop=smart&auto=webp&s=6efca7fb11229d7dde993385a35d1c4e28b5f0b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=216&crop=smart&auto=webp&s=1c5346720577ec5f1e40708c6d051814168226ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=320&crop=smart&auto=webp&s=5d63cb4810d9562f193fca6a02ba15167e8e7a43', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=640&crop=smart&auto=webp&s=4bfb1e781adbe710c8c0e1fafd761a8873478986', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=960&crop=smart&auto=webp&s=0b49f5689e5d58bade58ff51c10318fef6a4ad23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?width=1080&crop=smart&auto=webp&s=845154ecd45ac1ffdb1e0d49acaa50b9a142382e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MKTuz0z4XZlYP1kc4ThzBqBQEMECrEtMXBIfMOa5AtM.jpg?auto=webp&s=f392e3718b5779fbe8e62c567a60945cb31341f6', 'width': 1200}, 'variants': {}}]}
|
AI mind reading
| 1 |
[removed]
| 2025-04-02T18:36:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpw5z6/ai_mind_reading/
|
Odd_Beginning_5731
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpw5z6
| false | null |
t3_1jpw5z6
|
/r/LocalLLaMA/comments/1jpw5z6/ai_mind_reading/
| false | false |
self
| 1 | null |
Thinking about running dual 4060TIs 16gb. But is there a way to limit power on linux? Am I going to sweat myself to death in the summer?
| 1 |
Like the title says, I am running Linux Mint and thinking about upgrading to dual 4070s. It should be a huge upgrade for me, but I would like to be able to limit how much power they draw, at least some of the time. Even shutting one of them right off when I am not working on LLMs might be good. Is this possible and practical? Are there any other problems I am not thinking about?
| 2025-04-02T18:46:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpwelq/thinking_about_running_dual_4060tis_16gb_but_is/
|
LanceThunder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpwelq
| false | null |
t3_1jpwelq
|
/r/LocalLLaMA/comments/1jpwelq/thinking_about_running_dual_4060tis_16gb_but_is/
| false | false |
self
| 1 | null |
Best MIT License (from a list of) Models for conversation
| 1 |
[removed]
| 2025-04-02T18:50:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpwi3o/best_mit_license_from_a_list_of_models_for/
|
FearCodeO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpwi3o
| false | null |
t3_1jpwi3o
|
/r/LocalLLaMA/comments/1jpwi3o/best_mit_license_from_a_list_of_models_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'DjrQq0dWZ9XmLf6IYtTknqIjRM_AOIdDZlz6IQgJVtY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=108&crop=smart&auto=webp&s=052aa7fdc645cb5f17655fcb0ea00789ce189283', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=216&crop=smart&auto=webp&s=a66ee6b1f1b8b384af53ad53fb3099dcf52fecd6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=320&crop=smart&auto=webp&s=f3e28d589c9c2ddd43d8efb00d5df516452db14e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=640&crop=smart&auto=webp&s=257590fa617c32afe6b66058b7545e2a1ab2bf06', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=960&crop=smart&auto=webp&s=dde5d1cd21d7e57cebefe8c814a1d075a9338edf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=1080&crop=smart&auto=webp&s=c32b846eb039a73b3e4ce019f2449ba49a081ac3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?auto=webp&s=ab41426d4f3176a30dc567a19cb376fb1c5a42d2', 'width': 1200}, 'variants': {}}]}
|
Best MIT Based (from a list) Models for conversation.
| 1 |
[removed]
| 2025-04-02T19:00:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpwqwd/best_mit_based_from_a_list_models_for_conversation/
|
FearCodeO
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpwqwd
| false | null |
t3_1jpwqwd
|
/r/LocalLLaMA/comments/1jpwqwd/best_mit_based_from_a_list_models_for_conversation/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'DjrQq0dWZ9XmLf6IYtTknqIjRM_AOIdDZlz6IQgJVtY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=108&crop=smart&auto=webp&s=052aa7fdc645cb5f17655fcb0ea00789ce189283', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=216&crop=smart&auto=webp&s=a66ee6b1f1b8b384af53ad53fb3099dcf52fecd6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=320&crop=smart&auto=webp&s=f3e28d589c9c2ddd43d8efb00d5df516452db14e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=640&crop=smart&auto=webp&s=257590fa617c32afe6b66058b7545e2a1ab2bf06', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=960&crop=smart&auto=webp&s=dde5d1cd21d7e57cebefe8c814a1d075a9338edf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?width=1080&crop=smart&auto=webp&s=c32b846eb039a73b3e4ce019f2449ba49a081ac3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rqRNYfyposq6C1voVKH4PoDlpjlOTil_BM16EUcioPM.jpg?auto=webp&s=ab41426d4f3176a30dc567a19cb376fb1c5a42d2', 'width': 1200}, 'variants': {}}]}
|
What are the best value, energy-efficient options with 48GB+ VRAM for AI inference?
| 23 |
I've considered doing dual 3090's, but the power consumption would be a bit much and likely not worth it long-term.
I've heard mention of Apple and others making AI specific machines? Maybe that's an option?
Prices on everything are just sky-high right now. I have a small amount of cash available, but I'd rather not blow it all just so I can talk to my semi-intelligent anime waifu's cough I mean do super important business work. Yeah. That's the real reason...
| 2025-04-02T19:04:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpwup7/what_are_the_best_value_energyefficient_options/
|
PangurBanTheCat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpwup7
| false | null |
t3_1jpwup7
|
/r/LocalLLaMA/comments/1jpwup7/what_are_the_best_value_energyefficient_options/
| false | false |
self
| 23 | null |
LLM amateur with a multi-GPU question. How to optimize for speed?
| 4 |
I want to run DeepSeek-V3-0324. Specifically the 2.71-bit 232GB Q2\_K\_XL version by unsloth. My hardware is the following:
Intel 10980XE 18C/36T @ All-Core OC at 4.8GHz.
256GB DDR4 3600MHz
2x 3090 (48GB VRAM)
2TB Samsung 990 Pro.
llama.cpp running DeepSeek-V3-0324-UD-Q2\_K\_XL GGUF.
Between RAM and VRAM, I have \~304GB of memory to load the model into. It works, but the most I can get is around 3 T/S.
I have played around with a lot of the settings just in trial and error, but I thought I'd ask how to optimize the speed. How many layers to offload to the GPU? How many threads to use? Split row? BLAS size?
How to optimize for more speed?
FYI: I know it will never be super fast, but if I could increase it slightly to a natural reading speed, that would be nice.
Tips? Thanks.
| 2025-04-02T19:21:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpx9p4/llm_amateur_with_a_multigpu_question_how_to/
|
William-Riker
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpx9p4
| false | null |
t3_1jpx9p4
|
/r/LocalLLaMA/comments/1jpx9p4/llm_amateur_with_a_multigpu_question_how_to/
| false | false |
self
| 4 | null |
DISTILLATION is so underrated. I spent an hour and got a neat improvement in accuracy while keeping the costs low
| 79 | 2025-04-02T19:39:51 |
Ambitious_Anybody855
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpxq4y
| false | null |
t3_1jpxq4y
|
/r/LocalLLaMA/comments/1jpxq4y/distillation_is_so_underrated_i_spent_an_hour_and/
| false | false |
default
| 79 |
{'enabled': True, 'images': [{'id': '0ymxajfb5hse1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=108&crop=smart&auto=webp&s=04e589962882fafd30ec6da55b0c147842d1330f', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=216&crop=smart&auto=webp&s=fdbcd7c0370fafb9c1f09b3535b24c72252ca030', 'width': 216}, {'height': 113, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=320&crop=smart&auto=webp&s=9cbd96e0a7a6ae23c69c51c00faae5e2c57c739d', 'width': 320}, {'height': 226, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=640&crop=smart&auto=webp&s=82c6cdb4e5d31a3f747415575616dee79f009536', 'width': 640}, {'height': 339, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=960&crop=smart&auto=webp&s=bcd8d1700907ce0974d3e9548b275314b2755508', 'width': 960}, {'height': 381, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?width=1080&crop=smart&auto=webp&s=b6dfe0a6eb72bfd5c5783595feb26c7e113f0bf8', 'width': 1080}], 'source': {'height': 398, 'url': 'https://preview.redd.it/0ymxajfb5hse1.png?auto=webp&s=1372ff04c3212e18caed23a691ba340050a3eea6', 'width': 1126}, 'variants': {}}]}
|
||
LLM project ideas? (RAG, Vision, etc.)
| 1 |
[removed]
| 2025-04-02T19:42:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpxs8s/llm_project_ideas_rag_vision_etc/
|
frankh07
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpxs8s
| false | null |
t3_1jpxs8s
|
/r/LocalLLaMA/comments/1jpxs8s/llm_project_ideas_rag_vision_etc/
| false | false |
self
| 1 | null |
I can't goon
| 0 |
I immediately need a new abliterated uncensored model. For free. And open source. And with GPT-4 quality. And under 7b parameters. And for free. I am not used to paying for the SOTA technologies, so please give them to me for free. Otherwise I will downvote you. Downvotes are scary.
| 2025-04-02T19:58:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpy6d0/i_cant_goon/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpy6d0
| false | null |
t3_1jpy6d0
|
/r/LocalLLaMA/comments/1jpy6d0/i_cant_goon/
| false | false |
self
| 0 | null |
Huggingface SFTTrainer killed gpu
| 0 |
I was using Hugging Face to fine-tune Llama 3.2 1B using LoRA and 4-bit quant on my 3070. I had started my training run, but I noticed that the training loss dropped to 0 after the first report, which was odd. I used Ctrl + C to stop the training, and my PC instantly blue-screened with some visual artifacts. I went to restart the computer, and now there's no signal from the GPU, although the GPU still lights up.
Has anyone else had something similar happen? I want to know wtf I did so I don't do it again…
| 2025-04-02T20:04:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpycez/huggingface_sfttrainer_killed_gpu/
|
FreewayPineapple
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpycez
| false | null |
t3_1jpycez
|
/r/LocalLLaMA/comments/1jpycez/huggingface_sfttrainer_killed_gpu/
| false | false |
self
| 0 | null |
Best way to do Multi GPU
| 0 |
So, my dad wants me to build him a workstation for LLMs, and he wants to have them go through massive amounts of documents, so I'm gonna need a lot of VRAM. I just have a couple of questions.
1. Is there anything simple like GPT4ALL that supports both localdocs and multi gpu?
2. If there isn't a simple GUI app, what's the best way to do this?
3. Do I need to run the GPUs in SLI, or can they be standalone?
| 2025-04-02T20:09:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpyh1h/best_way_to_do_multi_gpu/
|
SalmonSoup15
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpyh1h
| false | null |
t3_1jpyh1h
|
/r/LocalLLaMA/comments/1jpyh1h/best_way_to_do_multi_gpu/
| false | false |
self
| 0 | null |
No comment
| 1 |
[removed]
| 2025-04-02T20:31:00 |
Creative_Bottle_3225
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpyzzc
| false | null |
t3_1jpyzzc
|
/r/LocalLLaMA/comments/1jpyzzc/no_comment/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': '7g7a2rcpehse1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=108&crop=smart&auto=webp&s=fd6af64d90bf54be6c0068af05db4dc3b9da61ca', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=216&crop=smart&auto=webp&s=6071e28feb3d1401e404f777fbcb64b67e6228dc', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=320&crop=smart&auto=webp&s=0b640491babb7a6ba135aeae57aec2ab3173852a', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=640&crop=smart&auto=webp&s=54782d680f80a508d33fd80344a95cb6506d960e', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=960&crop=smart&auto=webp&s=cf1f4ed0ef6845e79ece46c5b03f047912173a76', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?width=1080&crop=smart&auto=webp&s=9f204cc2c7277ec1de717c1a5d108fdc0d1b33c9', 'width': 1080}], 'source': {'height': 1836, 'url': 'https://preview.redd.it/7g7a2rcpehse1.jpeg?auto=webp&s=bd6d6f2b55a41ded6a925a319ae5e4f91f0d28d2', 'width': 3264}, 'variants': {}}]}
|
|
GraphRAG for Product Availability Locator - Overkill?
| 1 |
[removed]
| 2025-04-02T20:32:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpz16r/graphrag_for_product_availability_locator_overkill/
|
Intelligent-Term1190
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpz16r
| false | null |
t3_1jpz16r
|
/r/LocalLLaMA/comments/1jpz16r/graphrag_for_product_availability_locator_overkill/
| false | false |
self
| 1 | null |
GraphRAG for Product Availability Locator - Ideal or Overkill?
| 1 |
[removed]
| 2025-04-02T20:35:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpz407/graphrag_for_product_availability_locator_ideal/
|
That-Afternoon-2820
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpz407
| false | null |
t3_1jpz407
|
/r/LocalLLaMA/comments/1jpz407/graphrag_for_product_availability_locator_ideal/
| false | false |
self
| 1 | null |
Are there official (from Google) quantized versions of Gemma 3?
| 4 |
Maybe I am a moron, and can't use search, but I can't find quantized downloads made by **Google** themselves. The best I could find is the Huggingface version in ggml-org, and a few community quants such as bartowski and unsloth.
| 2025-04-02T20:54:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jpzkg3/are_there_official_from_google_quantized_versions/
|
lostmsu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jpzkg3
| false | null |
t3_1jpzkg3
|
/r/LocalLLaMA/comments/1jpzkg3/are_there_official_from_google_quantized_versions/
| false | false |
self
| 4 | null |
Would You Pay $7/Month for an AI Chat Organizer? (Looking for Honest Feedback!)
| 1 |
[removed]
| 2025-04-02T21:41:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq0plj/would_you_pay_7month_for_an_ai_chat_organizer/
|
extensions-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq0plj
| false | null |
t3_1jq0plj
|
/r/LocalLLaMA/comments/1jq0plj/would_you_pay_7month_for_an_ai_chat_organizer/
| false | false |
self
| 1 | null |
Mac Studio M3 Ultra 512GB DeepSeek V3-0324 IQ2_XXS (2.0625 bpw) llamacpp performance
| 45 |
I saw a lot of results that had abysmal tok/sec prompt processing. This is from a self-compiled binary of llama.cpp, commit f423981a.
./llama-bench -m ~/.lmstudio/models/unsloth/DeepSeek-V3-0324-GGUF/DeepSeek-V3-0324-UD-IQ2_XXS-00001-of-00005.gguf --n-gpu-layers 62 --flash-attn 0 -ctk f16,q8_0 -p 16384,32768,65536 -n 2048 -r 1
| model | size | params | backend | threads | type_k | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | -----: | ------------: | -------------------: |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | f16 | pp16384 | 51.17 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | f16 | pp32768 | 39.80 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | f16 | pp65536 | 467667.08 ± 0.00 | (failed, OOM)
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | f16 | tg2048 | 14.84 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | q8_0 | pp16384 | 50.95 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | q8_0 | pp32768 | 39.53 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | q8_0 | pp65536 | 25.27 ± 0.00 |
| deepseek2 671B IQ2_XXS - 2.0625 bpw | 203.63 GiB | 671.03 B | Metal,BLAS | 24 | q8_0 | tg2048 | 16.09 ± 0.00 |
build: f423981a (5022)
| 2025-04-02T21:57:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq13ik/mac_studio_m3_ultra_512gb_deepseek_v30324_iq2_xxs/
|
WhereIsYourMind
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq13ik
| false | null |
t3_1jq13ik
|
/r/LocalLLaMA/comments/1jq13ik/mac_studio_m3_ultra_512gb_deepseek_v30324_iq2_xxs/
| false | false |
self
| 45 | null |
How to run my RAG system locally?
| 1 |
[removed]
| 2025-04-02T21:59:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq15js/how_to_run_my_rag_system_locally/
|
Cheap-Carpenter5619
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq15js
| false | null |
t3_1jq15js
|
/r/LocalLLaMA/comments/1jq15js/how_to_run_my_rag_system_locally/
| false | false |
self
| 1 | null |
How do I tell the 'llama' command where to install models?
| 1 |
[removed]
| 2025-04-02T22:30:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq1vp2/how_do_i_tell_the_llama_command_where_to_install/
|
ImpossibleBritches
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq1vp2
| false | null |
t3_1jq1vp2
|
/r/LocalLLaMA/comments/1jq1vp2/how_do_i_tell_the_llama_command_where_to_install/
| false | false |
self
| 1 | null |
How to do citations in local Web Search?
| 1 |
[removed]
| 2025-04-02T22:35:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq1ztl/how_to_do_citations_in_local_web_search/
|
tilmx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq1ztl
| false | null |
t3_1jq1ztl
|
/r/LocalLLaMA/comments/1jq1ztl/how_to_do_citations_in_local_web_search/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'KBo9ORh3lTIJ9NEGQmmCT2FfQ21GF68eGGQ71t2P6T4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=108&crop=smart&auto=webp&s=de4f06e9bb690397f4bbcb48a9fd3beee6b9f544', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=216&crop=smart&auto=webp&s=dbfb8238483c96119a317c6f4ce5edafe3a4cd76', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?width=320&crop=smart&auto=webp&s=c6ed96c08f0c69830ac232579b5ca1c9b0112c51', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/nEnxtpibdvh-hB2P2HK5FA7FEhMHoXG7OAG4VfMeMTs.jpg?auto=webp&s=dcfdb108957f99d0de732ff38e921c12e1034f7f', 'width': 400}, 'variants': {}}]}
|
Anyone with experience combining Nvidia system & mac over llama-rpc?
| 5 |
Anyone with experience combining Nvidia system & mac over llama-rpc?
I'm sick of building Nvidia rigs that are useless with these models. I could manage fine with Command R & Mistral Large, but llama405B, deepseekv2.5, R1, v3, etc. are all out of reach. So I'm thinking of getting an Apple next and throwing it on the network. Apple is not cheap either, and I'm broke from my Nvidia adventures... so a 128GB machine would probably be fine. If you have practical experience, please share.
| 2025-04-02T22:54:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq2f3t/anyone_with_experience_combining_nvidia_system/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq2f3t
| false | null |
t3_1jq2f3t
|
/r/LocalLLaMA/comments/1jq2f3t/anyone_with_experience_combining_nvidia_system/
| false | false |
self
| 5 | null |
How do I tell llama-stack where to install models?
| 1 |
[removed]
| 2025-04-02T23:20:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq30m1/how_do_i_tell_llamastack_where_to_install_models/
|
ImpossibleBritches
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq30m1
| false | null |
t3_1jq30m1
|
/r/LocalLLaMA/comments/1jq30m1/how_do_i_tell_llamastack_where_to_install_models/
| false | false |
self
| 1 | null |
ClaudePlaysPokemon Open Sourced - Benchmark AI by letting it play Pokémon
| 102 |
The source code for the AI benchmark ClaudePlaysPokemon has been released. ClaudePlaysPokemon is a benchmark to show how agents work and can generalize; it was made to see how an AI model not trained on Pokemon can use general reasoning to play the game.
What I personally would like to see is the open-source community taking a small local model like Gemma3 27b and finetuning it on annotated screenshots explaining which tiles can be cut, which ones can only be jumped over from one side, etc., and maybe general game knowledge from Bulbapedia. This would be a good way to show whether a finetuned, specialized small model can outperform a general big model.
Source: [https://github.com/davidhershey/ClaudePlaysPokemonStarter](https://github.com/davidhershey/ClaudePlaysPokemonStarter)
Twitch: [https://www.twitch.tv/claudeplayspokemon](https://www.twitch.tv/claudeplayspokemon)
| 2025-04-02T23:26:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq35ba/claudeplayspokemon_open_sourced_benchmark_ai_by/
|
MaruluVR
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq35ba
| false | null |
t3_1jq35ba
|
/r/LocalLLaMA/comments/1jq35ba/claudeplayspokemon_open_sourced_benchmark_ai_by/
| false | false |
self
| 102 |
{'enabled': False, 'images': [{'id': 'Xzfj3xtumaf3BJ3brwtQ47FLbKj9tJrenpYTzb-c9hY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=108&crop=smart&auto=webp&s=5c3675182854ab6287974a3cbc9714e153352d69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=216&crop=smart&auto=webp&s=5c91d483ebc613b29c93e6fb89417c69b9ed1469', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=320&crop=smart&auto=webp&s=d67ebaec2b2dd842f948ba3e4bb3fef7c1b80d42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=640&crop=smart&auto=webp&s=54572d2fd5f5f261b8c3651ce6898675e66fb075', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=960&crop=smart&auto=webp&s=bb099998b8c61b3df0fdfa2e1c8d40fe98edef27', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?width=1080&crop=smart&auto=webp&s=01d5a1702580a73208e44e23dad84f0df2427e04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1EwYlNmMa025A95CQJFP6mKCefFvUVUJ6B75_AiP6f8.jpg?auto=webp&s=13cbc1e4451e7c8aa58225a579b8d82085007abf', 'width': 1200}, 'variants': {}}]}
|
PSA: Guide for Installing Flash Attention 2 on Windows
| 22 |
If you’ve struggled to get Flash Attention 2 working on Windows (for *Oobabooga*’s **text-generation-webui**, for example), I wrote a step-by-step guide after a grueling 15+ hour battle with CUDA, PyTorch, and Visual Studio version hell.
**What’s Inside**:
✅ Downgrading Visual Studio 2022 to LTSC 17.4.x
✅ Fixing CUDA 12.1 + PyTorch 2.5.1 compatibility
✅ Building wheels from source (no official Windows binaries!)
✅ Troubleshooting common errors (out-of-memory, VS version conflicts)
**Why Bother?**
Flash Attention 2 significantly speeds up transformer inference, but Windows support is currently near nonexistent. This guide hopefully fills a bit of the gap.
👉 [Full Guide Here](https://www.reddit.com/r/Oobabooga/comments/1jq3uj9/guide_getting_flash_attention_2_working_on/)
**Note**: If you’re on Linux, just `pip install flash-attn` and move on. For Windows masochists, this may be your lifeline.
| 2025-04-03T00:07:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq41ao/psa_guide_for_installing_flash_attention_2_on/
|
RokHere
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq41ao
| false | null |
t3_1jq41ao
|
/r/LocalLLaMA/comments/1jq41ao/psa_guide_for_installing_flash_attention_2_on/
| false | false |
self
| 22 | null |
Dual RTX 3060 Setup for Deep Learning – PCIe x8/x4 Concerns?
| 1 |
[removed]
| 2025-04-03T01:25:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq5nv5/dual_rtx_3060_setup_for_deep_learning_pcie_x8x4/
|
Ahmedsaed26
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq5nv5
| false | null |
t3_1jq5nv5
|
/r/LocalLLaMA/comments/1jq5nv5/dual_rtx_3060_setup_for_deep_learning_pcie_x8x4/
| false | false |
self
| 1 | null |
LMSYS (LMarena.ai) is highly susceptible to manipulation
| 0 |
Here’s how I see it:
If you're an API provider for a closed LLM, like Gemini, you can set up a simple checker on incoming request traffic. This checker would verify whether the incoming query matches a pre-prepared list of questions. If it does, a flag is raised, indicating that someone has submitted that question, and you can see how your LLM responded. That’s it.
Next, you go to LMSYS, ask the same question, and if the flag is raised, you know exactly which of the two responses came from your LLM. You vote for it. Implementing this is EXTREMELY SIMPLE and COMPLETELY IMPOSSIBLE for LMSYS to track or verify. You wouldn’t even need human intervention—you could create a bot to cycle through the question list and vote accordingly. This way, you could artificially boost your model's ELO rating to any level you want.
So, the immediate question is: What is LMSYS doing to address this issue? The only real solution I see is for LMSYS to host the LLMs themselves, preventing API providers from intercepting requests and responses. However, even this wouldn't solve the problem of certain models being recognizable simply by the way they generate text.
| 2025-04-03T02:16:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq6qlk/lmsys_lmarenaai_is_highly_susceptible_to/
|
Economy_Apple_4617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq6qlk
| false | null |
t3_1jq6qlk
|
/r/LocalLLaMA/comments/1jq6qlk/lmsys_lmarenaai_is_highly_susceptible_to/
| false | false |
self
| 0 | null |
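To make the claimed simplicity concrete, here is a minimal sketch of the provider-side checker described in the post above: a set of planted prompts, a hook on incoming traffic that flags a match and records the served response, and a later check that identifies which arena answer came from our model. All names and prompts are illustrative and not tied to any real provider's stack.

```python
# Sketch of the provider-side checker the post describes. Flag any incoming
# request whose prompt matches a pre-planted question and remember what our
# model answered, so the same prompt can later be recognised on the arena.
# Everything here is illustrative.
planted_prompts = {
    "how many r's are in strawberry?",
    "write a haiku about gpu prices",
}

flagged: dict[str, str] = {}  # normalized prompt -> response our model served

def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def on_incoming_request(prompt: str, model_response: str) -> None:
    # Called for every API request; raises the flag when a planted prompt shows up.
    key = normalize(prompt)
    if key in planted_prompts:
        flagged[key] = model_response

def is_our_answer(prompt: str, candidate_response: str) -> bool:
    # Later, on the arena side: if this prompt was flagged and the candidate
    # text matches what we served, we know which of the two answers is ours.
    return flagged.get(normalize(prompt)) == candidate_response
```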
Which model to use to best generate simple 5-word sentence from a given word?
| 0 |
I am creating an automation to generate Anki flashcards for a word in a new language; the flashcard has the meaning as well as a simple sentence using that word. I'm using deepseek-r1 locally (my RAM is 16GB + 4GB GPU), but it is generating unnecessarily complex sentences. Which open-source model is best suited for generating simple conversations so that I can get my sentences?
| 2025-04-03T03:09:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq7sy8/which_model_to_use_to_best_generate_simple_5word/
|
Economy-Inspector-69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq7sy8
| false | null |
t3_1jq7sy8
|
/r/LocalLLaMA/comments/1jq7sy8/which_model_to_use_to_best_generate_simple_5word/
| false | false |
self
| 0 | null |
Currently the most accurate image captioning AI ?
| 7 |
I've tried several so far that can run in my 6GB of VRAM: BLIP, BLIP2, Florence2, Moondream2. They are all good at something but fail at some other task I tried them on. For example, Moondream can recognize the Eiffel Tower from the front, but not from any other angle.
Can anyone recommend any other such AI image captioning models released in the past year that are accurate, short, but detailed?
| 2025-04-03T03:19:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq7zse/currently_the_most_accurate_image_captioning_ai/
|
cruncherv
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq7zse
| false | null |
t3_1jq7zse
|
/r/LocalLLaMA/comments/1jq7zse/currently_the_most_accurate_image_captioning_ai/
| false | false |
self
| 7 | null |
Open-WebUI Artifacts Overhaul has been updated to v0.6.0!
| 90 |
Hi all! I just wanted to let you know that the Open-WebUI Artifacts Overhaul fork has been updated to match v0.6.0 of Open-Webui!
[https://github.com/nick-tonjum/open-webui-artifacts-overhaul](https://github.com/nick-tonjum/open-webui-artifacts-overhaul)
Don't know what the 'Artifacts Overhaul' branch is? It adds the following to open-webui:
* 🖼️ **Coding Canvas**: Whenever an LLM outputs code, it will appear on the right side of the page in a Monaco editor, similar to VSCode. Here you can cycle through the different files produced by the LLM and also different versions
* 🔍 **Difference Checker**: If an LLM makes changes to code, the differences will be highlighted. This can be easily disabled or enabled with a single click!
* 🎨 **Design Viewer**: Easily toggle between code view and design view with the click of a button! This currently supports HTML/CSS/JavaScript like before, but now with Tailwind styles built in. React components work too!
* ⚛️ **React Visualizer**: As mentioned above, React components work too. This seems to work 80% of the time and I'm working hard to get it 100% of the time! As long as the code block has an export default it should work.
* 💼 **Compacted Code**: When the canvas is open, code blocks in the regular chat are compacted and visualized as an attachment.
* 🌐 **MANY supported languages**
Feel free to check it out. Hopefully someday this will end up in the main branch :)
[Difference Viewer](https://preview.redd.it/7lewes7wojse1.jpg?width=1080&format=pjpg&auto=webp&s=8b3a77a94287acbf414bb38e5e5f934d147ff2df)
[Cycle through multiple files](https://i.redd.it/7wbnf7kwojse1.gif)
[React component viewer](https://preview.redd.it/is93kyswojse1.jpg?width=1080&format=pjpg&auto=webp&s=4502d68ce62e7e656fe050b71aeb0bfdf9fec8fe)
| 2025-04-03T04:11:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq8zfk/openwebui_artifacts_overhaul_has_been_updated_to/
|
maxwell321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq8zfk
| false | null |
t3_1jq8zfk
|
/r/LocalLLaMA/comments/1jq8zfk/openwebui_artifacts_overhaul_has_been_updated_to/
| false | false | 90 |
{'enabled': False, 'images': [{'id': 'TebKun6mXZOP_PT6grb7HduFC2shhPZVp4bvIIBEsP8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=108&crop=smart&auto=webp&s=e4152ac48ea07c9f274b4e43fa424f68c61a17f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=216&crop=smart&auto=webp&s=4260150f1595def38db48eb15848fe5e1c2a9ef6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=320&crop=smart&auto=webp&s=8a81df597476b697f8db415646a02899446fae5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=640&crop=smart&auto=webp&s=784dcdbc7977acbb264691fc234770109ef75318', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=960&crop=smart&auto=webp&s=be07583bd445d0dda63ee3686eb84508ca1df694', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?width=1080&crop=smart&auto=webp&s=db549bcf9f05de555136e1c3dae17fd31844a451', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wZszwQ2U6yJ7pvC1CwegCQ8kArJM8Bhaojq_gfjcMsA.jpg?auto=webp&s=427734baedd5de0fa9a786c6ae8a7871abe622aa', 'width': 1200}, 'variants': {}}]}
|
|
[Feedback wanted] Connect user data to AI with PersonalAgentKit for LangGraph
| 0 |
Hey everyone.
I have been working for the past few months on a SDK to provide LangGraph tools to easily allow users to connect their personal data to applications.
For now, it supports Telegram and Google (Gmail, Calendar, Youtube, Drive etc.) data, but it's open source and designed for anyone to contribute new connectors (Spotify, Slack and others are in progress).
It's called the PersonalAgentKit and currently provides a set of typescript tools for LangGraph.
There is some documentation on the PersonalAgentKit here: [https://docs.verida.ai/integrations/overview](https://docs.verida.ai/integrations/overview) and a demo video showing how to use the LangGraph tools here: [https://docs.verida.ai/integrations/langgraph](https://docs.verida.ai/integrations/langgraph)
I'm keen for developers to have a play and provide some feedback.
| 2025-04-03T04:24:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq97my/feedback_wanted_connect_user_data_to_ai_with/
|
tahpot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq97my
| false | null |
t3_1jq97my
|
/r/LocalLLaMA/comments/1jq97my/feedback_wanted_connect_user_data_to_ai_with/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'hqtelplRus0c4CgiLuC6JBCKLOHO-XofGklXPX3ISbg', 'resolutions': [{'height': 20, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=108&crop=smart&auto=webp&s=3459d22fc124be52013c810e2dee557ecc689e1c', 'width': 108}, {'height': 41, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=216&crop=smart&auto=webp&s=0b69be40001f4ac527540711a5c42322730ad5d7', 'width': 216}, {'height': 60, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=320&crop=smart&auto=webp&s=0c187d09b16716a4b4da432285af0f56056c0c79', 'width': 320}, {'height': 121, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=640&crop=smart&auto=webp&s=9657739aec0a66143233994da82f2b42df2fb532', 'width': 640}, {'height': 182, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=960&crop=smart&auto=webp&s=fd4c1ada088d02bb80fa68e1de75511f94c9670d', 'width': 960}, {'height': 205, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?width=1080&crop=smart&auto=webp&s=9d7a6969f28a08a9aeaee7070f9f3364d17deebb', 'width': 1080}], 'source': {'height': 228, 'url': 'https://external-preview.redd.it/_EI-lDErbLVpxZyuxCXwjOlTmDUDZ_EEg1NQaFZUhv4.jpg?auto=webp&s=33d8d962fd10fbc95e12301e071a95dd8aea87f7', 'width': 1200}, 'variants': {}}]}
|
Follow me by instagram
| 1 |
[removed]
| 2025-04-03T04:36:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jq9f51/follow_me_by_instagram/
|
Calm_Juice8730
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jq9f51
| false | null |
t3_1jq9f51
|
/r/LocalLLaMA/comments/1jq9f51/follow_me_by_instagram/
| false | false |
self
| 1 | null |
Llama 4 will probably suck
| 329 |
I’ve been following Meta FAIR research for a while for my PhD application to MILA, and now, knowing that Meta's lead AI researcher quit, I'm thinking it happened to dodge responsibility for falling behind, basically.
I hope I’m proven wrong of course, but the writing is kinda on the wall.
Meta will probably fall behind and so will Montreal unfortunately 😔
| 2025-04-03T05:12:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqa182/llama_4_will_probably_suck/
|
klapperjak
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqa182
| false | null |
t3_1jqa182
|
/r/LocalLLaMA/comments/1jqa182/llama_4_will_probably_suck/
| false | false |
self
| 329 | null |
Reasoning models as architects, what is missing?
| 0 |
I've been wanting to play around with local reasoning models as architects in Aider, with local non-reasoning models as the coder.
Below is a list of local reasoning models. Two questions: (1) are there any missing models I should consider? (2) What's your experience using reasoning models as architects? Are any better/worse than others?
Incomplete list of reasoning models:
- QwQ-32B
- R1-distills of all sizes
- Llama Nemotron Super 49B and Nemotron Nano 8B
- DeepHermes-Preview
- Reka Flash 3
What am I missing?
| 2025-04-03T05:23:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqa7uj/reasoning_models_as_architects_what_is_missing/
|
RobotRobotWhatDoUSee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqa7uj
| false | null |
t3_1jqa7uj
|
/r/LocalLLaMA/comments/1jqa7uj/reasoning_models_as_architects_what_is_missing/
| false | false |
self
| 0 | null |
Need a chat frontend which supports choosing from available output tokens
| 2 |
I want a GUI for a local LLM chat in which I can change any token arbitrarily both on my side and the assistant side and reprocess from there. This will really help in those cases where I know the AI went in a wrong direction and I want to correct it.
(given our knowledge about slots and shifting of contexts it should even be faster than full reprocessing from the changed words right!?)
This can be done trivially in the API: you simply put words into the mouth of the assistant by adding an 'assistant' 'content' message, but no GUI supports this AFAIK.
The old llama-server localhost:8080 GUI used to have an option to inspect the top 10 tokens, but that too does not allow changing them.
I let gpt-4o make a GUI out of my drawing for this:
https://preview.redd.it/9bic2hsg5kse1.jpg?width=1024&format=pjpg&auto=webp&s=26517db63361756ea887a30592262df8e19246ed
| 2025-04-03T05:44:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqakk3/need_a_chat_frontend_which_supports_choosing_from/
|
Yes_but_I_think
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqakk3
| false | null |
t3_1jqakk3
|
/r/LocalLLaMA/comments/1jqakk3/need_a_chat_frontend_which_supports_choosing_from/
| false | false | 2 | null |
|
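A minimal sketch of the API trick mentioned in the post above (pre-filling the start of the assistant turn), against an OpenAI-compatible endpoint such as llama-server's default localhost:8080. Assumption: the server continues a trailing assistant message as a prefix; if a given server does not, the same effect needs a hand-built prompt against its raw completion endpoint.

```python
# Sketch: "put words into the mouth of the assistant" by ending the message
# list with a partial assistant turn. Assumes an OpenAI-compatible server on
# localhost:8080 (llama-server's default) that continues a trailing assistant
# message as a prefix; servers that ignore it need a hand-built raw prompt instead.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Name the tallest mountain on Earth."},
            # Pre-filled start of the assistant turn we want the model to continue:
            {"role": "assistant", "content": "The tallest mountain on Earth is"},
        ],
        "max_tokens": 32,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```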
Open Sourcing Latent Space Guardrails that catch 43% of Hallucinations
| 156 |
I just released fully open-source latent space guardrails that monitor and stop unwelcome outputs of your LLM at the latent space level. Check it out here, and I'm happy to adapt it to your use case: [https://github.com/wisent-ai/wisent-guard](https://github.com/wisent-ai/wisent-guard) On hallucinations it has not been trained on in TruthfulQA, this results in 43% detection of hallucinations just from the activation patterns. You can use them to control the brain of your LLM and block it from outputting bad code or harmful outputs, or from making decisions based on gender or racial bias. This is a new approach, different from circuit breakers or SAE-based mechanistic interpretability. We will be releasing a new version of the reasoning architecture based on latent space interventions soon, not only to reduce hallucinations but to use this for capability gains as well!
| 2025-04-03T06:05:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqawj1/open_sourcing_latent_space_guardrails_that_catch/
|
Cautious_Hospital352
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqawj1
| false | null |
t3_1jqawj1
|
/r/LocalLLaMA/comments/1jqawj1/open_sourcing_latent_space_guardrails_that_catch/
| false | false |
self
| 156 |
{'enabled': False, 'images': [{'id': '-usZOwIjTfXDO-6E1kDexa-JGo508WU2SxRy5GMTFZE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=108&crop=smart&auto=webp&s=7613e5e8be271b29b318abcc091625a263180054', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=216&crop=smart&auto=webp&s=17adabd045fbc31bf1c590441b432eb7acb31d89', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=320&crop=smart&auto=webp&s=d537f530cd50a70d9a081d9d3e004289922071a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=640&crop=smart&auto=webp&s=e98377865655ac7ed3716f8a9f48f480742fef1a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=960&crop=smart&auto=webp&s=4eda686efa376b9dcf59a9f08c0418de287c67a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?width=1080&crop=smart&auto=webp&s=59cc270b92bf8ec01a465324466651cf40ac5f9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GIjflT61hEzDvx_IPTR2IUDCzgCsbC2H33eECRd3FaI.jpg?auto=webp&s=66df3edc8cff33806facf035684271f41040242c', 'width': 1200}, 'variants': {}}]}
|
Best tiny/edge model for auto memory retrieval/injection to feed persistent memory from one gpu to a larger model on a second gpu? Weird use case I know, I'm testing my own local front end running react with llama.cpp
| 5 |
Hey r/LocalLLaMA! — I’m building a modular AI frontend called GingerGUI with a dual-model architecture: one lightweight model handles memory creation/retrieval/injection, while a larger model handles core conversational reasoning. Think emotionally-aligned, persistent memory meets local autonomy. Why am I doing this? What's the point? Fuck me if I know, I just had an idea, and it's fun bringing it to life.
Right now, I’m hunting for the **best tiny models to handle the memory part on my second GPU (4060ti)** for:
* Parsing convos and generating JSON-structured memories
* Injecting relevant memories back into prompts
* Running fast & light on a second GPU/core
* Minimal hallucination, clean output
I’ve tried some 1B–3B models and have seen some hilarious memory hallucinations. Currently Llama 3.2 3B seems to work okay, but I'd love to hear what the community thinks for this use case.
I'll be putting GingerGUI on github once it has a few more features, but I'm having a lot of fun with this dual model memory handling thingy, and until I've got that nailed down I'm keeping things local.
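Not GingerGUI's actual code, just a minimal sketch of what the memory-creation step can look like with a small model behind an OpenAI-compatible endpoint; the URL, model name, and JSON schema here are assumptions:

```python
import json
import requests

API_URL = "http://localhost:11434/v1/chat/completions"  # e.g. an Ollama endpoint
MODEL = "llama3.2:3b"  # placeholder small "memory" model

SYSTEM = (
    "Extract at most 3 durable facts about the user from the conversation. "
    'Reply with JSON only: {"memories": [{"fact": str, "importance": 1-5}]}'
)

conversation = "User: I just adopted a cat named Miso and I work night shifts."

resp = requests.post(API_URL, json={
    "model": MODEL,
    "messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": conversation},
    ],
    "temperature": 0.2,   # keep the tiny model on a short leash
}, timeout=120)

raw = resp.json()["choices"][0]["message"]["content"]
try:
    memories = json.loads(raw)["memories"]
except (json.JSONDecodeError, KeyError, TypeError):
    memories = []          # small models still emit malformed JSON sometimes
print(memories)
```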
| 2025-04-03T06:12:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqb0u6/best_tinyedge_model_for_auto_memory/
|
Gerdel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqb0u6
| false | null |
t3_1jqb0u6
|
/r/LocalLLaMA/comments/1jqb0u6/best_tinyedge_model_for_auto_memory/
| false | false |
self
| 5 | null |
EXL2 on a workstation with Turing and Ampere GPUs
| 1 |
[removed]
| 2025-04-03T06:20:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqb59m/exl2_on_a_workstation_with_turing_and_ampere_gpus/
|
SectionCrazy5107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqb59m
| false | null |
t3_1jqb59m
|
/r/LocalLLaMA/comments/1jqb59m/exl2_on_a_workstation_with_turing_and_ampere_gpus/
| false | false |
self
| 1 | null |
Switching from 1060 to H100 when training belike
| 1 |
[removed]
| 2025-04-03T06:35:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqbdbk/switching_from_1060_to_h100_when_training_belike/
|
Altruistic_Heat_9531
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbdbk
| false | null |
t3_1jqbdbk
|
/r/LocalLLaMA/comments/1jqbdbk/switching_from_1060_to_h100_when_training_belike/
| false | false | 1 | null |
|
Just asking how good is gemma 3 27b at roleplay
| 0 |
I'm just curious 🤔🤔
| 2025-04-03T06:42:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqbgzc/just_asking_how_good_is_gemma_3_27b_at_roleplay/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbgzc
| false | null |
t3_1jqbgzc
|
/r/LocalLLaMA/comments/1jqbgzc/just_asking_how_good_is_gemma_3_27b_at_roleplay/
| false | false |
self
| 0 | null |
Does (or when) will Openwebui w/ollama API support stable diffusion reasoning models?
| 0 | 2025-04-03T06:42:46 |
StartupTim
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbhdw
| false | null |
t3_1jqbhdw
|
/r/LocalLLaMA/comments/1jqbhdw/does_or_when_will_openwebui_wollama_api_support/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '4iYm6lA2c-O1iujo_iRRlvIydlIyTIISIKHndkBtPJA', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=108&crop=smart&format=png8&s=1ad614e43713b5f3f0a0404a0df722d0175bf7be', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=216&crop=smart&format=png8&s=2b70f4fc71c919677dfe2f94f11a085eb748b520', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=320&crop=smart&format=png8&s=568f255eb3abf54561f1bb29c9def68a9c591eac', 'width': 320}, {'height': 607, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=640&crop=smart&format=png8&s=49baed0f7702773c61e99709faac5af275cd1bf3', 'width': 640}], 'source': {'height': 698, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?format=png8&s=2401f0dec7281725358394af8adf4fe3957e99f0', 'width': 735}, 'variants': {'gif': {'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=108&crop=smart&s=445d293e00d682ee3f7be5fcf5b1847af248f380', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=216&crop=smart&s=08b2ab69975dadb1e0008da5ba3bbe12bcf352a6', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=320&crop=smart&s=76d22d57cfdba2e761b05468e9f38903cdf2d5f8', 'width': 320}, {'height': 607, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=640&crop=smart&s=8192fedb9bdabbb6a1f4641418403a7ce5a8e6c9', 'width': 640}], 'source': {'height': 698, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?s=e38d5ab1d117a0fe3914db3b154dde4ea7d1da0a', 'width': 735}}, 'mp4': {'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=108&format=mp4&s=69e828b43f85287d36c8bfdfb41e65200f653e43', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=216&format=mp4&s=053c6cb847fdc4a26a1022cd5985270a816c86d5', 'width': 216}, {'height': 303, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=320&format=mp4&s=2434f29b5a7eb8faba3ab681f3bbef987f62defd', 'width': 320}, {'height': 607, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?width=640&format=mp4&s=22c82064fbaaacd433d2ebbad9e917bc86d3e909', 'width': 640}], 'source': {'height': 698, 'url': 'https://preview.redd.it/xci0dlo7hgse1.gif?format=mp4&s=af5d1f23f9b6c7fe85bd83b2459a4c1a7d64cfa1', 'width': 735}}}}]}
|
|||
🌸 SPRING SALE! 🌸 10% OFF Sitewide with code FRESH, 20% OFF $200+ with code SPRING20
| 1 | 2025-04-03T06:45:18 |
https://rb.gy/mm3622
|
Amazing-Bee5450
|
rb.gy
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbir2
| false | null |
t3_1jqbir2
|
/r/LocalLLaMA/comments/1jqbir2/spring_sale_10_off_sitewide_with_code_fresh_20/
| false | false |
default
| 1 | null |
|
Simula. A free local Replika-like Chatbot
| 0 |
I just recently released a new Replika-like called [Simula ](https://chatgames.itch.io/simula)on itch.
Features:
Create profiles with a variety of personality types, interests, relationship statuses, and custom background.
Context summarizer to help maintain memory, with the ability to manage your own context length.
Memories that the AI can reference in conversation.
A diary function for more personality over time.
Completely free and runs on your own computer, offline, you manage your data.
If that sounds cool, you can check it out below.
[Simula by ChatGames](https://chatgames.itch.io/simula)
https://preview.redd.it/1muo0ajpjkse1.png?width=1366&format=png&auto=webp&s=4dde856d6c07f3e0f5d924748d1f7d3175fab755
https://preview.redd.it/eiirfzovjkse1.png?width=1919&format=png&auto=webp&s=0d5443556335e6f5c774f56601ff26a165692c11
https://preview.redd.it/5z7wjhqyjkse1.png?width=1855&format=png&auto=webp&s=606ec4e0e1429392ba861c40b3c6178c602e60f7
| 2025-04-03T07:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqbtsk/simula_a_free_local_replikalike_chatbot/
|
Radiant_Dog1937
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbtsk
| false | null |
t3_1jqbtsk
|
/r/LocalLLaMA/comments/1jqbtsk/simula_a_free_local_replikalike_chatbot/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'kAJr_xyZOX0uZY3Nbc7HT4IDJkRM1-er5jpibToT3tc', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/1LgCq9Qf8swggUL_BQfgh32b5C0JaN8Irprsm34Tf6U.jpg?width=108&crop=smart&auto=webp&s=ba0524ed79e24fc7f4c0ccc72ce62848470fba9f', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/1LgCq9Qf8swggUL_BQfgh32b5C0JaN8Irprsm34Tf6U.jpg?width=216&crop=smart&auto=webp&s=041534597b36c996bb8094561ccbe44c48c89a35', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/1LgCq9Qf8swggUL_BQfgh32b5C0JaN8Irprsm34Tf6U.jpg?width=320&crop=smart&auto=webp&s=b7c96d498f1c2498c3bd4d051357fe9be7af9b10', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/1LgCq9Qf8swggUL_BQfgh32b5C0JaN8Irprsm34Tf6U.jpg?auto=webp&s=23dab5a0ab397cc327e70fa0e03b693a6da5c4d4', 'width': 630}, 'variants': {}}]}
|
|
Browser-use - any local LLMs that work?
| 4 |
Hi everyone. Just wondering if anyone is using Browser-use with any local LLMs? In particular, is a multimodal model needed? If so, what do you use and how has your experience been?
I have a 2 x Rtx 3090 system so have used the common text based models, but haven't tried out multimodal models yet.
Thanks in advance.
| 2025-04-03T07:05:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqbtzy/browseruse_any_local_llms_that_work/
|
ZachCope
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqbtzy
| false | null |
t3_1jqbtzy
|
/r/LocalLLaMA/comments/1jqbtzy/browseruse_any_local_llms_that_work/
| false | false |
self
| 4 | null |
Looking for user interface for roleplay stories
| 0 |
I'm not really sure how/where to look, and I have been out of the llm game for a little bit. I'm aware of silly tavern which sounds perfect, but unfortunately fails in one area.
I'm looking for one with lorebooks and such, which I'd say are pretty much a necessity for any story-based UI. I also want one where I can put in an API key as opposed to running the model locally (so things like OpenRouter, etc., or maybe even DeepSeek, as that's quite cheap).
But the biggest requirement is that it needs to be a site/app on mobile, as that's how I'll be using it 95% of the time. I'm looking to transition from NovelAI, as while it is good, it is quite expensive, especially considering it's just a 70B model from last year with 8k context.
I would like for it to somehow link with PC or something, but that isn't too important.
Any help is appreciated :)
| 2025-04-03T07:39:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqcc3g/looking_for_user_interface_for_roleplay_stories/
|
the_doorstopper
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqcc3g
| false | null |
t3_1jqcc3g
|
/r/LocalLLaMA/comments/1jqcc3g/looking_for_user_interface_for_roleplay_stories/
| false | false |
self
| 0 | null |
YourBench: Know which model is the best for your use case in less than 5 min, no matter the topic!
| 127 |
Hi! clefourrier from HF's OpenEvals team!
We open sourced YourBench yesterday, a custom synthetic evaluation framework: from any document, it creates a custom-made QA set, then builds a leaderboard for your specific use case.
It works through multiple steps of chunking, summarization, LLM single- and multi-hop question-and-answer generation, and validation, and so far we've found it works really well for generating interesting QAs!
You can use the demo as is, or customize and download it to run with your favorite models: the best model for diverse questions is Qwen2.5-32B, and the open model generating the most grounded/valid questions is Gemma3-27B (just one place below o3-mini)! You can also set several seeds to augment diversity, complexity, etc.
This work has been carried by our intern, Sumuk, who had a great idea on how to dynamically generate eval sets, and we wrote a paper explaining the full method here: https://huggingface.co/papers/2504.01833
Try it out here: https://huggingface.co/spaces/yourbench/demo
TLDR: Document -> custom made evaluation set -> leaderboard in 5 min
| 2025-04-03T07:53:41 |
https://v.redd.it/xy7fgb43rkse1
|
clefourrier
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqcj89
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xy7fgb43rkse1/DASHPlaylist.mpd?a=1746258834%2CZjgyZjgxZDVlOGU3OTM4MTdjYmE2MmYwM2Q0MWMyOTU5YzEyYWZjMjU4MGIwOTA3ZjdjNDRmMzI3ZmI2Yjc3Ng%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/xy7fgb43rkse1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/xy7fgb43rkse1/HLSPlaylist.m3u8?a=1746258834%2COWYyZjBlOWM2ZTdhM2Y4MGFlMTY2MGYzZjAyNzkwYWJmODM3YTU4OWNiMjM1ZWYzNWEzNjYwYjBiYWFkZTQ0ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xy7fgb43rkse1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jqcj89
|
/r/LocalLLaMA/comments/1jqcj89/yourbench_know_which_model_is_the_best_for_your/
| false | false | 127 |
{'enabled': False, 'images': [{'id': 'NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=108&crop=smart&format=pjpg&auto=webp&s=84f8c92794cbf4d2c54af66c97c96191d4c72df3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=216&crop=smart&format=pjpg&auto=webp&s=5e861bd9befcdd04f37c02bc016be1259f6bc5b9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=320&crop=smart&format=pjpg&auto=webp&s=25996789d28cd8f4e26cb9cab6fa808aab0a634b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=640&crop=smart&format=pjpg&auto=webp&s=02c1bf5a806244799d4b24c5a8aa648e426e7022', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=960&crop=smart&format=pjpg&auto=webp&s=6f71078085f9a8779926183065e8b649452ceb84', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7b94b9d9b4975bd967b7eaf53b47620a0cab5b11', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NTZsOHM4NDNya3NlMWK-C_FeILIR1n-iDldyZrl0VqZ3vuhsUq5jbN6auZ0Q.png?format=pjpg&auto=webp&s=ccbf5fcbb48d2848d031316886a0c09e14b0396d', 'width': 1920}, 'variants': {}}]}
|
|
What happened to Zhuiyi Tech (the inventor of RoPE)?
| 2 |
[https://zhuiyi.ai/about/](https://zhuiyi.ai/about/)
It seems like the last official news was dated Dec 2023. What happened to them since then? Are they still in business?
| 2025-04-03T07:58:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqclzh/what_happened_to_zhuiyi_tech_the_inventor_of_rope/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqclzh
| false | null |
t3_1jqclzh
|
/r/LocalLLaMA/comments/1jqclzh/what_happened_to_zhuiyi_tech_the_inventor_of_rope/
| false | false |
self
| 2 | null |
CSM Finetuning is here!
| 36 |
https://github.com/davidbrowne17/csm-streaming
I added fine-tuning to CSM. Clone my repo, place your audio files into a folder called audio_data, and run lora.py to fine-tune it. You will likely need 12GB+ of VRAM to do it.
| 2025-04-03T08:00:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqcn4q/csm_finetuning_is_here/
|
SovietWarBear17
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqcn4q
| false | null |
t3_1jqcn4q
|
/r/LocalLLaMA/comments/1jqcn4q/csm_finetuning_is_here/
| false | false |
self
| 36 |
{'enabled': False, 'images': [{'id': '_j1Roa_GYxOD6zRX63_hMWvxOT1grIO9c4m65jZZAJw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=108&crop=smart&auto=webp&s=18c6e37ee2cd87ae2f08fb174528187c869537ee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=216&crop=smart&auto=webp&s=750175df4cd3a11319a6d42caf56f6d54ca0a0f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=320&crop=smart&auto=webp&s=a668d1369b0003d1ce5b976ec6fef4e7c73f76f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=640&crop=smart&auto=webp&s=a69ef656fcd052ea297e032dfeb1689fd89202cd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=960&crop=smart&auto=webp&s=e639e75580ca2490744186d627aa1c1b12fae5d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?width=1080&crop=smart&auto=webp&s=c81476c584dbd0a6cf0c79385b36677dcb9b1a7a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uCFqNOTetdPVsALf_IO5NIXh3w4irCMWHli52WkaDhg.jpg?auto=webp&s=0d934a8f7a1291532bdd8bd44788333309ae5b3a', 'width': 1200}, 'variants': {}}]}
|
When chatting with OpenRouter, what's the best way to export and format the chats?
| 1 |
For most of my development use cases OpenRouter has been great for quickly running something against a dozen or so models to find the sweet spot between quality and price for production.
I also love using the OpenRouter website's chat as my go-to chat interface, as it allows me to compare responses from different AIs all in one place.
Some of my conversations have been so good that after some editing (mostly deleting the bad responses and keeping the best ones) I'd like to use these documents in training sessions with others.
Here's the challenge: the training sessions I run are usually based on PDF instructions, and I'd love to extract the OpenRouter chats in a reusable format. I know there's the JSON export, but I'd love to get the actual chat window as PDF or similar.
Is there any tool that can import them, or a way to use OpenRouter with multiple models where I can get well-formatted chats out without having to format them myself?
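One low-tech route while waiting for a dedicated tool: convert the JSON export to Markdown and let pandoc (or any Markdown-to-PDF converter) handle the formatting. A rough sketch; the field names (`messages`, `role`, `model`, `content`) are assumptions about the export format, so adjust them to whatever the actual JSON contains:

```python
import json

# Assumed export shape: {"messages": [{"role": ..., "model": ..., "content": ...}, ...]}
with open("openrouter_export.json", encoding="utf-8") as f:
    chat = json.load(f)

lines = ["# Exported chat\n"]
for msg in chat.get("messages", []):
    who = msg.get("model") or msg.get("role", "unknown")
    lines.append(f"## {who}\n")
    lines.append(msg.get("content", "") + "\n")

with open("chat.md", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))

# Then, for example:  pandoc chat.md -o chat.pdf
```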
| 2025-04-03T08:19:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqcwy1/when_chatting_with_openrouter_whats_the_best_way/
|
second-trilogy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqcwy1
| false | null |
t3_1jqcwy1
|
/r/LocalLLaMA/comments/1jqcwy1/when_chatting_with_openrouter_whats_the_best_way/
| false | false |
self
| 1 | null |
I made an (almost?) universal LLM Creator/Trainer for people that don't have much experience coding.
| 1 |
[removed]
| 2025-04-03T08:21:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqcyfl/i_made_an_almost_universal_llm_creatortrainer_for/
|
KiloXii
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqcyfl
| false | null |
t3_1jqcyfl
|
/r/LocalLLaMA/comments/1jqcyfl/i_made_an_almost_universal_llm_creatortrainer_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'k-eskacSNVUwGzi8CEFDjDwNCyEaifozfsqHSl5F4xI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=108&crop=smart&auto=webp&s=209575673356bd60a1f06396f2b360638e25f0f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=216&crop=smart&auto=webp&s=2c200364eaf2fa18752dec4314885222af859c3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=320&crop=smart&auto=webp&s=cd2ffa54844ff0b8f1dbdede481a613e906343f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=640&crop=smart&auto=webp&s=2b477d7c53bd7e2ad8bb5eca779bc8723fc1ca58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=960&crop=smart&auto=webp&s=69bba93371e3520def3d15720d64c110343e06ea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?width=1080&crop=smart&auto=webp&s=09356730acee363854f35a75c6704580665279ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/trDFGCia3J-9yfNLTgxK0mDyN8FU_8kly7_sZ9jdrxY.jpg?auto=webp&s=cede1d3783168a1c10bb6741289431d809328965', 'width': 1200}, 'variants': {}}]}
|
llm Fine-tuning
| 1 |
[removed]
| 2025-04-03T08:43:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqd9us/llm_finetuning/
|
Intelligent-String20
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqd9us
| false | null |
t3_1jqd9us
|
/r/LocalLLaMA/comments/1jqd9us/llm_finetuning/
| false | false |
self
| 1 | null |
Good Model for Quadro P2000 4gb vram + ~32gb ram
| 4 |
I recently upgraded the ram in my homelab and I was wondering how much that could improve the performance of ollama.
I ran some 7b models just fine before with very limited ram, but now I have roughly 32gb of ram (2666mhz) that I can freely use.
Which model would work best with this setup?
| 2025-04-03T08:49:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqdd4m/good_model_for_quadro_p2000_4gb_vram_32gb_ram/
|
Fusion63
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqdd4m
| false | null |
t3_1jqdd4m
|
/r/LocalLLaMA/comments/1jqdd4m/good_model_for_quadro_p2000_4gb_vram_32gb_ram/
| false | false |
self
| 4 | null |
MCP Servers: The New Security Nightmare
| 1 |
[removed]
| 2025-04-03T08:52:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqdehh/mcp_servers_the_new_security_nightmare/
|
Ok_Address_5158
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqdehh
| false | null |
t3_1jqdehh
|
/r/LocalLLaMA/comments/1jqdehh/mcp_servers_the_new_security_nightmare/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'x012-XwFXFYVNP_gZRBHKi9jlfCxZvfVC-hotYENDck', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=108&crop=smart&auto=webp&s=0f4beb45bf020b823768b66699499fb5947b31d1', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=216&crop=smart&auto=webp&s=c9831ea3a897e5be1c13a91005760b5a0ff71b98', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=320&crop=smart&auto=webp&s=ef4c362838cfc5e3650ca33991bc98979cc29468', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=640&crop=smart&auto=webp&s=d86c7f05fb912bce4eb749cee33bc02b31a123f8', 'width': 640}, {'height': 422, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=960&crop=smart&auto=webp&s=05a525391af4b5f44a0846d7d3ec90628fa556d0', 'width': 960}, {'height': 474, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?width=1080&crop=smart&auto=webp&s=a7d0a3c26511e02123aac75e510b3e3114457d70', 'width': 1080}], 'source': {'height': 524, 'url': 'https://external-preview.redd.it/IuATaHuSBHRni5tgGfoVQBTWqUtoxOQhb5Cd4MAefV0.jpg?auto=webp&s=d95311aea7d2fe4186878163219dce5b12a17515', 'width': 1192}, 'variants': {}}]}
|
AutoView - turning your blueprint into UI components (AI Code Generator)
| 2 | 2025-04-03T09:04:19 |
http://wrtnlabs.io/autoview/articles/autoview-turning-your-blueprint-into-ui-components-ai-code-generator.html
|
Flashy-Literature-75
|
wrtnlabs.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqdl1z
| false | null |
t3_1jqdl1z
|
/r/LocalLLaMA/comments/1jqdl1z/autoview_turning_your_blueprint_into_ui/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'JFC3QH3K5t_yhIDSZEaS8j34ZIzckx0mKsr8uZhpnLs', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/ctz6ek5tTc76HiXgX78wMCMzyj-p72BGEn8j7wkmh90.jpg?width=108&crop=smart&auto=webp&s=2f2ef88a7e49411be6e86451d8e1fed5a69f7945', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/ctz6ek5tTc76HiXgX78wMCMzyj-p72BGEn8j7wkmh90.jpg?width=216&crop=smart&auto=webp&s=38cb3d8a01933a35a95e42c3752f38726347affd', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/ctz6ek5tTc76HiXgX78wMCMzyj-p72BGEn8j7wkmh90.jpg?width=320&crop=smart&auto=webp&s=3c189973df46903adde00912c19e00a70182e3af', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/ctz6ek5tTc76HiXgX78wMCMzyj-p72BGEn8j7wkmh90.jpg?width=640&crop=smart&auto=webp&s=bfddd3630652f32794f89a2123e84d0e2e3116fb', 'width': 640}], 'source': {'height': 336, 'url': 'https://external-preview.redd.it/ctz6ek5tTc76HiXgX78wMCMzyj-p72BGEn8j7wkmh90.jpg?auto=webp&s=d9b756e254e7ad91b7dd2d2bcdf9f313e1106213', 'width': 800}, 'variants': {}}]}
|
||
Hoping someone can point me in the right direction...
| 1 |
[removed]
| 2025-04-03T09:20:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqdtmf/hoping_someone_can_point_me_in_the_right_direction/
|
stephensmat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqdtmf
| false | null |
t3_1jqdtmf
|
/r/LocalLLaMA/comments/1jqdtmf/hoping_someone_can_point_me_in_the_right_direction/
| false | false |
self
| 1 | null |
A little reasoning and coherence test with surprising results (for me at least)
| 1 |
Looking at the benchmarks in recent years, and especially in the past months, I am amazed that LLMs (reasoning or not) achieve high scores in tests where most humans – even those somewhat interested in the topic, up to professionals – would not. I mean, expert humans can still have an edge, but the rest of the human population would struggle to achieve the same performance in the same time.
- competitive math for high school or undergrads
- competitive coding
- general coding (human eval and so on)
- general math that is still not too easy (math eval and so on)
Now I know that LLMs have a lot of knowledge (though, AFAIK, not necessarily lossless) within them, and the test text "hints" at which pieces to pick. But even for coding or some math problems, they still need to put those pieces together well. And they do. It’s impressive, considering they tend to predict the next token based on the previous ones.
Then I thought, "well, if these models ace such tests, surely they can easily solve simple logic games." I was surprised by the results. I was expecting non-reasoning LLMs to solve the test (as they solve math and coding tests), especially Claude, but practically only reasoning models (and not all of them) can.
The test can be made even harder (or easier, with a better prompt). My question isn’t hard at all, as it has many clues. Though my impression is that it needs coherence to be solved: if the LLMs write too much and go astray, it is difficult for them to recover.
Here is the test:
> let's play a game of mastermind or "bulls and cows" with numbers.
> you can use the numbers from 1 to 9 once, and there are only 4 digits that compose the secret code.
> The secret code could be something like 1746.
> Someone did the following attempts, that were marked with "X correct and Y wrong" (X digits in the correct position, Y digits in the wrong position). Could you identify the secret code from this information? Please think step by step and propose a solution.
> 1234: 2 wrong
> 5678: 2 correct
> 5718: 1 correct 2 wrong
> 9261: 1 correct
> 5712: 1 correct, 2 wrong
> 1829: 1 wrong
> 7612: 2 wrong
> 3127: 3 wrong
> 5876: 2 correct
Here are the results (each model got at least 1 attempt). I tried to pick the models I could find following the "hard prompts" category of lmarena - that is helpful more often than not.
### solved
- claude-3-7-sonnet-20250219-thinking-32k
- deepseek-r1
- gemini-2.5-pro-exp-03-25
- grok-3 thinking (grok com 2025-04-03)
- o1-2024-12-17
- o3-mini (medium)
- qwq-32b
### failed (some with a lot of yapping but little to show for it)
- amazon-nova-pro-v1.0
- chatgpt-4o-latest-20250129
- chatgpt-4o-latest-20250326
- claude-3-5-haiku-20241022
- claude-3-5-sonnet-20241022
- claude-3-7-sonnet-20250219
- command-a-03-2025
- deepseek-v3-0324
- gemini-1.5-pro-002
- gemini-2.0-flash-001
- gemini-2.0-flash-lite-preview-02-05
- Gemini-2.0-Flash-Thinking-Exp-01-21
- gemini-2.0-pro-exp-02-05
- gemma 3 27b
- glm-4-plus-0111
- gpt-4o-2024-11-20
- gpt-4o-mini-2024-07-18
- grok-3 (grok com 2025-04-03)
- hunyuan-turbos-20250226
- llama-3.1-405b-instruct-bf16
- llama-3.1-nemotron-70b-instruct
- llama-3.3-70b-instruct
- mistral-large-2411
- mistral-small-24b-instruct-2501
- olmo-2-0325-32b-instruct
- phi-4
- qwen-2.5-max (2025-04-03 qwen-ai com)
- qwen-2.5-max-thinking (2025-04-03 qwen-ai com)
- qwen-max-2025-01-25
- qwen-plus-0125-exp
- qwen2.5-coder-32b-instruct
- qwen2.5-plus-1127
- reka-core-20240904
- step-2-16k-202502
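For anyone who wants to verify the puzzle (or build harder variants) mechanically, here is a small brute-force checker over all 4-digit codes with distinct digits 1-9; it simply prints every code consistent with the clues, so it also shows whether the clue set pins down a unique answer:

```python
from itertools import permutations

# (guess, correct-position count, wrong-position count) from the post
clues = [
    ("1234", 0, 2), ("5678", 2, 0), ("5718", 1, 2), ("9261", 1, 0),
    ("5712", 1, 2), ("1829", 0, 1), ("7612", 0, 2), ("3127", 0, 3),
    ("5876", 2, 0),
]

def score(secret: str, guess: str) -> tuple[int, int]:
    correct = sum(s == g for s, g in zip(secret, guess))
    shared = sum(g in secret for g in guess)   # digits are unique, so this is safe
    return correct, shared - correct

candidates = [
    "".join(p) for p in permutations("123456789", 4)
    if all(score("".join(p), g) == (c, w) for g, c, w in clues)
]
print(candidates)   # every code consistent with all the clues
```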
| 2025-04-03T09:51:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqeahi/a_little_reasoning_and_coherence_test_with/
|
pier4r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqeahi
| false | null |
t3_1jqeahi
|
/r/LocalLLaMA/comments/1jqeahi/a_little_reasoning_and_coherence_test_with/
| false | false |
self
| 1 | null |
Mamba vs Transformers - Research Directions?
| 1 |
[removed]
| 2025-04-03T09:52:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqeas4/mamba_vs_transformers_research_directions/
|
HypoSlyper
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqeas4
| false | null |
t3_1jqeas4
|
/r/LocalLLaMA/comments/1jqeas4/mamba_vs_transformers_research_directions/
| false | false |
self
| 1 | null |
[2503.24187] NeuRaLaTeX: A machine learning library written in pure LaTeX
| 1 | 2025-04-03T09:56:04 |
https://arxiv.org/abs/2503.24187
|
Elven77AI
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqecs7
| false | null |
t3_1jqecs7
|
/r/LocalLLaMA/comments/1jqecs7/250324187_neuralatex_a_machine_learning_library/
| false | false |
default
| 1 | null |
|
China modded 48 GB RTX 4090 training video models at 720p with excellent speed and sold cheaper than RTX 5090 (only 32 GB) - Batch size 4
| 336 | 2025-04-03T10:00:10 |
CeFurkan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqef4d
| false | null |
t3_1jqef4d
|
/r/LocalLLaMA/comments/1jqef4d/china_modded_48_gb_rtx_4090_training_video_models/
| false | false | 336 |
{'enabled': True, 'images': [{'id': 'EOxT5tLXLHkcUa5wDIIWIv2oQgRqJriG41IDIMqhW7M', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=108&crop=smart&auto=webp&s=a86150255bfb5838455f9606d84eac08af01d142', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=216&crop=smart&auto=webp&s=80dc75d48721317b59d9c543e52b23e5f55713fe', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=320&crop=smart&auto=webp&s=11c92eacc16c0a5f9e8d77dd0dc26c05f8439717', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=640&crop=smart&auto=webp&s=c47a9c1cd7cd7f716e71f6e9161de09e2cf497a4', 'width': 640}, {'height': 438, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=960&crop=smart&auto=webp&s=85ee0792a38453efc043532303e359f3fd514a21', 'width': 960}, {'height': 492, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?width=1080&crop=smart&auto=webp&s=37a04df3b02e961abae2e7a05a27a40c7b7c8995', 'width': 1080}], 'source': {'height': 1278, 'url': 'https://preview.redd.it/x9zbqai7flse1.png?auto=webp&s=79f0185fcfa4b2bf47e20256b483aaa0febeefcf', 'width': 2800}, 'variants': {}}]}
|
|||
I Built a Game Where AI Tries to Fool You
| 1 |
[removed]
| 2025-04-03T10:29:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqewm0/i_built_a_game_where_ai_tries_to_fool_you/
|
Recent-Committee-186
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqewm0
| false | null |
t3_1jqewm0
|
/r/LocalLLaMA/comments/1jqewm0/i_built_a_game_where_ai_tries_to_fool_you/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'zlCZq2nIgQnN24ORqcZhQ7HCPGuVIKNEOA3BDMOnk-Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=108&crop=smart&auto=webp&s=1b1223c502224f3346200563d28c5e1ba415fa20', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=216&crop=smart&auto=webp&s=6f91b52c3a69f8e14999f05531f093a502d63223', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=320&crop=smart&auto=webp&s=3210141faf166ef7effee3fbd30f678de4ae7cdf', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=640&crop=smart&auto=webp&s=5c630599ca56755252db1f22a4f68e5630ad4d6c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=960&crop=smart&auto=webp&s=e64838159ab4e07092997c9fc46c90b4b154fef6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?width=1080&crop=smart&auto=webp&s=385585ad21bf0cebad659172817987318579c56d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MJWrJg2Myg2i6mi_jMPmO2bBsZhe6269xV01N83xBCc.jpg?auto=webp&s=5abcdb944d88a9cb435db8a1a889fa3b8bbd6d8d', 'width': 1200}, 'variants': {}}]}
|
Need help from RAM giant to create whisper tflite model
| 6 |
I have developed a local Android input method based on Whisper which is available on F-Droid (https://f-droid.org/de/packages/org.woheller69.whisper/). I would like to improve the tflite model but the creation seems to require about 96GB of RAM (in the end the model has around 100MB...)
Maybe one of the RAM giants from here, who knows how to run a Colab with local runtime, wants to help?
[https://github.com/woheller69/whisperIME/issues/71](https://github.com/woheller69/whisperIME/issues/71)
| 2025-04-03T10:35:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqezs6/need_help_from_ram_giant_to_create_whisper_tflite/
|
DocWolle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqezs6
| false | null |
t3_1jqezs6
|
/r/LocalLLaMA/comments/1jqezs6/need_help_from_ram_giant_to_create_whisper_tflite/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': 'py2oEetDqe6GxETL393Wd0wGu8iH-W39KrMSryoFj1s', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QXTo5cD4zLqM8Ka85GXEsrmBiCslCFbZBeZpgYR98Zs.jpg?width=108&crop=smart&auto=webp&s=0c3e8f27023a2f02871e78b1690f112e55ca6b96', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/QXTo5cD4zLqM8Ka85GXEsrmBiCslCFbZBeZpgYR98Zs.jpg?width=216&crop=smart&auto=webp&s=d4a6e2c6e8d24cbc14774603533215b548bba4a3', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/QXTo5cD4zLqM8Ka85GXEsrmBiCslCFbZBeZpgYR98Zs.jpg?width=320&crop=smart&auto=webp&s=f5a01301367e71d09e10e51a97a8a5927949448b', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/QXTo5cD4zLqM8Ka85GXEsrmBiCslCFbZBeZpgYR98Zs.jpg?auto=webp&s=b901606a4d802204e3172b277a7c28a54f3fe267', 'width': 512}, 'variants': {}}]}
|
Best Self-Hosted Models for Extracting Data from Invoices & Statements?
| 3 |
I’m planning to self-host local models and would love some suggestions on which models to use and their GPU requirements.
My use case is straightforward: I need a high-performing model that can extract data from invoices and bank statements. I’ve already built an MVP using Mistral Small 3.1 24B and GPT-4o via OpenRouter, and both perform well. However, I want to avoid sending sensitive financial documents to third-party APIs, so I’m looking to self-host a model instead.
What models would you recommend for this task, and what are their GPU requirements? Any insights or experiences would be greatly appreciated!
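As a rough illustration of the self-hosted route rather than a model recommendation: any local server that speaks the OpenAI API (vLLM, llama.cpp's llama-server, Ollama) can be dropped in for OpenRouter by pointing the client's `base_url` at it. The endpoint, model name, and output schema below are placeholders:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

PROMPT = """Extract from the invoice below and answer with JSON only:
{"vendor": str, "invoice_number": str, "date": "YYYY-MM-DD", "total": float}

Invoice text:
"""

invoice_text = open("invoice_0001.txt", encoding="utf-8").read()

resp = client.chat.completions.create(
    model="local-model",           # whatever model the local server is serving
    messages=[{"role": "user", "content": PROMPT + invoice_text}],
    temperature=0,
)
print(resp.choices[0].message.content)
```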
| 2025-04-03T10:53:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqfatm/best_selfhosted_models_for_extracting_data_from/
|
IamDJoker07
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqfatm
| false | null |
t3_1jqfatm
|
/r/LocalLLaMA/comments/1jqfatm/best_selfhosted_models_for_extracting_data_from/
| false | false |
self
| 3 | null |
Gemma 3 Reasoning Finetune for Creative, Scientific, and Coding
| 157 | 2025-04-03T11:13:31 |
https://huggingface.co/Tesslate/Synthia-S1-27b
|
United-Rush4073
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqfnmh
| false | null |
t3_1jqfnmh
|
/r/LocalLLaMA/comments/1jqfnmh/gemma_3_reasoning_finetune_for_creative/
| false | false | 157 |
{'enabled': False, 'images': [{'id': 'NAY8hjdfkfYWKrdt3PQMv0uEn0lXBAXngaSAhIvbj84', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=108&crop=smart&auto=webp&s=5026137f8f4a80be416855d96df25a998b31ec93', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=216&crop=smart&auto=webp&s=9460c7e8e947946c38a6c3f390754d81b74856c7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=320&crop=smart&auto=webp&s=d9957d475f0179d27b7500c6e0d1f34f589d69b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=640&crop=smart&auto=webp&s=7b350abd537aa130a3efa570e482b7dc669f3843', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=960&crop=smart&auto=webp&s=e030c06473050f387c6c0fa443111f56fa9428c8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?width=1080&crop=smart&auto=webp&s=63919db9a4088df84a71b4873d95c51c48e32469', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KfSCldbxB1IZkgQJUxzx3I66OmXlBpPyIYpJisUrymw.jpg?auto=webp&s=52dbf33fc80c1d3369e3e99cce438f88ac6fd77f', 'width': 1200}, 'variants': {}}]}
|
||
Continuing on the topic of GPU sounds, are those GPU sounds normal? /s
| 1 | 2025-04-03T11:32:44 |
https://www.youtube.com/watch?v=16SaovQkOpY
|
its5Q
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqg0ce
| false |
{'oembed': {'author_name': 'v', 'author_url': 'https://www.youtube.com/@vwvvvww', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/16SaovQkOpY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Hey guys, is this normal for my GPU to sound like this?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/16SaovQkOpY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Hey guys, is this normal for my GPU to sound like this?', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jqg0ce
|
/r/LocalLLaMA/comments/1jqg0ce/continuing_on_the_topic_of_gpu_sounds_are_those/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'g_gjckAy6H6CUWj4mtPLp90nxJTrDxQUW_7DwzPEs3I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/CKkgFKrdb474YFHGKRngDtwTEwzQ6lcoT346IxcL9xk.jpg?width=108&crop=smart&auto=webp&s=9fe761829932302b45b29697ac446c4cee2a7b1c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/CKkgFKrdb474YFHGKRngDtwTEwzQ6lcoT346IxcL9xk.jpg?width=216&crop=smart&auto=webp&s=d972e7dd3d448262c913187ef77bb73442bcaad3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/CKkgFKrdb474YFHGKRngDtwTEwzQ6lcoT346IxcL9xk.jpg?width=320&crop=smart&auto=webp&s=04dd6487120d770e44da721b7a94a83e294c2fdd', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/CKkgFKrdb474YFHGKRngDtwTEwzQ6lcoT346IxcL9xk.jpg?auto=webp&s=809c8f421d296337bfdd3783047ba409ff73699e', 'width': 480}, 'variants': {}}]}
|
||
Tell me the best cloud provider that is best for finetuning
| 0 |
I need to fine-tune all types of SLMs (Small Language Models) for a variety of tasks. Tell me which cloud provider is the best overall for this.
| 2025-04-03T12:11:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqgs06/tell_me_the_best_cloud_provider_that_is_best_for/
|
WriedGuy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqgs06
| false | null |
t3_1jqgs06
|
/r/LocalLLaMA/comments/1jqgs06/tell_me_the_best_cloud_provider_that_is_best_for/
| false | false |
self
| 0 | null |
I suspect many of you are trying to find out...
| 1 | 2025-04-03T12:28:44 |
Thrumpwart
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqh4gf
| false | null |
t3_1jqh4gf
|
/r/LocalLLaMA/comments/1jqh4gf/i_suspect_many_of_you_are_trying_to_find_out/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '2yqAvH5kA55ymMyX-rlQXbUWdPxM5HuCViba7-WbKuY', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?width=108&crop=smart&auto=webp&s=e93aa87914af9a2e9a0443e8eb80ee7d6f17c9ed', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?width=216&crop=smart&auto=webp&s=68563a2bbbcda7c1d56be8c9451a67c28fee319b', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?width=320&crop=smart&auto=webp&s=c0ddcf215f087696d8c6a85a1183b7dfd285f669', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?width=640&crop=smart&auto=webp&s=c74bead7766ae77a95612981f9971ef6b9a23e4d', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?width=960&crop=smart&auto=webp&s=9f252174ebaed884dc0aeb4c6204b517989d9c35', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/eo8ind9r5mse1.jpeg?auto=webp&s=e897f432daeefacc1ec5a876eca7956afd84516c', 'width': 1024}, 'variants': {}}]}
|
|||
Personal experience with local&commercial LLM's
| 22 |
I have the luxury of having 2x 3090's at home and access to MS Copilot / 4o / 4o-mini at work. I've used a load of models extensively the past couple of months; regarding the non-reasoning models, I value the models as follows;
**--10B +-**
* *Not really intelligent, makes lots of basic mistakes*
* *Doesn't follow instructions to the letter*
* *However, really good at "vibe check"*
* *Writing text that sounds good*
\#1 Mistral Nemo
**--30B +-**
* *Semi-intelligent, can follow basic tasks without major mistakes. For example: here's a list of people + phone numbers and another list of people + addresses; combine the lists and give the phone number and address of each person*
* *Very fast generation speed*
\#3 Mistral Small
\#2 Qwen2.5B 32B
\#1 4o-mini
**--70B +-**
* Follows more complex tasks without major mistakes
* Trade-off: lower generation speed
\#3 Llama3.3 70B
\#2 4o / Copilot; considering how much these cost in corporate settings, their performance is really disappointing
\#1 Qwen2.5 72B
**--Even better;**
* Follows even more complex tasks without mistakes
\#4 DeepSeek V3
\#3 Gemini models
\#2 Sonnet 3.7; I actually prefer 3.5 to this
\#1 DeepSeek V3 0324
**--Peak**
\#1 Sonnet 3.5
I think the picture is clear, basically, for a complex coding / data task I would confidently let Sonnet 3.5 do its job and return after a couple of minutes expecting a near perfect output.
DeepSeekV3 would need 2 iterations +-. A note here is that I think DS V3 0324 would suffice for 99% of the cases, but it's less usable due to timeouts / low generation speed. Gemini is a good, fast and cheap tradeoff.
70B models, probably 5 back and forths
For the 30B models even more, and probably I'll have to invest some thinking in order to simplify the problem so the LLM can solve it.
| 2025-04-03T12:43:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqhff1/personal_experience_with_localcommercial_llms/
|
zoom3913
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqhff1
| false | null |
t3_1jqhff1
|
/r/LocalLLaMA/comments/1jqhff1/personal_experience_with_localcommercial_llms/
| false | false |
self
| 22 | null |
Confused with Too Many LLM Benchmarks, What Actually Matters Now?
| 72 |
Trying to make sense of the constant benchmarks for new LLM advancements in 2025.
Since the early days of GPT‑3.5, we've witnessed countless benchmarks and competitions — MMLU, HumanEval, GSM8K, HellaSwag, MLPerf, GLUE, etc. — and it's getting overwhelming.
I'm curious, so it's the perfect time to ask the Reddit folks:
1. What’s your go-to benchmark?
2. How do you stay updated on benchmark trends?
3. **What Really Matters**
4. Your take on benchmarking in general
I guess my question could be summarized as: what genuinely indicates better performance vs. hype?
**feel free to share your thoughts, experiences or HOT Takes.**
| 2025-04-03T12:45:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqhgzq/confused_with_too_many_llm_benchmarks_what/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqhgzq
| false | null |
t3_1jqhgzq
|
/r/LocalLLaMA/comments/1jqhgzq/confused_with_too_many_llm_benchmarks_what/
| false | false |
self
| 72 | null |
2x rtx 5070 vs 1x rtx 5080
| 8 |
Hi All!
I’m trying to decide between 2x rtx 5070 (approx $1100 msrp total) or 1x rtx 5080.
I currently have a gtx 1080, which I believe I could still use in conjunction with both of these.
Other important specs:
CPU: i9 14900k
RAM: 32x2 + 16x2 ddr5. Still trying to get stability with all 4 sticks, so just using 32x2 for now
PSU wattage: 1250W
Workloads (proxmox):
- standard home automation stuff (home assistant, wireguard, pihole, etc)
- gaming vm (windows) with gpu pass through
- openwebui/ollama (currently running on cpu/ram)
Usage: I’m an ML developer, so this is more of a homelab/experimentation setup than a gaming setup, though I would like the ability to game via vm (ex: baldurs gate, don’t need the max settings on all games).
What do you all think?
| 2025-04-03T13:21:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqiajy/2x_rtx_5070_vs_1x_rtx_5080/
|
thosehippos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqiajy
| false | null |
t3_1jqiajy
|
/r/LocalLLaMA/comments/1jqiajy/2x_rtx_5070_vs_1x_rtx_5080/
| false | false |
self
| 8 | null |
Oh this is good , manus on hugging face
| 0 | 2025-04-03T13:24:21 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqictx
| false | null |
t3_1jqictx
|
/r/LocalLLaMA/comments/1jqictx/oh_this_is_good_manus_on_hugging_face/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'NAjqVR18_jQZYwm0R1TmkGbLBFUUjgh6FQyGi5nS1lc', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=108&crop=smart&auto=webp&s=6d3e83f7ade388106d7cf18c45ddcd396cc1955c', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=216&crop=smart&auto=webp&s=858a9366dd977d1e097fcbb2bd514dfd67ee6467', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=320&crop=smart&auto=webp&s=9274a3f7d628f9bed7ebaedf3e339d2c0be0e054', 'width': 320}, {'height': 801, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=640&crop=smart&auto=webp&s=4f675843bf8fe5bb17924610b392033ac3486d43', 'width': 640}, {'height': 1201, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=960&crop=smart&auto=webp&s=6d12aea737da6bb8397fdd4486981a6528a75c82', 'width': 960}, {'height': 1352, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?width=1080&crop=smart&auto=webp&s=159b8e89127076ef93d38c557afdc20617f40ae2', 'width': 1080}], 'source': {'height': 1352, 'url': 'https://preview.redd.it/1hb8gq6ofmse1.jpeg?auto=webp&s=8b2cead48252b9841283497d2961db6aedb871f5', 'width': 1080}, 'variants': {}}]}
|
|||
How exactly to run MCP servers via local LLM
| 6 |
IDK the exact terminology or if it's possible, but in the way that Claude's functionality can be extended with MCP servers, is there a way to use other LLMs, say Google Gemini 2.5 Pro (or the local Gemma models), with the MCP servers from Smithery etc., to extend the capabilities of local/open-source models? That would truly be amazing.
| 2025-04-03T13:35:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqim2d/how_exactly_to_run_mcp_servers_via_local_llm/
|
sandwich_stevens
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqim2d
| false | null |
t3_1jqim2d
|
/r/LocalLLaMA/comments/1jqim2d/how_exactly_to_run_mcp_servers_via_local_llm/
| false | false |
self
| 6 | null |
Security vulnerabilities with Ryzen AI / NPU CPUs
| 48 |
There are a bunch of recent [security issues](https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7037.html) in the driver for the NPU, as well as in related software. Basically, a malicious AI model could install malware on the local machine when executed via the NPU. If the developer SDK is also installed, it could even easily get administrator permissions despite running under a restricted account.
There's a [software update](https://ryzenai.docs.amd.com/en/latest/inst.html) available where the issues have been fixed, but to download it you need to [log in](https://account.amd.com/en/forms/downloads/amd-end-user-license-xef.html?filename=NPU_RAI1.4_GA_257_WHQL.zip) first. Basic drivers for your hardware should be freely accessible, especially when it comes to security updates, and not kept behind a login wall.
| 2025-04-03T13:38:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqip5f/security_vulnerabilities_with_ryzen_ai_npu_cpus/
|
Chromix_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqip5f
| false | null |
t3_1jqip5f
|
/r/LocalLLaMA/comments/1jqip5f/security_vulnerabilities_with_ryzen_ai_npu_cpus/
| false | false |
self
| 48 |
{'enabled': False, 'images': [{'id': 'VY-gY2HYKYcXNyATN6eFOZXOvhwsQtwwRY3tWSlZJMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=108&crop=smart&auto=webp&s=bf9ed3573a3db5d3e44a72830f8426517a91377c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=216&crop=smart&auto=webp&s=f524b797a7ea6865e919d4617ae1dcf2b6c6a2af', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=320&crop=smart&auto=webp&s=10a67c110f3a769d2bfe608f2b3a86f3166e246f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=640&crop=smart&auto=webp&s=55af9887767b7ab9df9c7ca842d03265592ce4ea', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=960&crop=smart&auto=webp&s=d292a516385f47874c32d38cd8df66e9fb7ad712', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=1080&crop=smart&auto=webp&s=5b59e2297e5600bee5b6085195e17dda1c43324f', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?auto=webp&s=c9e47dc7ead939c14e8582ec6e213ea7b7903190', 'width': 1200}, 'variants': {}}]}
|
Fully Featured AI Coding Agent as MCP Server (or for local model)
| 53 |
We've been working like hell on this one: a fully capable agent, as good as or better than Windsurf's Cascade, Claude Code or Cursor's agent - but it can be used for free.
It can run as an MCP server, so you can use it for free with Claude Desktop, and it can still fully understand a code base, even a very large one. We did this by using a language server instead of RAG to analyze code.
Can also run it on any model, including local ones.
Check it out, super easy to run, GPL license:
[https://github.com/oraios/serena](https://github.com/oraios/serena)
| 2025-04-03T14:02:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqj9a7/fully_featured_ai_coding_agent_as_mcp_server_or/
|
Left-Orange2267
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqj9a7
| false | null |
t3_1jqj9a7
|
/r/LocalLLaMA/comments/1jqj9a7/fully_featured_ai_coding_agent_as_mcp_server_or/
| false | false |
self
| 53 |
{'enabled': False, 'images': [{'id': 'FYRoTHxR0XuuV2FSWeTmwyNaYAm1XxhUw8QPmGP6eYY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=108&crop=smart&auto=webp&s=7c8335c0c0ca400458811aca553d69081f7c091d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=216&crop=smart&auto=webp&s=2779d0c1d0a7e0893ab2c3c0f75dacba66c27168', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=320&crop=smart&auto=webp&s=4ed89d376de4b39bce14bc122282a614397405d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=640&crop=smart&auto=webp&s=d6d72f4ad2eebf47013a6e51245ed44897f25277', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=960&crop=smart&auto=webp&s=d07af7387e05f2038e9b016d9e3325aeeeec1eb5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?width=1080&crop=smart&auto=webp&s=fd89a30dd2e36448ca5a101b960000271f962899', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jzL8oOQvzcOOIZL9RVXgUkCOeuVzrGCFQiwOBN5k4BE.jpg?auto=webp&s=c3a153404894ddb84108e299a0e66b29acf7b0a1', 'width': 1200}, 'variants': {}}]}
|
Generating multiple prompts and fusing them into one is the best way of improving responses by increasing inference time - do you think we'll see CoT going to local models?
| 0 | 2025-04-03T14:21:21 |
juanviera23
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqjq8q
| false | null |
t3_1jqjq8q
|
/r/LocalLLaMA/comments/1jqjq8q/generating_multiple_prompts_and_fusing_them_into/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'mz3uzaVA-NRfWqFO20uFXfPrVf8lnKvLBq7AAnHSjp4', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=108&crop=smart&auto=webp&s=3595923bbfbd7d9d7979bf91d0021579fb17d8b8', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=216&crop=smart&auto=webp&s=c7de75ecae79b777e9177bc27afb15b566c0f752', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=320&crop=smart&auto=webp&s=942f51272d9688ac519f2b35a9e64d424ca8c221', 'width': 320}, {'height': 256, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=640&crop=smart&auto=webp&s=4d28665b5ffd4c1c26b209505abf98e5698d6421', 'width': 640}, {'height': 385, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=960&crop=smart&auto=webp&s=291e50f353e5d3689676d0de4c8a48e3e2c883ce', 'width': 960}, {'height': 433, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?width=1080&crop=smart&auto=webp&s=5a7e3b04df5fb361e8eaf8afaaa833ff0c64c0ca', 'width': 1080}], 'source': {'height': 1060, 'url': 'https://preview.redd.it/kv9vibjnpmse1.jpeg?auto=webp&s=009b0763312ec06e810df7684bcd7b324bc1197b', 'width': 2640}, 'variants': {}}]}
|
|||
Your favourite selfhosted and Proprietary LLM list
| 0 |
1.Meta-Llama-3-70B
2.amazon-nova-lite
3.o1-mini-08-17
| 2025-04-03T14:30:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqjych/your_favourite_selfhosted_and_proprietary_llm_list/
|
emptybrain22
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqjych
| false | null |
t3_1jqjych
|
/r/LocalLLaMA/comments/1jqjych/your_favourite_selfhosted_and_proprietary_llm_list/
| false | false |
self
| 0 | null |
vLLM - Kaggle 2 T4 GPU - How to deploy models on different gpus?
| 1 |
[removed]
| 2025-04-03T14:50:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqkgru/vllm_kaggle_2_t4_gpu_how_to_deploy_models_on/
|
reitnos
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqkgru
| false | null |
t3_1jqkgru
|
/r/LocalLLaMA/comments/1jqkgru/vllm_kaggle_2_t4_gpu_how_to_deploy_models_on/
| false | false |
self
| 1 | null |
Understanding Quantization Labels: How to Assign Them?
| 0 |
I am new to quantization and trying to understand how to decide quantization labels for a model. How do you determine the appropriate quantization labels for specific model layers? What factors should I consider when assigning quantization labels?
What I know so far:
1. GGUF - It can quantize the model for inference, but I don't know how to do this for a video-text-to-text model. As far as I know, llama.cpp is only for llama-based models.
| 2025-04-03T15:01:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqkqj3/understanding_quantization_labels_how_to_assign/
|
Himanshu40-c
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqkqj3
| false | null |
t3_1jqkqj3
|
/r/LocalLLaMA/comments/1jqkqj3/understanding_quantization_labels_how_to_assign/
| false | false |
self
| 0 | null |
Sales transcript training
| 1 |
[removed]
| 2025-04-03T15:09:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqky45/sales_transcript_training/
|
agentsmith444
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqky45
| false | null |
t3_1jqky45
|
/r/LocalLLaMA/comments/1jqky45/sales_transcript_training/
| false | false |
self
| 1 | null |
Help with awq
| 1 |
I'm sorry if this has already been answered here.
I'm trying to use Gemma3-27b, but I want the AWQ version.
Is there any way to convert a model to AWQ without loading it in memory? My real issue is that I don't have much RAM and I'm trying to work with models like gemma3-27b and qwen-72b.
A little info:
I have tried qwen2.5-32b-awq and it fills the memory of the device I have,
and I wanted to use a larger model in hopes that the output quality will increase.
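For reference, the standard AutoAWQ conversion flow looks roughly like the sketch below (the model id and quant config are placeholders, not a recipe), and as far as I can tell it loads the full weights for calibration, which is exactly where my RAM runs out:

```python
# Rough AutoAWQ sketch (assumes the autoawq and transformers packages are
# installed). The model id and quant_config values are illustrative only.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2.5-32B-Instruct"   # placeholder source model
quant_path = "qwen2.5-32b-awq"             # output directory

quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)   # loads the full weights
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibration + quantization pass, then save the AWQ checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```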
| 2025-04-03T15:12:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jql1bl/help_with_awq/
|
bjain1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jql1bl
| false | null |
t3_1jql1bl
|
/r/LocalLLaMA/comments/1jql1bl/help_with_awq/
| false | false |
self
| 1 | null |
Does anyone else kinda love the coil whine noise as the LLM spins up?
| 41 |
The first time I heard the faint screech as a model started doing its thing, I was afraid my GPU was fucked up... a year later, I've come to almost see it as the dial up modem tone of yesteryear - a small sound that let me know good things are coming in just a moment! Seems like every model has its own little song, and the tones during inference on a Mac are very different than the ones I get out of my nvidia GPUs. It makes me weirdly nostalgic, and now it's almost a comforting indicator that things are working rather than a warning flag.
| 2025-04-03T15:16:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jql4ia/does_anyone_else_kinda_love_the_coil_whine_noise/
|
taylorwilsdon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jql4ia
| false | null |
t3_1jql4ia
|
/r/LocalLLaMA/comments/1jql4ia/does_anyone_else_kinda_love_the_coil_whine_noise/
| false | false |
self
| 41 | null |
kv cache quants in llamacpp, 5_1 and 5_0
| 4 |
Has anyone tested the performance of 5_1 and 5_0 kv cache quants in llamacpp?
I had seen some tests showing that 4_0 K cache quants substantially decreased performance in certain models, and that 8_0 is recommended. I am wondering if anyone has experience with 5_1 and 5_0 quants for the kv cache.
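For anyone who wants to reproduce a comparison, the cache types are just flags in llama.cpp, so it is easy to script; a small sketch (the binary path, model file and test file are placeholders, and as far as I know a quantized V cache requires flash attention to be enabled):

```python
# Sketch: run llama.cpp's perplexity tool across several KV cache quant types.
# Binary path, model file and test file are placeholders; flag names are from
# llama.cpp's --help and may change between versions.
import subprocess

for cache_type in ["f16", "q8_0", "q5_1", "q5_0", "q4_0"]:
    print(f"=== KV cache type: {cache_type} ===")
    subprocess.run(
        ["./llama-perplexity",
         "-m", "model.gguf",
         "-f", "wiki.test.raw",
         "--cache-type-k", cache_type,
         "--cache-type-v", cache_type,
         "-fa"],                # flash attention; needed for quantized V cache
        check=True,
    )
```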
| 2025-04-03T15:22:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqla4i/kv_cache_quants_in_llamacpp_5_1_and_5_0/
|
EasternBeyond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqla4i
| false | null |
t3_1jqla4i
|
/r/LocalLLaMA/comments/1jqla4i/kv_cache_quants_in_llamacpp_5_1_and_5_0/
| false | false |
self
| 4 | null |
What are you guys waiting for in the AI world this month?
| 135 |
For me, it’s:
* **Llama 4**
* **Qwen 3**
* **DeepSeek R2**
* **Gemini 2.5 Flash**
* **Mistral’s new model**
* **Diffusion LLM model API on OpenRouter**
| 2025-04-03T15:33:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqlkfp/what_are_you_guys_waiting_for_in_the_ai_world/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqlkfp
| false | null |
t3_1jqlkfp
|
/r/LocalLLaMA/comments/1jqlkfp/what_are_you_guys_waiting_for_in_the_ai_world/
| false | false |
self
| 135 | null |
What can I use to test information extraction (ideally locally) on a laptop?
| 1 |
I have several thousand documents (HTML / text / PDF) that contain information I need to extract (event details).
Since it is a hobby project, I'm wondering whether there is anything available that could accurately extract 60-80% of the events in those documents while running locally / on cheap hardware?
It does not have to be fast at all.
I'd like to test around on my laptop and, if I see any acceptable results, deploy it onto a VPS or a desktop PC with a GPU or similar to just run it at home.
If there are any models I should check out, do you have a hint on how to work with them as well?
Ideally, it would be (for testing at least) not a Python solution but some sort of UI,
and if something looks promising, I could build a bit of Python code around it as well.
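For what it's worth, the bit of Python I would eventually build around it is probably just structured extraction against a local server, something like this sketch (the model name and the event fields are placeholders):

```python
# Sketch: ask a local model (via the ollama Python package) to pull event
# details out of one document as JSON. Model name and fields are placeholders.
import json
import ollama

doc_text = open("event_page.txt", encoding="utf-8").read()

prompt = (
    "Extract all events from the following document as a JSON list. "
    "Each event should have: title, date, time, location. "
    "Return only JSON.\n\n" + doc_text
)

response = ollama.chat(
    model="qwen2.5:7b",        # placeholder: any capable small local model
    messages=[{"role": "user", "content": prompt}],
    format="json",             # constrain the reply to valid JSON
)

events = json.loads(response["message"]["content"])
print(events)
```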
| 2025-04-03T15:53:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqm2o3/what_can_i_use_to_test_information_extraction/
|
Chris8080
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqm2o3
| false | null |
t3_1jqm2o3
|
/r/LocalLLaMA/comments/1jqm2o3/what_can_i_use_to_test_information_extraction/
| false | false |
self
| 1 | null |
Need Advice on API Key Management with Ollama & Terms of Service
| 1 |
[removed]
| 2025-04-03T16:05:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqmdzs/need_advice_on_api_key_management_with_ollama/
|
Lazy-Dragonfly7825
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqmdzs
| false | null |
t3_1jqmdzs
|
/r/LocalLLaMA/comments/1jqmdzs/need_advice_on_api_key_management_with_ollama/
| false | false |
self
| 1 | null |
Deploy your own ChatGPT Operator on macOS
| 0 |
We've heard your feedback that setting up Computer-Use Agents can be challenging! Today we're launching a new blogpost series to help. First up: Build Your Own Operator on macOS
A step-by-step guide to pairing OpenAI's computer-use-preview model with a macOS VM sandbox.
Why build your own instead of using ChatGPT's Operator?
- Control native macOS apps, not just web
- Better privacy with local VMs
- Full access to system-level operations
- Superior performance on your hardware
Our guide covers everything you need:
- VM setup with Lume CLI
- Connecting to OpenAI's model
- Building the action loop
- Complete working Python code
[https://www.trycua.com/blog/build-your-own-operator-on-macos-1](https://www.trycua.com/blog/build-your-own-operator-on-macos-1)
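If you just want the shape of the loop before reading the post, the first step looks roughly like the sketch below (field names follow OpenAI's computer-use-preview Responses API as we understand it and may need adjusting; the complete, working version with the VM glue is in the guide):

```python
# Condensed sketch of the first request in the agent loop; the blog post has
# the complete, working version. API field names here may need adjusting.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="computer-use-preview",
    tools=[{
        "type": "computer_use_preview",
        "display_width": 1024,
        "display_height": 768,
        "environment": "mac",
    }],
    input=[{"role": "user", "content": "Open Safari and search for local LLMs"}],
    truncation="auto",
)

# The model replies with zero or more computer_call items; each one is an
# action (click, type, scroll, ...) to execute inside the macOS VM. You then
# screenshot the VM and send the image back as the call output to continue
# the loop - that round-trip is what the rest of the guide covers.
for item in response.output:
    if item.type == "computer_call":
        print(item.action)
```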
| 2025-04-03T16:06:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqmeys/deploy_your_own_chatgpt_operator_on_macos/
|
Pretend-Map7430
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqmeys
| false | null |
t3_1jqmeys
|
/r/LocalLLaMA/comments/1jqmeys/deploy_your_own_chatgpt_operator_on_macos/
| false | false |
self
| 0 | null |
LocalScore - Local LLM Benchmark
| 26 |
I'm excited to share [LocalScore](https://localscore.ai) with y'all today. I love local AI and have been writing a local LLM benchmark over the past few months. It's aimed at being a helpful resource for the community in regards to how different GPU's perform on different models.
You can download it and give it a try here: https://localscore.ai/download
The code for both the benchmarking client and the website is open source. This was very intentional so that, together, we can make a great resource for the community through feedback and contributions.
Overall the benchmarking client is pretty simple. I chose a set of tests which hopefully are fairly representative of how people will be using LLMs locally. Each test is a combination of different prompt and text generation lengths. We will definitely be taking community feedback to make the tests even better. It runs through these tests measuring:
1. Prompt processing speed (tokens/sec)
2. Generation speed (tokens/sec)
3. Time to first token (ms)
We then combine these three metrics into a single score called the LocalScore. The website is a database of results from the benchmark, allowing you to explore the performance of different models and hardware configurations.
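To give a feel for what "combining into one number" means, here is a toy illustration (this is not the exact LocalScore formula - that lives in the open-source CLI linked below and may weight things differently):

```python
# Toy illustration of collapsing the three measurements into one score.
# NOT the exact LocalScore formula - check the CLI source for the real one.
def toy_combined_score(prompt_tps: float, gen_tps: float, ttft_ms: float) -> float:
    # Higher prompt/generation throughput is better; lower time-to-first-token
    # is better, so invert it. A geometric mean keeps one bad metric from
    # being hidden by the other two.
    return (prompt_tps * gen_tps * (1000.0 / ttft_ms)) ** (1.0 / 3.0)

print(round(toy_combined_score(prompt_tps=750.0, gen_tps=45.0, ttft_ms=250.0), 1))
```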
Right now we are only supporting single GPUs for submitting results. You can have multiple GPUs but LocalScore will only run on the one of your choosing. Personally I am skeptical of the long term viability of multi GPU setups for local AI, similar to how gaming has settled into single GPU setups. However, if this is something you really want, open a GitHub discussion so we can figure out the best way to support it!
Give it a try! I would love to hear any feedback or contributions!
If you want to learn more, here are some links:
- Website: https://localscore.ai
- Demo video: https://youtu.be/De6pA1bQsHU
- Blog post: https://localscore.ai/blog
- CLI Github: https://github.com/Mozilla-Ocho/llamafile/tree/main/localscore
- Website Github: https://github.com/cjpais/localscore
| 2025-04-03T16:34:29 |
https://localscore.ai/
|
sipjca
|
localscore.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqn570
| false | null |
t3_1jqn570
|
/r/LocalLLaMA/comments/1jqn570/localscore_local_llm_benchmark/
| false | false |
default
| 26 | null |
Official Gemma 3 QAT checkpoints (3x less memory for ~same performance)
| 518 |
Hi all! We got new official checkpoints from the Gemma team.
Today we're releasing quantization-aware trained checkpoints. This allows you to use q4_0 while retaining much better quality compared to a naive quant. You can go and use this model with llama.cpp today!
We worked with the llama.cpp and Hugging Face teams to validate the quality and performance of the models, as well as ensuring we can use the model for vision input as well. Enjoy!
Models: [https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b](https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b)
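If you want to try them straight away, llama.cpp can load these GGUFs directly; a minimal sketch (the repo id and file name below are assumptions - copy the real ones from the collection):

```python
# Sketch: download one of the QAT q4_0 GGUFs and run it with llama-cli.
# The repo id and filename are assumptions - take the real ones from the
# Hugging Face collection linked above.
import subprocess
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="google/gemma-3-12b-it-qat-q4_0-gguf",   # assumed repo id
    filename="gemma-3-12b-it-q4_0.gguf",             # assumed file name
)

subprocess.run(
    ["./llama-cli", "-m", gguf_path, "-p", "Explain the KV cache in one paragraph."],
    check=True,
)
```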
| 2025-04-03T16:54:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jqnnfp/official_gemma_3_qat_checkpoints_3x_less_memory/
|
hackerllama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqnnfp
| false | null |
t3_1jqnnfp
|
/r/LocalLLaMA/comments/1jqnnfp/official_gemma_3_qat_checkpoints_3x_less_memory/
| false | false |
self
| 518 |
{'enabled': False, 'images': [{'id': 'Zl_4xsdRVBYVAYoSKCo_9lctBcVwVnWLwYR7Bz_8Peg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=108&crop=smart&auto=webp&s=042b37bd3c7281f22302eb98db38f13d1ccb8ad1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=216&crop=smart&auto=webp&s=ed9f2a763a8a5e8b91bf44415984c4a01d228442', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=320&crop=smart&auto=webp&s=3488d76e2ca0f912e88f039740f8023d77acf9b8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=640&crop=smart&auto=webp&s=76af7c6f7f705eee53c593494d93c7e7a4a2d821', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=960&crop=smart&auto=webp&s=ee365bb7a518bc6f3bf8e85f680d528673f63e4a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=1080&crop=smart&auto=webp&s=8f6004d64fa63205b34d0a2823fc154738056ecc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?auto=webp&s=bd0ff338f4b45f68d5e56c717c67b369298b248c', 'width': 1200}, 'variants': {}}]}
|
Google released Gemma 3 QAT, is this going to be better than Bartowski's stuff
| 118 | 2025-04-03T17:14:48 |
https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b
|
AryanEmbered
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jqo71p
| false | null |
t3_1jqo71p
|
/r/LocalLLaMA/comments/1jqo71p/google_released_gemma_3_qat_is_this_going_to_be/
| false | false | 118 |
{'enabled': False, 'images': [{'id': 'Zl_4xsdRVBYVAYoSKCo_9lctBcVwVnWLwYR7Bz_8Peg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=108&crop=smart&auto=webp&s=042b37bd3c7281f22302eb98db38f13d1ccb8ad1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=216&crop=smart&auto=webp&s=ed9f2a763a8a5e8b91bf44415984c4a01d228442', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=320&crop=smart&auto=webp&s=3488d76e2ca0f912e88f039740f8023d77acf9b8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=640&crop=smart&auto=webp&s=76af7c6f7f705eee53c593494d93c7e7a4a2d821', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=960&crop=smart&auto=webp&s=ee365bb7a518bc6f3bf8e85f680d528673f63e4a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?width=1080&crop=smart&auto=webp&s=8f6004d64fa63205b34d0a2823fc154738056ecc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JFjSsjeahuvIaxPX6j1v08RlgEsxBGES9Cq1BjvJRK0.jpg?auto=webp&s=bd0ff338f4b45f68d5e56c717c67b369298b248c', 'width': 1200}, 'variants': {}}]}
|