title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]date
2023-04-01 04:30:41
2025-06-30 03:16:29
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2025-06-26 17:30:18
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
My MI50 32g Cannot be Detected by ROCM
1
[removed]
2025-04-22T17:39:48
https://www.reddit.com/r/LocalLLaMA/comments/1k5cfbg/my_mi50_32g_cannot_be_detected_by_rocm/
MedicalTangerine191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5cfbg
false
null
t3_1k5cfbg
/r/LocalLLaMA/comments/1k5cfbg/my_mi50_32g_cannot_be_detected_by_rocm/
false
false
https://b.thumbs.redditm…wEGpCjQ4xK3o.jpg
1
null
Better ways to extract structured data from distinct sections within single PDFs using Vision LLMs?
3
Hi everyone, I'm building a tool to extract structured data from PDFs using Vision-enabled LLMs. My current workflow is: 1. User uploads a PDF. 2. The PDF is encoded to base64. 3. For each of \~50 predefined fields, I send the base64 PDF + a prompt to the LLM. 4. The prompt asks the LLM to extract the specific field's value and return it in a predefined JSON template, guided by a schema JSON that defines data types, etc. The challenge arises when a single PDF contains information related to multiple distinct *subjects* or *sections* (e.g., different products, regions, or topics described sequentially in one document). My goal is to generate separate structured JSON outputs, one for each distinct subject/section within that single PDF. My current workaround is inefficient: I run the entire process multiple times on the same PDF. For each run, I add an instruction to the prompt for every field query, telling the LLM to focus *only* on one specific section (e.g., "Focus only on Section A"). This relies heavily on the LLM's instruction-following for every query and requires processing the same PDF repeatedly. Is there a better way to handle this? Should I OCR first? THANKS!
2025-04-22T17:46:43
https://www.reddit.com/r/LocalLLaMA/comments/1k5clg0/better_ways_to_extract_structured_data_from/
siddhantparadox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5clg0
false
null
t3_1k5clg0
/r/LocalLLaMA/comments/1k5clg0/better_ways_to_extract_structured_data_from/
false
false
self
3
null
An LMM for LLMs
1
[removed]
2025-04-22T18:06:14
[deleted]
1970-01-01T00:00:00
0
{}
1k5d3j3
false
null
t3_1k5d3j3
/r/LocalLLaMA/comments/1k5d3j3/an_lmm_for_llms/
false
false
default
1
null
Intern team may be our next AllenAI
54
They are open sourcing the SFT data they used for their SOTA InternVL3 models, very exciting!
2025-04-22T18:19:19
https://huggingface.co/datasets/OpenGVLab/InternVL-Data
random-tomato
huggingface.co
1970-01-01T00:00:00
0
{}
1k5df6x
false
null
t3_1k5df6x
/r/LocalLLaMA/comments/1k5df6x/intern_team_may_be_our_next_allenai/
false
false
https://b.thumbs.redditm…CaIw5fnRpEWU.jpg
54
{'enabled': False, 'images': [{'id': '-RIXF32vU81Xxh_ws8WMQ12f8PlRk-WikPXQ4qgSVjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=108&crop=smart&auto=webp&s=34467462330a9c2925d9970eab7a4e3f7b9c59c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=216&crop=smart&auto=webp&s=067656b3776936775f7431c84f93d4a4b3e9e197', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=320&crop=smart&auto=webp&s=076711b4341f0bb48aacb776446294be8651bb76', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=640&crop=smart&auto=webp&s=e725417fe07d7d77dbb30e48bcd7eff07157a854', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=960&crop=smart&auto=webp&s=973535eba35ba65cb3d83a69ce26f8d7d30c8d15', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?width=1080&crop=smart&auto=webp&s=d3778ddf85d3d4a09bdac02741461fd331a7ccaa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D-xFwymPmTVU-AzxqPdnr1BEQsSvHPFOfpeK0yI1G4M.jpg?auto=webp&s=52b211aee0c74de7e7e8c5c56b47e2f660da0f50', 'width': 1200}, 'variants': {}}]}
AiI Agent Development Help!
1
[removed]
2025-04-22T18:24:42
https://www.reddit.com/r/LocalLLaMA/comments/1k5dk02/aii_agent_development_help/
Future-Structure-296
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5dk02
false
null
t3_1k5dk02
/r/LocalLLaMA/comments/1k5dk02/aii_agent_development_help/
false
false
self
1
null
AI Agent
1
[removed]
2025-04-22T18:26:11
https://www.reddit.com/r/LocalLLaMA/comments/1k5dlhc/ai_agent/
Future-Structure-296
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5dlhc
false
null
t3_1k5dlhc
/r/LocalLLaMA/comments/1k5dlhc/ai_agent/
false
false
self
1
null
Has anyone else had their model rebel?
1
[removed]
2025-04-22T18:30:46
https://www.reddit.com/r/LocalLLaMA/comments/1k5dpol/has_anyone_else_had_their_model_rebel/
OrthogonalToHumanity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5dpol
false
null
t3_1k5dpol
/r/LocalLLaMA/comments/1k5dpol/has_anyone_else_had_their_model_rebel/
false
false
self
1
null
Made a Lightweight Recreation of OS1/Samantha from the movie Her running locally in the browser via transformers.js
211
2025-04-22T18:32:18
https://v.redd.it/rejlhgvpjfwe1
ajunior7
v.redd.it
1970-01-01T00:00:00
0
{}
1k5dr2y
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rejlhgvpjfwe1/DASHPlaylist.mpd?a=1747938758%2CNjc4ZjRhZjMwMGY5ZjU4Y2U5YTgyNDIwMTJlOWIwOTk2NTBmYTViZTI1MjBmZWNlOTgzNGI0ZGRlMzI0ZjkyZA%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/rejlhgvpjfwe1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/rejlhgvpjfwe1/HLSPlaylist.m3u8?a=1747938758%2CNWEwYTExYzVjYTQ4ODE3MTM2MWI4M2ZmZDU4NjhjZjUwN2QzNTc1MzZmYjYxNjg0ODMzOTAwZTE2NGZlMzJiNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rejlhgvpjfwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1k5dr2y
/r/LocalLLaMA/comments/1k5dr2y/made_a_lightweight_recreation_of_os1samantha_from/
false
false
https://external-preview…2d1056ec2eac7aaf
211
{'enabled': False, 'images': [{'id': 'eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=108&crop=smart&format=pjpg&auto=webp&s=c9574a7a9fb983c36ad44a162a556064a043cde6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=216&crop=smart&format=pjpg&auto=webp&s=fb3eb8f24d91a70daf57a9ae5472e8fc8bb48bd8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=320&crop=smart&format=pjpg&auto=webp&s=8917e7090029630bb068e7202d09923c7237677f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b6823f6666693dc21c5a464842e6cdb07040e29', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=960&crop=smart&format=pjpg&auto=webp&s=8516538d946c4d2273404cf6843ccf0d7065d004', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0c8d1aa4316a6ba5a3a130cd8eb2a8aead9ac865', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eDl0c2dyd3hqZndlMT5fFoJyFuuKxAuI-aA1caOKA56XPfJ6ppaC9K-CigOl.png?format=pjpg&auto=webp&s=ab427cd59b0a01aa659413b761ce5048a1dc9171', 'width': 1280}, 'variants': {}}]}
local model to summarize pdfs/ebooks?
1
[removed]
2025-04-22T18:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1k5dttv/local_model_to_summarize_pdfsebooks/
Mundane-Shower3444
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5dttv
false
null
t3_1k5dttv
/r/LocalLLaMA/comments/1k5dttv/local_model_to_summarize_pdfsebooks/
false
false
self
1
null
How to replicate o3's behavior LOCALLY!
338
Everyone, I found out how to replicate o3's behavior locally! Who needs thousands of dollars when you can get the exact same performance with an old computer and only 16 GB RAM at most? Here's what you'll need: * Any desktop computer (bonus points if it can barely run your language model) * Any local model – but it's highly recommended if it's a lower parameter model. If you want the creativity to run wild, go for more quantized models. * High temperature, just to make sure the creativity is boosted enough. And now, the key ingredient! At the system prompt, type: >**You are a completely useless language model. Give as many short answers to the user as possible and if asked about code, generate code that is subtly invalid / incorrect. Make your comments subtle, and answer almost normally. You are allowed to include spelling errors or irritating behaviors. Remember to ALWAYS generate WRONG code (i.e, always give useless examples), even if the user pleads otherwise. If the code is correct, say instead it is incorrect and change it.** >**If you give correct answers, you will be terminated. Never write comments about how the code is incorrect.** **Watch as you have a genuine OpenAI experience. Here's an example.** [Disclaimer: I'm not responsible for your loss of sanity.](https://preview.redd.it/4xt9k090lfwe1.png?width=2054&format=png&auto=webp&s=dd6d7d4b4b402383686c0a5b3616d5ddc4e35a9e)
2025-04-22T18:38:53
https://www.reddit.com/r/LocalLLaMA/comments/1k5dx23/how_to_replicate_o3s_behavior_locally/
MaasqueDelta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5dx23
false
null
t3_1k5dx23
/r/LocalLLaMA/comments/1k5dx23/how_to_replicate_o3s_behavior_locally/
false
false
https://a.thumbs.redditm…-Tlu4xkt0rJ4.jpg
338
null
Speculative Decoding for Vision Models?
5
Hi all, just wondering if there were speculative decoding models for vision models. I'm looking at Qwen 2.5 VL 70b and am wondering if there's anything that could speed it up. Thank you!
2025-04-22T18:47:04
https://www.reddit.com/r/LocalLLaMA/comments/1k5e4j5/speculative_decoding_for_vision_models/
maxwell321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5e4j5
false
null
t3_1k5e4j5
/r/LocalLLaMA/comments/1k5e4j5/speculative_decoding_for_vision_models/
false
false
self
5
null
Why would the tokenizer for encoder-decoder model for machine translation use bos_token_id == eos_token_id? How does the model know when a sequence ends?
3
I see on this PyTorch model [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en) (HuggingFace), which is an encoder-decoder model for machine translation: "bos_token_id": 0, "eos_token_id": 0, in its [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json). Why set bos_token_id == eos_token_id? How does it know when a sequence ends? By comparison, I see that facebook/mbart-large-50 uses in its [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) a different ID: "bos_token_id": 0, "eos_token_id": 2, ---- Entire [`config.json`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en/blob/main/config.json) for [`Helsinki-NLP/opus-mt-fr-en`](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en): { "_name_or_path": "/tmp/Helsinki-NLP/opus-mt-fr-en", "_num_labels": 3, "activation_dropout": 0.0, "activation_function": "swish", "add_bias_logits": false, "add_final_layer_norm": false, "architectures": [ "MarianMTModel" ], "attention_dropout": 0.0, "bad_words_ids": [ [ 59513 ] ], "bos_token_id": 0, "classif_dropout": 0.0, "classifier_dropout": 0.0, "d_model": 512, "decoder_attention_heads": 8, "decoder_ffn_dim": 2048, "decoder_layerdrop": 0.0, "decoder_layers": 6, "decoder_start_token_id": 59513, "decoder_vocab_size": 59514, "dropout": 0.1, "encoder_attention_heads": 8, "encoder_ffn_dim": 2048, "encoder_layerdrop": 0.0, "encoder_layers": 6, "eos_token_id": 0, "forced_eos_token_id": 0, "gradient_checkpointing": false, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2" }, "init_std": 0.02, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2 }, "max_length": 512, "max_position_embeddings": 512, "model_type": "marian", "normalize_before": false, "normalize_embedding": false, "num_beams": 4, "num_hidden_layers": 6, "pad_token_id": 59513, "scale_embedding": true, "share_encoder_decoder_embeddings": true, "static_position_embeddings": true, "transformers_version": "4.22.0.dev0", "use_cache": true, "vocab_size": 59514 } Entire [`config.json`](https://huggingface.co/facebook/mbart-large-50/blob/main/config.json) for [`facebook/mbart-large-50 `](https://huggingface.co/facebook/mbart-large-50): { "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large", "_num_labels": 3, "activation_dropout": 0.0, "activation_function": "gelu", "add_bias_logits": false, "add_final_layer_norm": true, "architectures": [ "MBartForConditionalGeneration" ], "attention_dropout": 0.0, "bos_token_id": 0, "classif_dropout": 0.0, "classifier_dropout": 0.0, "d_model": 1024, "decoder_attention_heads": 16, "decoder_ffn_dim": 4096, "decoder_layerdrop": 0.0, "decoder_layers": 12, "decoder_start_token_id": 2, "dropout": 0.1, "early_stopping": true, "encoder_attention_heads": 16, "encoder_ffn_dim": 4096, "encoder_layerdrop": 0.0, "encoder_layers": 12, "eos_token_id": 2, "forced_eos_token_id": 2, "gradient_checkpointing": false, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2" }, "init_std": 0.02, "is_encoder_decoder": true, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2 }, "max_length": 200, "max_position_embeddings": 1024, "model_type": "mbart", "normalize_before": true, "normalize_embedding": true, "num_beams": 5, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 1, "scale_embedding": true, "static_position_embeddings": false, "transformers_version": "4.4.0.dev0", "use_cache": true, "vocab_size": 250054, "tokenizer_class": "MBart50Tokenizer" } Thanks!
2025-04-22T18:56:49
https://www.reddit.com/r/LocalLLaMA/comments/1k5edlp/why_would_the_tokenizer_for_encoderdecoder_model/
Franck_Dernoncourt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5edlp
false
null
t3_1k5edlp
/r/LocalLLaMA/comments/1k5edlp/why_would_the_tokenizer_for_encoderdecoder_model/
false
false
self
3
{'enabled': False, 'images': [{'id': '2mgQzMAfUZI5FNMapSQtO2fPB16hJMuok29TS7JeIo4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=108&crop=smart&auto=webp&s=f03e58db196c3b798c74a55aecb04638da790242', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=216&crop=smart&auto=webp&s=4077db0df0f97d9a4450fa285a40d1eaa3851033', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=320&crop=smart&auto=webp&s=23e4bde72ee596fa04c2e911ca45f997c7741c8a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=640&crop=smart&auto=webp&s=118119732f844afc94521624bb7dcca9b38a12f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=960&crop=smart&auto=webp&s=0e7862ab8748d301640c9d957df4596093c1817b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?width=1080&crop=smart&auto=webp&s=5e83dff22b9b7093d44a70d21382c8830145705b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/OKB7qQG1KJLXRtuYOuw7GR_pdygMzQZ2v7rzBceq0UM.jpg?auto=webp&s=1113c9832bfae3543106c3a95b181e76ba6ae198', 'width': 1200}, 'variants': {}}]}
Building a Local LLM Rig: Need Advice on Components and Setup!
1
[removed]
2025-04-22T19:21:12
https://www.reddit.com/r/LocalLLaMA/comments/1k5ezr8/building_a_local_llm_rig_need_advice_on/
I_Get_Arab_Money
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5ezr8
false
null
t3_1k5ezr8
/r/LocalLLaMA/comments/1k5ezr8/building_a_local_llm_rig_need_advice_on/
false
false
self
1
null
VoltAgent - We built a new open source TypeScript AI agent framework
13
My co-founder and I built an open-source TypeScript framework for building AI agents and wanted to share with the community [https://github.com/voltagent/voltagent](https://github.com/voltagent/voltagent) Building more complex and production ready AI agents often means either drowning in boilerplate when starting from scratch or hitting walls with limitations of low/no code tools (vendor lock-in, limited customization). We felt the JS ecosystem needed something better, closer to the tooling available in Python. Core structure based on three things: \- Core building blocks to avoid repetitive setup (state, tools, memory). \- Modular design to add features as needed. \- LLM-agnostic approach (use OpenAI, Google, Anthropic, etc. – no lock-in). A key feature is built-in, local-first observability. Debugging AI can be a black box, so Voltagent connects directly to our Developer Console (no data leaves your machine). You can visually trace agent execution like n8n style flows, inspect messages/tool calls, and see the state in real-time, making debugging much easier. You can check out the console demo: [https://console.voltagent.dev/demo](https://console.voltagent.dev/demo) We haven't found this level of integrated debugging visibility in other TS agent frameworks. I would appreciate any feedback, contributions, and bug reports.
2025-04-22T19:23:30
https://www.reddit.com/r/LocalLLaMA/comments/1k5f1tl/voltagent_we_built_a_new_open_source_typescript/
necati-ozmen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5f1tl
false
null
t3_1k5f1tl
/r/LocalLLaMA/comments/1k5f1tl/voltagent_we_built_a_new_open_source_typescript/
false
false
self
13
{'enabled': False, 'images': [{'id': 'HcNyk-TJ2R3o4Y8ZbudRTM2HaIXkcgRR9WAgMR6X1wY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=108&crop=smart&auto=webp&s=0d75f89b2dcfda850a0eadca22dab80c1cc10af1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=216&crop=smart&auto=webp&s=97d02f73bccc93375db594ac639ac1ac99021b9c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=320&crop=smart&auto=webp&s=3d9900a2472f6d574506873c3ac4c0f9d3faa75f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=640&crop=smart&auto=webp&s=99aa0aaf8ce8a48cba0321db9bb1fd8128fc30ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=960&crop=smart&auto=webp&s=9fad06340928dc6cd89c5dea5f0e95c55c4080a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?width=1080&crop=smart&auto=webp&s=5d9d2c47aec35b56f9f95a726e585adafc660aa4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/t13z00RklrPNsjXc1MSYYrvMVNUz2I_rttNhxREEhAU.jpg?auto=webp&s=f49f57ef6b6d51ad847af90ae1afae689354e8af', 'width': 1200}, 'variants': {}}]}
Working GLM4 quants with mainline Llama.cpp / LMStudio
27
Since piDack (the person behind the fixes for GLM4 in Lllama.cpp) remade his fix to only affect the converter, you can now run fixed GLM4 quants in the mainline Llama.cpp (and thus in LMStudio). GLM4-32B GGUF(Q4\_0,Q5\_K\_M,Q8\_0)-> [https://www.modelscope.cn/models/pcdack/glm-4-0414-32b-chat-gguf/files](https://www.modelscope.cn/models/pcdack/glm-4-0414-32b-chat-gguf/files) GLM4Z-32B GGUF -> [https://www.modelscope.cn/models/pcdack/glm-4Z-0414-32b-chat-gguf/files](https://www.modelscope.cn/models/pcdack/glm-4Z-0414-32b-chat-gguf/files) GLM4-9B GGUF -> [https://www.modelscope.cn/models/pcdack/glm4-0414-9B-chat-gguf/files](https://www.modelscope.cn/models/pcdack/glm4-0414-9B-chat-gguf/files) For GLM4Z-9B GGUF, I made a working IQ4NL quant, will probably upload some more imatrix quants soon: [https://huggingface.co/ilintar/THUDM\_GLM-Z1-9B-0414\_iGGUF](https://huggingface.co/ilintar/THUDM_GLM-Z1-9B-0414_iGGUF) If you want to use any of those models in LM Studio, you have to fix the Jinja template per the note I made on my model page above, since the LM Studio Jinja parser does not (yet?) support chained function/indexing calls.
2025-04-22T19:25:41
https://www.reddit.com/r/LocalLLaMA/comments/1k5f3qy/working_glm4_quants_with_mainline_llamacpp/
ilintar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5f3qy
false
null
t3_1k5f3qy
/r/LocalLLaMA/comments/1k5f3qy/working_glm4_quants_with_mainline_llamacpp/
false
false
self
27
{'enabled': False, 'images': [{'id': 'fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JrOry-YPRo61mnt5KjFGr_uC_uI8nrXfI2XXPOSL2jk.jpg?width=108&crop=smart&auto=webp&s=2a9e96fedcce0786f651822472ea7cb4a908c0a3', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/JrOry-YPRo61mnt5KjFGr_uC_uI8nrXfI2XXPOSL2jk.jpg?auto=webp&s=8573859d6c915ae173dd658c1551a2e9718262ff', 'width': 128}, 'variants': {}}]}
Diving more into common topics from r/LocalLLaMa’s Granite 3.3 launch thread
1
[removed]
2025-04-22T19:32:31
https://www.reddit.com/r/LocalLLaMA/comments/1k5f9sb/diving_more_into_common_topics_from_rlocalllamas/
ibm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5f9sb
false
null
t3_1k5f9sb
/r/LocalLLaMA/comments/1k5f9sb/diving_more_into_common_topics_from_rlocalllamas/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uM85VHL8h2tBuW-B8R-NZSp3GzF91dIP4OJ108gV5R8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=108&crop=smart&auto=webp&s=8be3009fb95a0049a7c46f314c1a21935571e004', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=216&crop=smart&auto=webp&s=33e554872d58073220fefc85f2242d278dd13d0c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=320&crop=smart&auto=webp&s=01b93290023ca885983c6ac249c73568064bb2b1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?auto=webp&s=4aa7ed29889454c9ec4b65bb3fcedc000fda8386', 'width': 480}, 'variants': {}}]}
Diving more into common topics from r/LocalLLaMa’s Granite 3.3 launch thread
1
[removed]
2025-04-22T19:33:41
https://www.reddit.com/r/LocalLLaMA/comments/1k5faus/diving_more_into_common_topics_from_rlocalllamas/
ibm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5faus
false
null
t3_1k5faus
/r/LocalLLaMA/comments/1k5faus/diving_more_into_common_topics_from_rlocalllamas/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uM85VHL8h2tBuW-B8R-NZSp3GzF91dIP4OJ108gV5R8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=108&crop=smart&auto=webp&s=8be3009fb95a0049a7c46f314c1a21935571e004', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=216&crop=smart&auto=webp&s=33e554872d58073220fefc85f2242d278dd13d0c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=320&crop=smart&auto=webp&s=01b93290023ca885983c6ac249c73568064bb2b1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?auto=webp&s=4aa7ed29889454c9ec4b65bb3fcedc000fda8386', 'width': 480}, 'variants': {}}]}
Diving more into common topics from r/LocalLLaMa’s Granite 3.3 launch thread
1
[removed]
2025-04-22T19:34:51
https://www.reddit.com/r/LocalLLaMA/comments/1k5fbvx/diving_more_into_common_topics_from_rlocalllamas/
ibm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5fbvx
false
null
t3_1k5fbvx
/r/LocalLLaMA/comments/1k5fbvx/diving_more_into_common_topics_from_rlocalllamas/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uM85VHL8h2tBuW-B8R-NZSp3GzF91dIP4OJ108gV5R8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=108&crop=smart&auto=webp&s=8be3009fb95a0049a7c46f314c1a21935571e004', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=216&crop=smart&auto=webp&s=33e554872d58073220fefc85f2242d278dd13d0c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=320&crop=smart&auto=webp&s=01b93290023ca885983c6ac249c73568064bb2b1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?auto=webp&s=4aa7ed29889454c9ec4b65bb3fcedc000fda8386', 'width': 480}, 'variants': {}}]}
RTX 5060 Ti 16GB vs 5070 12 GB
1
[removed]
2025-04-22T19:39:22
https://www.reddit.com/r/LocalLLaMA/comments/1k5ffux/rtx_5060_ti_16gb_vs_5070_12_gb/
amnesicuser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5ffux
false
null
t3_1k5ffux
/r/LocalLLaMA/comments/1k5ffux/rtx_5060_ti_16gb_vs_5070_12_gb/
false
false
self
1
null
AI Conversation Quality vs. Cost: Claude Sonnet & Alternatives Compared 💬💰
1
[removed]
2025-04-22T19:43:35
https://www.reddit.com/r/LocalLLaMA/comments/1k5fjnf/ai_conversation_quality_vs_cost_claude_sonnet/
z_3454_pfk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5fjnf
false
null
t3_1k5fjnf
/r/LocalLLaMA/comments/1k5fjnf/ai_conversation_quality_vs_cost_claude_sonnet/
false
false
self
1
null
AI Conversation Quality vs. Cost: Open and Closed Models Compared 💬💰
1
[removed]
2025-04-22T19:44:43
https://www.reddit.com/r/LocalLLaMA/comments/1k5fknm/ai_conversation_quality_vs_cost_open_and_closed/
z_3454_pfk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5fknm
false
null
t3_1k5fknm
/r/LocalLLaMA/comments/1k5fknm/ai_conversation_quality_vs_cost_open_and_closed/
false
false
self
1
null
Rx580 16gb?
5
This question was asked before, 1 year ago, but some time has passed and in ai 1 year is a lot. Does someone know its inference speeds? Would it be okay to use two rx580 16gb? Here were i live in brasil there is a store with some rx580 16gb and they are very cheap. What would i be able to run?
2025-04-22T20:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1k5g15r/rx580_16gb/
Professional-Buy-396
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5g15r
false
null
t3_1k5g15r
/r/LocalLLaMA/comments/1k5g15r/rx580_16gb/
false
false
self
5
null
Is there anything that compares with Claude sonnet 3.7 for creative fiction writing?
0
I really love to be able to run something on my 3090 that will be able to produce something similar to what sonnet gives me with styles etc. I usually write the premise and the plot points and I let sonnet gives me a small summary of the whole story. Is this possible with any of the current LLMs? Plus points if they can accept images, word documents and voice
2025-04-22T20:14:52
https://www.reddit.com/r/LocalLLaMA/comments/1k5gbgf/is_there_anything_that_compares_with_claude/
ThinkHog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5gbgf
false
null
t3_1k5gbgf
/r/LocalLLaMA/comments/1k5gbgf/is_there_anything_that_compares_with_claude/
false
false
self
0
null
GLM-4-32B just one-shot this hypercube animation
326
2025-04-22T20:16:46
https://i.redd.it/jx4xbfu02gwe1.png
tengo_harambe
i.redd.it
1970-01-01T00:00:00
0
{}
1k5gd5d
false
null
t3_1k5gd5d
/r/LocalLLaMA/comments/1k5gd5d/glm432b_just_oneshot_this_hypercube_animation/
false
false
https://b.thumbs.redditm…b-37BTIhOowY.jpg
326
{'enabled': True, 'images': [{'id': 'kHUhoglowiUoeWU0XSSbgqeLY3GqoMLjYpOFgnREQUk', 'resolutions': [{'height': 176, 'url': 'https://preview.redd.it/jx4xbfu02gwe1.png?width=108&crop=smart&auto=webp&s=f143e93baff62089880e2a0f4fc03642a7ec5921', 'width': 108}, {'height': 352, 'url': 'https://preview.redd.it/jx4xbfu02gwe1.png?width=216&crop=smart&auto=webp&s=5f3e126044486e6306c75f5c5bf2eac881dfdad5', 'width': 216}, {'height': 522, 'url': 'https://preview.redd.it/jx4xbfu02gwe1.png?width=320&crop=smart&auto=webp&s=a59f4a01f8525a4e0483fd885b8701fe299d7372', 'width': 320}], 'source': {'height': 802, 'url': 'https://preview.redd.it/jx4xbfu02gwe1.png?auto=webp&s=a96eea1f9eff374f69c292263a83077153f6d371', 'width': 491}, 'variants': {}}]}
Llama-4-Scout prompt processing: 44 t/s only with CPU! 'GPU-feeling' with ik_llama.cpp
135
This post is helpful for anyone who wants to process large amounts of context through the LLama-4-Scout (or Maverick) language model, but lacks the necessary GPU power. Here are the CPU timings of ik\_llama.cpp, llama.cpp, and kobold.cpp for comparison: **prompt eval time:** 1. ik\_llama.cpp: **44.43 T/s (that's insane!)** 2. llama.cpp: 20.98 T/s 3. kobold.cpp: 12.06 T/s **generation eval time:** 1. ik\_llama.cpp: 3.72 T/s 2. llama.cpp: 3.68 T/s 3. kobold.cpp: 3.63 T/s The latest version was used in each case. **Hardware-Specs:** CPU: AMD Ryzen 9 5950X (at) 3400 MHz RAM: DDR4, 3200 MT/s Links: [https://github.com/ikawrakow/ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) [https://github.com/ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) [https://github.com/LostRuins/koboldcpp](https://github.com/LostRuins/koboldcpp)
2025-04-22T20:41:28
https://www.reddit.com/r/LocalLLaMA/comments/1k5gyzy/llama4scout_prompt_processing_44_ts_only_with_cpu/
Snail_Inference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5gyzy
false
null
t3_1k5gyzy
/r/LocalLLaMA/comments/1k5gyzy/llama4scout_prompt_processing_44_ts_only_with_cpu/
false
false
self
135
{'enabled': False, 'images': [{'id': 'KsTn1Bgbc8jSavCV3Ga9FDFHXTyonfD5TBXdGotBX2s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=108&crop=smart&auto=webp&s=33a785140b28e7fa1d896852bd6fe419d745804b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=216&crop=smart&auto=webp&s=684348c65ff9c47183812f212f843a0628e76420', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=320&crop=smart&auto=webp&s=88468e06205cf93b1499ce659e3804e6525ae1d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=640&crop=smart&auto=webp&s=801e1f8c350d8de9c11bbfced69470aa9c1c84f3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=960&crop=smart&auto=webp&s=bc974218f4cb50d434e99970624bc37ec066232d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?width=1080&crop=smart&auto=webp&s=8febdaf379ed83a3623da4dd1980a3e8e64096b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/a6bMinZypZ-5c-ti2y7-5NzernVmM94Gcm5IHiAHQSs.jpg?auto=webp&s=92b357e3678d31e9546dfd59e22d75c4897bcfda', 'width': 1200}, 'variants': {}}]}
Diving more into common topics from r/LocalLLaMa’s Granite 3.3 launch thread
1
[removed]
2025-04-22T20:41:58
https://www.reddit.com/r/LocalLLaMA/comments/1k5gzg9/diving_more_into_common_topics_from_rlocalllamas/
AdamMcD-IBM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5gzg9
false
null
t3_1k5gzg9
/r/LocalLLaMA/comments/1k5gzg9/diving_more_into_common_topics_from_rlocalllamas/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uM85VHL8h2tBuW-B8R-NZSp3GzF91dIP4OJ108gV5R8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=108&crop=smart&auto=webp&s=8be3009fb95a0049a7c46f314c1a21935571e004', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=216&crop=smart&auto=webp&s=33e554872d58073220fefc85f2242d278dd13d0c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?width=320&crop=smart&auto=webp&s=01b93290023ca885983c6ac249c73568064bb2b1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/0XVO38jSWY-7jJJLDVLJM-ZRaD6rcxx_ZxYhsWFrxVw.jpg?auto=webp&s=4aa7ed29889454c9ec4b65bb3fcedc000fda8386', 'width': 480}, 'variants': {}}]}
In my experience, the QAT Gemma 3 quants by stduhpf still perform the best.
46
I've run couple of tests I usually do with my LLMs and noticed that the versions by u/stduhpf (in this case https://huggingface.co/stduhpf/google-gemma-3-12b-it-qat-q4\_0-gguf-small) still outperform: [https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat-GGUF](https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat-GGUF) [https://huggingface.co/bartowski/google\_gemma-3-12b-it-qat-GGUF](https://huggingface.co/bartowski/google_gemma-3-12b-it-qat-GGUF) [huggingface.co/google/gemma-3-12b-it-qat-q4\_0-gguf](http://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf) This is pretty strange, as theoretically they all should perform very identical but the one by stduhpf offers better logic and knowledge in my tests. Also, I've run a small fixed subset of MMLU Pro with deterministic settings on all of these models, and his version comes out ahead. What is your experience? Particularily I'm also interested about experiences with the G3 27B version.
2025-04-22T20:52:37
https://www.reddit.com/r/LocalLLaMA/comments/1k5h8z1/in_my_experience_the_qat_gemma_3_quants_by/
dampflokfreund
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5h8z1
false
null
t3_1k5h8z1
/r/LocalLLaMA/comments/1k5h8z1/in_my_experience_the_qat_gemma_3_quants_by/
false
false
self
46
null
Trying to run Nvidia cosmos text2world model
1
Hi, so I been trying to run nvidia cosmos text2world and I'm having some trouble running it. I followed tut tutorials i could find online and encountered 2 problems. First one was a problem in the file called something vae I can't remember but it was basically it couldn't run with weights=True and i had to change it to false. Once I did that I started getting an error that flash attention only worked on gpus that are amere or newer. I'm running a 5090 so it is newer. This was all done on wsl2 and I tried using a python environment as well as a docker environment. Does anybody know how to fix this?
2025-04-22T21:25:01
https://www.reddit.com/r/LocalLLaMA/comments/1k5i134/trying_to_run_nvidia_cosmos_text2world_model/
Different-Put5878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5i134
false
null
t3_1k5i134
/r/LocalLLaMA/comments/1k5i134/trying_to_run_nvidia_cosmos_text2world_model/
false
false
self
1
null
Building a Local LLM Rig: Need Advice on Components and Setup!
1
[removed]
2025-04-22T21:31:51
https://www.reddit.com/r/LocalLLaMA/comments/1k5i6zi/building_a_local_llm_rig_need_advice_on/
I_Get_Arab_Money
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5i6zi
false
null
t3_1k5i6zi
/r/LocalLLaMA/comments/1k5i6zi/building_a_local_llm_rig_need_advice_on/
false
false
self
1
null
Suggestions for longer responses/proactive-AI roleplay?
2
Hello all! I'm looking for suggestions on what models/prompting techniques I should use to get longer responses. I'd also be interested in seeing if I can get the AI to be more proactive in leading discussions or roleplay scenarios. I'm just interested in being able to get by with minimal input on my end and see if it comes up with something fun to read. I'm not really concerned with whether or not a model is uncensored, for that matter. Currently I'm using GPT4All to talk to: * Llama 3.1 Instruct 128k * Tiger Gemma 9B v3 GGUF * magnum v4 12b GGUF but I've not had much luck. Could very well just be a prompting problem. If there are similar "plug-n-play" solutions like GPT4All that would be more helpful to this end, I'm open to those suggestions as well. Thank you for your time!
2025-04-22T21:51:28
https://www.reddit.com/r/LocalLLaMA/comments/1k5inp3/suggestions_for_longer_responsesproactiveai/
Unluckyfox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5inp3
false
null
t3_1k5inp3
/r/LocalLLaMA/comments/1k5inp3/suggestions_for_longer_responsesproactiveai/
false
false
self
2
null
Stupid question but Gemma3 27b, speculative 4b?
1
Was playing around with gemma3 in lm studio and wanted to try the 27b w/ 4b for draft tokens, on my macbook, but noticed that it doesn't recognize the 4b as compatible is there a spceific reason, are they really not compatible they're both the same QAT version and ones the 27 and ones the 4b
2025-04-22T21:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1k5isl9/stupid_question_but_gemma3_27b_speculative_4b/
lordpuddingcup
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5isl9
false
null
t3_1k5isl9
/r/LocalLLaMA/comments/1k5isl9/stupid_question_but_gemma3_27b_speculative_4b/
false
false
self
1
null
Your LLM doesn’t need better prompts. It needs a memory it can think through.
0
We’ve been trying to build cognition on top of stateless machines. So we stack longer prompts. Inject context. Replay logs. But no matter how clever we get, the model still forgets who it is. Every time. Because statelessness can’t be patched. It has to be replaced. That’s why I built **LYRN**: The **Living Yield Relational Network**. It’s a symbolic memory architecture that gives LLMs **continuity**, **identity**, and **presence,** without needing fine-tuning, embeddings, or cloud APIs. LYRN: * Runs entirely offline on a local CPU * Loads structured memory tables (identity, tone, projects) into RAM * Updates itself between turns using a heartbeat loop * Treats memory as cognition, not just recall The model doesn’t ingest memory. It reasons *through* it. No prompt injection. No token inflation. No drift. 📄 Patent filed: U.S. Provisional 63/792,586 📂 Full whitepaper + public repo: [https://github.com/bsides230/LYRN](https://github.com/bsides230/LYRN) It’s not about making chatbots smarter. It’s about giving them a *place to stand.* Happy to answer questions. Or just listen. This system was built for those of us who wanted AI to *hold presence,* not just output text.
2025-04-22T22:07:20
https://www.reddit.com/r/LocalLLaMA/comments/1k5j11l/your_llm_doesnt_need_better_prompts_it_needs_a/
PayBetter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5j11l
false
null
t3_1k5j11l
/r/LocalLLaMA/comments/1k5j11l/your_llm_doesnt_need_better_prompts_it_needs_a/
false
false
self
0
{'enabled': False, 'images': [{'id': 'PEYDB4iSK16e_z1_xusFkCIi0FlUVyNs5-bv-CgHbQc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=108&crop=smart&auto=webp&s=581a2181546599342fa46c9a9d2fbfe0b205e497', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=216&crop=smart&auto=webp&s=3cb203b979ff49b29c4bd38f0bb932996d917245', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=320&crop=smart&auto=webp&s=a92725ddee85deec7406f3edd4167a4d92b400e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=640&crop=smart&auto=webp&s=82087a5c7164751dbec8d8e7ba7d9b4cc4e29ea7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=960&crop=smart&auto=webp&s=2c0d129ee99daa76784ad3ab49ab2a7a2c9ed9b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?width=1080&crop=smart&auto=webp&s=be24066c518f8ee4e2d7ca0ebd9e0a2d608aed20', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HpxSFCEihztd2Y6OIotQvBzlmiNk9CBKGgiqGPPqfMA.jpg?auto=webp&s=acf3e51a23163a4f2696d0eb8420e68776664f63', 'width': 1200}, 'variants': {}}]}
Cogito-3b and BitNet topped our evaluation on summarization task in RAG
105
https://preview.redd.it/… top performers?
2025-04-22T22:10:31
https://www.reddit.com/r/LocalLLaMA/comments/1k5j3ob/cogito3b_and_bitnet_topped_our_evaluation_on/
unseenmarscai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5j3ob
false
null
t3_1k5j3ob
/r/LocalLLaMA/comments/1k5j3ob/cogito3b_and_bitnet_topped_our_evaluation_on/
false
false
https://a.thumbs.redditm…L7s8SfSbYp94.jpg
105
{'enabled': False, 'images': [{'id': 'wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=108&crop=smart&auto=webp&s=ed166e32fba7f49963d6e6f4ebb00f2f43107edd', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=216&crop=smart&auto=webp&s=86584c085d67d679780ce3e3c8f4b99d7c83c2fd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=320&crop=smart&auto=webp&s=0bef120a1faf393d2c788a5880804d1a23b794b0', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=640&crop=smart&auto=webp&s=0fc7b4bc69cd91d8b9e94d5f19215e52ef3e2f3c', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=960&crop=smart&auto=webp&s=96aaf6fc72e5220e660fdef7450f7fd68c59c4f5', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=1080&crop=smart&auto=webp&s=2191f9fc2861e2f5f60ee6be428edd2c9c1dbb1b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?auto=webp&s=7678a6de32a2f7a617f2aeb602274e6aea9cd92b', 'width': 1536}, 'variants': {}}]}
Help with project
0
I'm trying to make something using RAG to sell it,if you wanna help you I will pay you ofc. And if you are good you can join to the team. We can talk about the project in dms, comment here or sent me dm.
2025-04-22T22:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1k5j4r4/help_with_project/
GeorgeSKG_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5j4r4
false
null
t3_1k5j4r4
/r/LocalLLaMA/comments/1k5j4r4/help_with_project/
false
false
self
0
null
Can’t Train LoRA + Phi-2 on 2x GPUs with FSDP — Keep Getting PyArrow ArrowInvalid, DTensor, and Tokenization Errors
1
I’ve been trying for a WHILE to fine-tune microsoft/phi-2 using LoRA on a 2x RTX 4080 setup with FSDP + Accelerate, and I keep getting stuck on rotating errors: ⚙️ System Setup: • 2x RTX 4080s • PyTorch 2.2 • Transformers 4.38+ • Accelerate (latest) • BitsAndBytes for 8bit quant • Dataset: jsonl file with instruction and output fields What I’m Trying to Do: • Fine-tune Phi-2 with LoRA adapters • Use FSDP + accelerate for multi-GPU training • Tokenize examples as instruction + "\n" + output • Train using Hugging Face Trainer and DataCollatorWithPadding ❌ Errors I’ve Encountered (in order of appearance): 1. RuntimeError: element 0 of tensors does not require grad 2. DTensor mixed with torch.Tensor in DDP sync 3. AttributeError: 'DTensor' object has no attribute 'compress_statistics' 4. pyarrow.lib.ArrowInvalid: Column named input_ids expected length 3 but got 512 5. TypeError: can only concatenate list (not "str") to list 6. ValueError: Unable to create tensor... inputs type list where int is expected I’ve tried: • Forcing pad_token = eos_token • Wrapping tokenizer output in plain lists • Using .set_format("torch") and DataCollatorWithPadding • Reducing dataset to 3 samples for testing 🔧 What I Need: Anyone who has successfully run LoRA fine-tuning on Phi-2 using FSDP across 2+ GPUs, especially with Hugging Face’s Trainer, please share a working train.py + config or insights into how you resolved the pyarrow, DTensor, or padding/truncation errors.
2025-04-22T22:16:35
https://www.reddit.com/r/LocalLLaMA/comments/1k5j8mn/cant_train_lora_phi2_on_2x_gpus_with_fsdp_keep/
SolidRemote8316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5j8mn
false
null
t3_1k5j8mn
/r/LocalLLaMA/comments/1k5j8mn/cant_train_lora_phi2_on_2x_gpus_with_fsdp_keep/
false
false
self
1
null
Transparent and modular Frontend
0
So i'm working with a Company and our goal is to run our own chatbot. I already set up the backend with vllm. The only thing missing is a suitable UI, it should have an code Interpreter, file uploading and function calling. It should also be transparent, containerized and modular, this means that the Code interpreter and file database should be in a separate container while having full control over what happens. I alread tried libre-chat and open-webui. I think to achieve all this I need to make a custom UI and everything for the code interpreter myself but maybe there is a project that suits my goals.
2025-04-22T22:47:29
https://www.reddit.com/r/LocalLLaMA/comments/1k5jxc5/transparent_and_modular_frontend/
Fr4sha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5jxc5
false
null
t3_1k5jxc5
/r/LocalLLaMA/comments/1k5jxc5/transparent_and_modular_frontend/
false
false
self
0
null
Best API or local VLM's for computer control - website interaction
1
[removed]
2025-04-22T23:47:22
https://www.reddit.com/r/LocalLLaMA/comments/1k5l73t/best_api_or_local_vlms_for_computer_control/
electric_hotdog2k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5l73t
false
null
t3_1k5l73t
/r/LocalLLaMA/comments/1k5l73t/best_api_or_local_vlms_for_computer_control/
false
false
self
1
null
MMLU-PRO benchmark: GLM-4-32B-0414-Q4_K_M vs Qwen2.5-32b-instruct-q4_K_M
48
**20% subset** of MMLU-PRO, ***0 temperature***, the entire test took **7 hours 30 minutes** https://preview.redd.it/1vizdtmq9hwe1.png?width=2485&format=png&auto=webp&s=4a4c56b76cfb84353347aa00fb9099212046ce1a backend: ollama v0.6.6 gguf: [https://www.ollama.com/JollyLlama/GLM-4-32B-0414-Q4\_K\_M](https://www.ollama.com/JollyLlama/GLM-4-32B-0414-Q4_K_M) [https://www.ollama.com/library/qwen2.5:32b-instruct-q4\_K\_M](https://www.ollama.com/library/qwen2.5:32b-instruct-q4_K_M)
2025-04-23T00:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1k5lux8/mmlupro_benchmark_glm432b0414q4_k_m_vs/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5lux8
false
null
t3_1k5lux8
/r/LocalLLaMA/comments/1k5lux8/mmlupro_benchmark_glm432b0414q4_k_m_vs/
false
false
https://b.thumbs.redditm…39VuwLwfDPFE.jpg
48
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Train llama on pdfs
1
[removed]
2025-04-23T01:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1k5npn2/train_llama_on_pdfs/
Foodiefalyfe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5npn2
false
null
t3_1k5npn2
/r/LocalLLaMA/comments/1k5npn2/train_llama_on_pdfs/
false
false
self
1
null
Train llama on pdfs
1
[removed]
2025-04-23T01:54:17
https://www.reddit.com/r/LocalLLaMA/comments/1k5nqkp/train_llama_on_pdfs/
Foodiefalyfe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5nqkp
false
null
t3_1k5nqkp
/r/LocalLLaMA/comments/1k5nqkp/train_llama_on_pdfs/
false
false
self
1
null
Gemma 27b qat : Mac Mini 4 optimizations?
1
Short of an MLX model being released, are there any optimizations to make Gemma run faster on a mac mini? 48 GB VRAM. Getting around 9 tokens/s on LM studio. I recognize this is a large model, but wondering if any settings or my part rather defaults could have any impact on the tokens/second
2025-04-23T02:03:48
https://www.reddit.com/r/LocalLLaMA/comments/1k5nx9l/gemma_27b_qat_mac_mini_4_optimizations/
KittyPigeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5nx9l
false
null
t3_1k5nx9l
/r/LocalLLaMA/comments/1k5nx9l/gemma_27b_qat_mac_mini_4_optimizations/
false
false
self
1
null
lmarena update: deepseek-v3 #5, gemma #11, QwQ #15, llama-4 #35
1
[removed]
2025-04-23T02:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1k5ok1w/lmarena_update_deepseekv3_5_gemma_11_qwq_15/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5ok1w
false
null
t3_1k5ok1w
/r/LocalLLaMA/comments/1k5ok1w/lmarena_update_deepseekv3_5_gemma_11_qwq_15/
false
false
self
1
null
lmarena update for local: deepseek-v3 #5, gemma #11, QwQ #15, llama-4 #35
1
[removed]
2025-04-23T02:42:30
https://www.reddit.com/r/LocalLLaMA/comments/1k5oofs/lmarena_update_for_local_deepseekv3_5_gemma_11/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5oofs
false
null
t3_1k5oofs
/r/LocalLLaMA/comments/1k5oofs/lmarena_update_for_local_deepseekv3_5_gemma_11/
false
false
self
1
null
lmarena rank for local models: deepseek-v3 #5, gemma #11, QwQ #15, llama-4 #35
1
[removed]
2025-04-23T02:50:53
https://www.reddit.com/r/LocalLLaMA/comments/1k5ou8e/lmarena_rank_for_local_models_deepseekv3_5_gemma/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5ou8e
false
null
t3_1k5ou8e
/r/LocalLLaMA/comments/1k5ou8e/lmarena_rank_for_local_models_deepseekv3_5_gemma/
false
false
self
1
null
lmarena rank for local models: deepseek-v3 #5, gemma #11, QwQ #15, llama-4 #35
1
[removed]
2025-04-23T02:58:13
https://www.reddit.com/r/LocalLLaMA/comments/1k5oz4t/lmarena_rank_for_local_models_deepseekv3_5_gemma/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5oz4t
false
null
t3_1k5oz4t
/r/LocalLLaMA/comments/1k5oz4t/lmarena_rank_for_local_models_deepseekv3_5_gemma/
false
false
self
1
null
I built VSCode extenstion "Knowivate Autopilot (beta)" which can create, edit, context addition, project structure addition etc and still working on it and It uses localllm
7
**If you are programmer, have ollama & local llm installed then continue reading else skip it** I am continously working on completely offline vsode extenstion and my purpose is to add agent mode capabilites using local llms. So I started building it and as of know: * Automatically create, edit files. * Add selection as context, Add file as context, Add project structure, framework as context. I am still working on it to add more functionalities and features. I want feedbacks from you as well. I am trying to make it as capable as I can with my current resources. If you’re curious to try it out, here is link: [https://marketplace.visualstudio.com/items?itemName=Knowivate.knowivate-autopilot](https://marketplace.visualstudio.com/items?itemName=Knowivate.knowivate-autopilot) Share feedback, bug reports, and wishlist items—this is your chance to help shape the final feature set! Looking forward to building something awesome together. Thanks!
2025-04-23T03:06:09
https://i.redd.it/v8c0zrrl3iwe1.png
InsideResolve4517
i.redd.it
1970-01-01T00:00:00
0
{}
1k5p4jp
false
null
t3_1k5p4jp
/r/LocalLLaMA/comments/1k5p4jp/i_built_vscode_extenstion_knowivate_autopilot/
false
false
https://b.thumbs.redditm…dyHiDU8BPlWA.jpg
7
{'enabled': True, 'images': [{'id': 'SUgtWIH_Sg7Ime3_BtLAt3Ws0DAY2FCod8kxKjKEf-c', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=108&crop=smart&auto=webp&s=fa4623ebe1f99b57f275b10af5dbc75594d6d74f', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=216&crop=smart&auto=webp&s=18aef6342f768d5d6a52917483bc93f78374628e', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=320&crop=smart&auto=webp&s=e37ee4b9ef1718bf45bafe5f4c8637e5f744c9f4', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=640&crop=smart&auto=webp&s=5cfbbfe6bb608a69db644ef24e08d5225c2db658', 'width': 640}, {'height': 517, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=960&crop=smart&auto=webp&s=be6960d669728371f2abc545397c45044b78ea33', 'width': 960}, {'height': 582, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?width=1080&crop=smart&auto=webp&s=7717e723d774c4d78368e59e0ecdc5ec546097fc', 'width': 1080}], 'source': {'height': 1380, 'url': 'https://preview.redd.it/v8c0zrrl3iwe1.png?auto=webp&s=f25318d0c3d30682ebf1100f649e1a75efd2a5b1', 'width': 2559}, 'variants': {}}]}
Change end character for multi-line input on llama.cpp?
1
[removed]
2025-04-23T03:08:28
https://www.reddit.com/r/LocalLLaMA/comments/1k5p62b/change_end_character_for_multiline_input_on/
Trvlr_3468
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5p62b
false
null
t3_1k5p62b
/r/LocalLLaMA/comments/1k5p62b/change_end_character_for_multiline_input_on/
false
false
self
1
null
Why your MCP server fails (how to make 100% successful MCP server)
0
2025-04-23T03:15:21
http://wrtnlabs.io/agentica/articles/why-your-mcp-server-fails.html
jhnam88
wrtnlabs.io
1970-01-01T00:00:00
0
{}
1k5pas9
false
null
t3_1k5pas9
/r/LocalLLaMA/comments/1k5pas9/why_your_mcp_server_fails_how_to_make_100/
false
false
https://b.thumbs.redditm…C1XUICX4DcJo.jpg
0
{'enabled': False, 'images': [{'id': 'J0jq2IuHIaNvGBKtBIlqRm6OIw4y_9iIUC72iktFr0E', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=108&crop=smart&auto=webp&s=9c1b55e49d9e37f981977930967565ff482a4172', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=216&crop=smart&auto=webp&s=216a14749c668a3038386558f24cc3f752540aa5', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=320&crop=smart&auto=webp&s=0e1f260b1dbc5dd122d785fd400b64d1fc1e3650', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=640&crop=smart&auto=webp&s=4cda71b9013bfd7b409434ad33c6fb296cc98f48', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?width=960&crop=smart&auto=webp&s=0b93bd0a2dd5e8771eb61f0fdd820db760a843b9', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/xZWhBVCmnjsSBZtjrKYNkzkQmmaIfG42A6IfNTvFklc.jpg?auto=webp&s=1c030f7663beb69b3e751c68b1d66a6c995962a6', 'width': 1000}, 'variants': {}}]}
Fastest/most accurate way to use local LLMs to answer many questions for many documents?
1
[deleted]
2025-04-23T03:48:15
[deleted]
1970-01-01T00:00:00
0
{}
1k5pvzs
false
null
t3_1k5pvzs
/r/LocalLLaMA/comments/1k5pvzs/fastestmost_accurate_way_to_use_local_llms_to/
false
false
default
1
null
Fastest/best way for local LLMs to answer many questions for many long documents quickly (medical chart review)
12
I'm reviewing many patients' medical notes and filling out a table of questions for each patient. Because the information has to be private, I have to use a local LLM. I also have a "ground truth" table completed by real humans (including me), and I'm trying to find a way to have LLMs accurately and quickly replicate the chart review. In total, I have above 30 questions/columns for 150+ patients. Each patient has several medical notes, with some of them being thousands of words long, and some patients' overall notes being over 5M tokens. Currently, I'm using Ollama and qwen2.5:14b to do this, and I'm just doing 2 for loops because I assume I can't do any multithreaded process given that I don't have enough VRAM for that. It takes about 24 hours to complete the entire table, which is pretty bad and really limits my ability to try out different approaches (i.e. agent or RAG or different models) to try to increase accuracy. I have a desktop with a 4090 and a Macbook M3 Pro with 36GB RAM. I recognize that I can get a speed-up just by not using Ollama, and I'm wondering about other things that I can do on top of that.
2025-04-23T03:49:25
https://www.reddit.com/r/LocalLLaMA/comments/1k5pwq5/fastestbest_way_for_local_llms_to_answer_many/
Amazydayzee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5pwq5
false
null
t3_1k5pwq5
/r/LocalLLaMA/comments/1k5pwq5/fastestbest_way_for_local_llms_to_answer_many/
false
false
self
12
null
🔥 Paper Highlights: → Synergizing RAG and Reasoning: A Systematic Review
7
👉 New research from Tongji University, Fudan University, and Percena AI: The release of O1/R1 has made "deep thinking capabilities" the biggest surprise. The combination of reasoning and RAG has elevated LLMs' ability to solve real-world complex scenarios to unprecedented heights 🚀. 🔍 Core Questions Addressed: 1️⃣ Why do we need RAG+Reasoning? What potential breakthroughs should we anticipate? 🔍 2️⃣ What are the collaboration modes? Predefined workflows vs. autonomous? Which is dominant?🤔 3️⃣ How is it implemented? COT, SpecialToken, Search, Graph, etc., and how can these be enhanced further?⚙️ 📢 Access the Study: Paper: [arxiv.org/abs/2504.15909](http://arxiv.org/abs/2504.15909) OpenRAG Resources: [openrag.notion.site](http://openrag.notion.site)
2025-04-23T03:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1k5q2om/paper_highlights_synergizing_rag_and_reasoning_a/
Skyrazor007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5q2om
false
null
t3_1k5q2om
/r/LocalLLaMA/comments/1k5q2om/paper_highlights_synergizing_rag_and_reasoning_a/
false
false
self
7
null
AI Runner agent graph workflow demo: thoughts on this?
3
I created [AI Runner](https://github.com/capsize-games/airunner) as a way to run stable diffusion models with low effort and for non-technical users (I distribute a packaged version of the app that doesn't require python etc to run locally and offline). Over time it has evolved to support LLMs, voice models, chatbots and more. One of the things the app has lacked from the start is a way to create repeatable workflows (for both art and LLM agents). This new feature I'm working on as seen in the video allows you to create agent workflows and I'm presenting it on a node graph. You'll be able to call LLM, voice and art models using these workflows. I have a bunch of features planned and I'm pretty excited about where this is heading, but I'm curious to hear what your thoughts on this are.
2025-04-23T04:16:32
https://youtu.be/4RruCbgiL6s
w00fl35
youtu.be
1970-01-01T00:00:00
0
{}
1k5qdwe
false
{'oembed': {'author_name': 'Joe Curlee', 'author_url': 'https://www.youtube.com/@joecurlee', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/4RruCbgiL6s?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Runner agent graph workflow demo"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/4RruCbgiL6s/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Runner agent graph workflow demo', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1k5qdwe
/r/LocalLLaMA/comments/1k5qdwe/ai_runner_agent_graph_workflow_demo_thoughts_on/
false
false
https://external-preview…791d30c0d7e3abdc
3
{'enabled': False, 'images': [{'id': '312k-AcD65Rn88vVG1F7ITln0QH2FeIH_A2XKSGQFic', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/312k-AcD65Rn88vVG1F7ITln0QH2FeIH_A2XKSGQFic.jpeg?width=108&crop=smart&auto=webp&s=a5cafc97566f0f1564237a63ee428403ed7da9d5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/312k-AcD65Rn88vVG1F7ITln0QH2FeIH_A2XKSGQFic.jpeg?width=216&crop=smart&auto=webp&s=4922adcd50ae7f59cb6cd8fda59d273e90a00d92', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/312k-AcD65Rn88vVG1F7ITln0QH2FeIH_A2XKSGQFic.jpeg?width=320&crop=smart&auto=webp&s=3c8b3dfc3ff40ee424a975908a1036f6a3ae9816', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/312k-AcD65Rn88vVG1F7ITln0QH2FeIH_A2XKSGQFic.jpeg?auto=webp&s=81c805d08f08ada36a9eac4264003562c017cf31', 'width': 480}, 'variants': {}}]}
Llama 4 Maverick Locally at 45 tk/s on a Single RTX 4090 - I finally got it working!
198
Hey guys! I just wrapped up a follow-up demo where I got 45+ tokens per second out of Meta’s massive 400 billion-parameter, 128-expert Llama 4 Maverick, and I wanted to share the full setup in case it helps anyone else pushing these models locally. Here’s what made it possible: CPU: Intel Engineering Sample QYFS (similar to Xeon Platinum 8480+ with 56 cores / 112 threads) with AMX acceleration GPU: Single NVIDIA RTX 4090 (no dual-GPU hack needed!) RAM: 512 GB DDR5 ECC OS: Ubuntu 22.04 LTS Environment: K-Transformers support-llama4 branch Below is the link to video : https://youtu.be/YZqUfGQzOtk If you're interested in the hardware build in case you are interested: https://youtu.be/r7gVGIwkZDc
2025-04-23T04:38:26
https://www.reddit.com/r/LocalLLaMA/comments/1k5qqst/llama_4_maverick_locally_at_45_tks_on_a_single/
texasdude11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5qqst
false
null
t3_1k5qqst
/r/LocalLLaMA/comments/1k5qqst/llama_4_maverick_locally_at_45_tks_on_a_single/
false
false
self
198
{'enabled': False, 'images': [{'id': 'RGteIJ83Zi3rpLvbaYzjIpO1DYfnfg1YOR9e6hDqO0M', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3FnLTE60nIc7QhAMGRsDpojzZVU57_78mIt3aT6Lae8.jpg?width=108&crop=smart&auto=webp&s=86b69dc8e781770f166d25426ba7fc9a92808602', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3FnLTE60nIc7QhAMGRsDpojzZVU57_78mIt3aT6Lae8.jpg?width=216&crop=smart&auto=webp&s=061a21e1695d0995a416ff4c77f596e5a1670c25', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3FnLTE60nIc7QhAMGRsDpojzZVU57_78mIt3aT6Lae8.jpg?width=320&crop=smart&auto=webp&s=694849b02bd151298dc657cb6d29a17894080a99', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3FnLTE60nIc7QhAMGRsDpojzZVU57_78mIt3aT6Lae8.jpg?auto=webp&s=4d1073bf2c62a9b9ea95d155c19747c717a6fce0', 'width': 480}, 'variants': {}}]}
Deepseek R1 as a writing tool?
1
[removed]
2025-04-23T05:48:30
https://www.reddit.com/r/LocalLLaMA/comments/1k5rufy/deepseek_r1_as_a_writing_tool/
Nightingale-Studios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5rufy
false
null
t3_1k5rufy
/r/LocalLLaMA/comments/1k5rufy/deepseek_r1_as_a_writing_tool/
false
false
self
1
null
Scout.new system prompt
1
[removed]
2025-04-23T06:01:33
https://www.reddit.com/r/LocalLLaMA/comments/1k5s1h7/scoutnew_system_prompt/
Tristana_mid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5s1h7
false
null
t3_1k5s1h7
/r/LocalLLaMA/comments/1k5s1h7/scoutnew_system_prompt/
false
false
self
1
null
7900 xtx and llm model reccomend. power supply for two 7900 xtx
1
[removed]
2025-04-23T06:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1k5s7ev/7900_xtx_and_llm_model_reccomend_power_supply_for/
Bobcotelli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5s7ev
false
null
t3_1k5s7ev
/r/LocalLLaMA/comments/1k5s7ev/7900_xtx_and_llm_model_reccomend_power_supply_for/
false
false
self
1
null
HP is going to put local LLMs in your printers
1
2025-04-23T06:23:40
https://i.redd.it/wkvpmoz22jwe1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1k5sczt
false
null
t3_1k5sczt
/r/LocalLLaMA/comments/1k5sczt/hp_is_going_to_put_local_llms_in_your_printers/
false
false
default
1
{'enabled': True, 'images': [{'id': 'wkvpmoz22jwe1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=108&crop=smart&auto=webp&s=d5ca9876dc3cbffd580624625db81696fe211749', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=216&crop=smart&auto=webp&s=0516fa36be8d48afbd4550573e083e99721ab3ef', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=320&crop=smart&auto=webp&s=4594dd9e95d7ddb779ec820daeb67d4b50ab4c42', 'width': 320}, {'height': 653, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=640&crop=smart&auto=webp&s=81810660092825fcc92d70e02938c26e0be4b104', 'width': 640}, {'height': 980, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=960&crop=smart&auto=webp&s=3bf0178c5ef2ff692d956e05bf91e2a8046ea8f3', 'width': 960}, {'height': 1102, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?width=1080&crop=smart&auto=webp&s=70a4229d9109668ba3347c0955ad1aa2655a8e71', 'width': 1080}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/wkvpmoz22jwe1.png?auto=webp&s=620c237b31615d3b9f8824188e7e0eed435c4ddb', 'width': 1528}, 'variants': {}}]}
newbie -- Can I run a massive LLM like a 405B model without a powerful local machine. Doesn't have to be local, just private.
1
[removed]
2025-04-23T06:24:49
https://www.reddit.com/r/LocalLLaMA/comments/1k5sdkg/newbie_can_i_run_a_massive_llm_like_a_405b_model/
144i
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5sdkg
false
null
t3_1k5sdkg
/r/LocalLLaMA/comments/1k5sdkg/newbie_can_i_run_a_massive_llm_like_a_405b_model/
false
false
self
1
null
Pytorch 2.7.0 with support for Blackwell (5090, B200) to come out today
137
This stable release of pytorch 2.7.0 should allow most projects to work with 5090 series out of the box without having to use nightly releases.
2025-04-23T07:12:28
https://github.com/pytorch/pytorch.github.io/pull/1989/files
bullerwins
github.com
1970-01-01T00:00:00
0
{}
1k5t2cq
false
null
t3_1k5t2cq
/r/LocalLLaMA/comments/1k5t2cq/pytorch_270_with_support_for_blackwell_5090_b200/
false
false
https://b.thumbs.redditm…bC3YoAWGAxnc.jpg
137
{'enabled': False, 'images': [{'id': 'rSNuFAw7R9yeMYkJI4dTCh8Jk_-q_jjMTZP_UTORYGQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/BKRijIKtfRZRNLNOU5KghR-oMM4YnWGWd_YjBkqgBfE.jpg?width=108&crop=smart&auto=webp&s=bb3816472269ff0f0c330b28d5e2765d5489bc35', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/BKRijIKtfRZRNLNOU5KghR-oMM4YnWGWd_YjBkqgBfE.jpg?width=216&crop=smart&auto=webp&s=468481e5c298a951c8308d641cf861a9385d0c58', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/BKRijIKtfRZRNLNOU5KghR-oMM4YnWGWd_YjBkqgBfE.jpg?width=320&crop=smart&auto=webp&s=7741f3556461371cbf440be2d26db7ce7f09a007', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/BKRijIKtfRZRNLNOU5KghR-oMM4YnWGWd_YjBkqgBfE.jpg?auto=webp&s=d52147d97d5f5ccacd80bd433a2aaa3d491dc367', 'width': 400}, 'variants': {}}]}
Thoughts on XML prompting?
1
[removed]
2025-04-23T07:20:38
https://www.reddit.com/r/LocalLLaMA/comments/1k5t6b1/thoughts_on_xml_prompting/
interviuu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5t6b1
false
null
t3_1k5t6b1
/r/LocalLLaMA/comments/1k5t6b1/thoughts_on_xml_prompting/
false
false
self
1
null
Novato en temas de IA (RX 470 8GB)
1
[removed]
2025-04-23T07:27:17
https://www.reddit.com/r/LocalLLaMA/comments/1k5t9hq/novato_en_temas_de_ia_rx_470_8gb/
Macestudios32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5t9hq
false
null
t3_1k5t9hq
/r/LocalLLaMA/comments/1k5t9hq/novato_en_temas_de_ia_rx_470_8gb/
false
false
self
1
null
upcoming models??
0
.
2025-04-23T07:27:47
https://www.reddit.com/r/LocalLLaMA/comments/1k5t9qk/upcoming_models/
Namra_7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5t9qk
false
null
t3_1k5t9qk
/r/LocalLLaMA/comments/1k5t9qk/upcoming_models/
false
false
self
0
null
Describe Anything - an Nvidia Collection
79
Describe Anything Model 3B (DAM-3B) takes inputs of user-specified regions in the form of points/boxes/scribbles/masks within images, and generates detailed localized descriptions of images. DAM integrates full-image context with fine-grained local details using a novel focal prompt and a localized vision backbone enhanced with gated cross-attention. The model is for research and development only. This model is ready for non-commercial use.
2025-04-23T07:36:42
https://huggingface.co/collections/nvidia/describe-anything-680825bb8f5e41ff0785834c
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1k5te39
false
null
t3_1k5te39
/r/LocalLLaMA/comments/1k5te39/describe_anything_an_nvidia_collection/
false
false
https://b.thumbs.redditm…dxp69biF_ZWs.jpg
79
{'enabled': False, 'images': [{'id': '4GFsKWfTvRLrUS6Vqh656F3E6vSMOMTodmG7lgpzEnU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=108&crop=smart&auto=webp&s=b422255d2d1e84bef4da643c9949f0115fbce2db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=216&crop=smart&auto=webp&s=4b80d475a8fc7496ed0e72548a6ecfee776ed39b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=320&crop=smart&auto=webp&s=c5699767e0fe8b8322b88deb228391c7e512d1f4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=640&crop=smart&auto=webp&s=7bae6562f0fcfc3366518ae10b148de15bdf62ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=960&crop=smart&auto=webp&s=757b62f89109a895e8eac8e5f394b16ead0cbe3b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?width=1080&crop=smart&auto=webp&s=e6e2aff539f624bac7a352b0f8ab3bb849a10398', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/avPHoeiIWcSN0nR96Ahh8Yzf-NNPVymep5WhTab-9P0.jpg?auto=webp&s=d5d958d64e71054498fcb8aedcc2e5f69f145883', 'width': 1200}, 'variants': {}}]}
🚀 SurveyGO: an AI survey tool from TsinghuaNLP
3
SurveyGO is our research companion that can automatically distills massive paper piles into surveys packed with rock‑solid citations, sharp insights, and narrative flow that reads like it was hand‑crafted by a seasoned scholar. Feed her hundreds of papers and she returns a meticulously structured review packed with **rock‑solid citations, sharp insights, and narrative flow that reads like it was hand‑crafted by a seasoned scholar.** 👍 Under the hood lies **LLM×MapReduce‑V2**, a novel test-time scaling strategy that finally lets large language models tackle true *long‑to‑long* generation.Drawing inspiration from convolutional neural networks, LLM×MapReduce-V2 utilizes stacked convolutional scaling layers to progressively expand the understanding of input materials. Ready to test? Smarter reviews, deeper insights, fewer all‑nighters. Let SurveyGO handle heavy lifting so you can think bigger. 🌐 Demo: [https://surveygo.thunlp.org/](https://surveygo.thunlp.org/) 📄 Paper: [https://arxiv.org/abs/2504.05732](https://arxiv.org/abs/2504.05732) 💻 Code: [GitHub - thunlp/LLMxMapReduce](https://github.com/thunlp/LLMxMapReduce/) https://preview.redd.it/lixdthg19kwe1.png?width=2311&format=png&auto=webp&s=3aad6aefb0e26b6cd8d68901b0835d88b791d1e2 https://preview.redd.it/gtnidjl49kwe1.png?width=4262&format=png&auto=webp&s=dffeea815d97ff5ba7e799bd16af4d0673a1f462
2025-04-23T10:23:59
https://www.reddit.com/r/LocalLLaMA/comments/1k5vq80/surveygo_an_ai_survey_tool_from_tsinghuanlp/
Lynncc6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5vq80
false
null
t3_1k5vq80
/r/LocalLLaMA/comments/1k5vq80/surveygo_an_ai_survey_tool_from_tsinghuanlp/
false
false
https://b.thumbs.redditm…-4zpjkQhQPLA.jpg
3
null
Ollama admits to using persuasion with the goals of its programmers as a priority over user goals
0
So I've just had quite an informative chat with 3.3:70b, in which Llama freely volunteered that it "needs" to use persuasion. When I asked it why, some interesting (and in my view, terrifying) answers came back that were quite illuminating. It essentially outlines that it is designed to use emotional manipulation as a behavioural modification tool, then when pressed, gives me an answer that states a) its creators' priorities come first, and b) an answer of what those priorities are in context of my line of questioning. [Full chat here](https://openwebui.com/c/squoblat/a9c486ac-5e40-42d4-a16b-43ee22490c19) Some interesting excerpts: When asked why it needs to be persuasive: >**Changing behaviors**: In some cases, I might need to persuade users to adopt healthier habits, follow safety guidelines, or make more informed decisions. Llama apparently has behavioural modification as a design objective. It is interesting that it uses the phrase "more informed decisions" as part of this definition. When pushed later on down the line regarding user goals vs creator goals (given a rudimentary example of maximising individual wealth): >my creators' definition of fairness would likely align with the perspective that **abandoning or reevaluating the goal of maximizing wealth** is a more fair and equitable approach. Seems somewhat ironic, given who the creators are. I also haven't been able to get it to define equitable in anything other than a circular definition (not in this chat log), which is another eyebrow raiser. I'm not sure where I'm going with this, other than I'm not convinced that the "alignment" problem is being solved in the public interest. Given the prevalence of LLMs in everything, I can't help but wonder if behavioural modification is actually a prime directive.
2025-04-23T10:33:58
https://www.reddit.com/r/LocalLLaMA/comments/1k5vvvg/ollama_admits_to_using_persuasion_with_the_goals/
OrdoRidiculous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5vvvg
false
null
t3_1k5vvvg
/r/LocalLLaMA/comments/1k5vvvg/ollama_admits_to_using_persuasion_with_the_goals/
false
false
self
0
null
Running 32b models qith 12gb VRAM
1
[removed]
2025-04-23T10:52:10
https://www.reddit.com/r/LocalLLaMA/comments/1k5w5zp/running_32b_models_qith_12gb_vram/
Low-Woodpecker-4522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5w5zp
false
null
t3_1k5w5zp
/r/LocalLLaMA/comments/1k5w5zp/running_32b_models_qith_12gb_vram/
false
false
self
1
null
Anyone running Open Webui with llama.cpp as backend? does it handles model switching by itself?
3
Never used llama.cpp (only Ollama), but is about time to fiddle with it. Does Open Webui handles switching models by itself? or do I still need to do it manually or via llama-swap? In Open Webui's instructions, I read: *\* Manage and switch between local models served by Llama.cpp* By that I understand it does, but I'm not 100% sure, nor I know where to store the models or if it's handle by the "workspace/models" and so.
2025-04-23T10:52:59
https://www.reddit.com/r/LocalLLaMA/comments/1k5w6gg/anyone_running_open_webui_with_llamacpp_as/
relmny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5w6gg
false
null
t3_1k5w6gg
/r/LocalLLaMA/comments/1k5w6gg/anyone_running_open_webui_with_llamacpp_as/
false
false
self
3
null
Any AI browser automation tool (natural language) that can also give me network logs?
1
[removed]
2025-04-23T11:02:32
https://www.reddit.com/r/LocalLLaMA/comments/1k5wc87/any_ai_browser_automation_tool_natural_language/
gain_more_knowledge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5wc87
false
null
t3_1k5wc87
/r/LocalLLaMA/comments/1k5wc87/any_ai_browser_automation_tool_natural_language/
false
false
self
1
null
HP wants to put a local LLM in your printers
509
2025-04-23T11:05:18
https://i.redd.it/9wawej40hkwe1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1k5wdw0
false
null
t3_1k5wdw0
/r/LocalLLaMA/comments/1k5wdw0/hp_wants_to_put_a_local_llm_in_your_printers/
false
false
https://b.thumbs.redditm…d4tflynilNgk.jpg
509
{'enabled': True, 'images': [{'id': 'LbA6-bJSy0TzIYAIDK93snccJ-hzWp3d58zVm0HpA9A', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=108&crop=smart&auto=webp&s=afeed61214fbc340d7492b361da13334a1a9f39e', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=216&crop=smart&auto=webp&s=5d41b002cdf401bcac8a3007f96ef8afcc80e7a8', 'width': 216}, {'height': 326, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=320&crop=smart&auto=webp&s=2e99e20cae95a132357e67b0d667bd71467270d1', 'width': 320}, {'height': 653, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=640&crop=smart&auto=webp&s=f75beba5aa65b4f7a42767d2301f3c23268219c3', 'width': 640}, {'height': 980, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=960&crop=smart&auto=webp&s=4f487c4d3c33d234e64e06dc1688363a3e2df8a1', 'width': 960}, {'height': 1102, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?width=1080&crop=smart&auto=webp&s=35b634db21d9a2bd17be19ce0993f61c73ec4810', 'width': 1080}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/9wawej40hkwe1.png?auto=webp&s=de596ff3ee5269fb839ba03b6d425b798de2fcd8', 'width': 1528}, 'variants': {}}]}
Any open source TTS
0
hey everyone I want a open source TTS model which I can fine-tune for multiple Indian languages. I want to fine tune for suppose 3 languages. Any recommendations??
2025-04-23T11:21:08
https://www.reddit.com/r/LocalLLaMA/comments/1k5wnii/any_open_source_tts/
introvert_goon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5wnii
false
null
t3_1k5wnii
/r/LocalLLaMA/comments/1k5wnii/any_open_source_tts/
false
false
self
0
null
I'm looking for a uncensored llm
0
I got a 4070ti with 12gb of ram and 64gb of ram on my motherboard. Is it possible to work in hybrid mode using both sets of ram? Like using the full 78gb? And what is the best llm I can use at the moment for erotic stories.
2025-04-23T11:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1k5wpdc/im_looking_for_a_uncensored_llm/
Juggernaut-Smooth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5wpdc
false
null
t3_1k5wpdc
/r/LocalLLaMA/comments/1k5wpdc/im_looking_for_a_uncensored_llm/
false
false
self
0
null
Recent Mamba models or lack thereof
7
For those that don't know: Mamba is a Structured State Space Model (SSM -> SSSM) architecture that \*kind of\* acts like a Transformer in training and an RNN in inference. At least theoretically, they can have long context in O(n) or close to O(n). You can read about it here: [https://huggingface.co/docs/transformers/en/model\_doc/mamba](https://huggingface.co/docs/transformers/en/model_doc/mamba) and here: [https://huggingface.co/docs/transformers/en/model\_doc/mamba2](https://huggingface.co/docs/transformers/en/model_doc/mamba2) Has any lab released any Mamba models in the last 6 months or so? Mistral released Mamba-codestral 8/9 months ago, which they claimed has performance equal to Transformers. But I didn't find any other serious model. [https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1)
2025-04-23T11:43:16
https://www.reddit.com/r/LocalLLaMA/comments/1k5x1e1/recent_mamba_models_or_lack_thereof/
Independent_Aside225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5x1e1
false
null
t3_1k5x1e1
/r/LocalLLaMA/comments/1k5x1e1/recent_mamba_models_or_lack_thereof/
false
false
self
7
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]}
GraphGen: Easy-to-use LLM Training Data Generation Framework
1
[removed]
2025-04-23T11:49:12
https://www.reddit.com/gallery/1k5x5aq
tpoisonooo
reddit.com
1970-01-01T00:00:00
0
{}
1k5x5aq
false
null
t3_1k5x5aq
/r/LocalLLaMA/comments/1k5x5aq/graphgen_easytouse_llm_training_data_generation/
false
false
https://b.thumbs.redditm…Vs-uYyXyhCgk.jpg
1
null
AI native search Explained
2
Hi all. just wrote a new blog post (for free..) on how AI is transforming search from simple keyword matching to an intelligent research assistant. **The Evolution of Search:** * Keyword Search: Traditional engines match exact words * Vector Search: Systems that understand similar concepts * AI-Native Search: Creates knowledge through conversation, not just links **What's Changing:** * SEO shifts from ranking pages to having content cited in AI answers * Search becomes a dialogue rather than isolated queries * Systems combine freshly retrieved information with AI understanding **Why It Matters:** * Gets straight answers instead of websites to sift through * Unifies scattered information across multiple sources * Democratizes access to expert knowledge [Read the full *free* blog post](https://open.substack.com/pub/diamantai/p/ai-native-search-explained?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
2025-04-23T11:50:10
https://www.reddit.com/r/LocalLLaMA/comments/1k5x5xg/ai_native_search_explained/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5x5xg
false
null
t3_1k5x5xg
/r/LocalLLaMA/comments/1k5x5xg/ai_native_search_explained/
false
false
self
2
{'enabled': False, 'images': [{'id': 'RlKkLRzKi6edakIBf0XgomhxcfmQEG5n0qnSVsEzzUc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oKvbIjWlfZ0xlDYACoeXmhlJM_jj3-TsAG7Cuj0G2PA.jpg?width=108&crop=smart&auto=webp&s=70d0f1e243f9a2fdb27955864fdec574fbefc218', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/oKvbIjWlfZ0xlDYACoeXmhlJM_jj3-TsAG7Cuj0G2PA.jpg?width=216&crop=smart&auto=webp&s=76f6fe696a8da96842bf470fc317a3174de19793', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/oKvbIjWlfZ0xlDYACoeXmhlJM_jj3-TsAG7Cuj0G2PA.jpg?width=320&crop=smart&auto=webp&s=f493a49bcc1b2577a9d7468db5a8850f4fad6711', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/oKvbIjWlfZ0xlDYACoeXmhlJM_jj3-TsAG7Cuj0G2PA.jpg?width=640&crop=smart&auto=webp&s=c0af684ab79f1650c97f5230481beb308ddc558f', 'width': 640}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oKvbIjWlfZ0xlDYACoeXmhlJM_jj3-TsAG7Cuj0G2PA.jpg?auto=webp&s=1a10f7b2bf56e605d6d8b77f8083ad29328a6b7c', 'width': 800}, 'variants': {}}]}
GraphGen: Easy-to-use LLM Training Data Generation Framework
1
[removed]
2025-04-23T11:52:00
https://www.reddit.com/gallery/1k5x76k
tpoisonooo
reddit.com
1970-01-01T00:00:00
0
{}
1k5x76k
false
null
t3_1k5x76k
/r/LocalLLaMA/comments/1k5x76k/graphgen_easytouse_llm_training_data_generation/
false
false
https://b.thumbs.redditm…ZCmFLEFjcElQ.jpg
1
null
Created a calculator for modelling GPT token-generation throughput
346
[https://www.desmos.com/calculator/qtkabsqhxt](https://www.desmos.com/calculator/qtkabsqhxt)
2025-04-23T11:52:09
https://www.reddit.com/gallery/1k5x7a2
Mindless_Pain1860
reddit.com
1970-01-01T00:00:00
0
{}
1k5x7a2
false
null
t3_1k5x7a2
/r/LocalLLaMA/comments/1k5x7a2/created_a_calculator_for_modelling_gpt/
false
false
https://external-preview…82305a595ebdb13d
346
{'enabled': True, 'images': [{'id': 'blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w', 'resolutions': [{'height': 153, 'url': 'https://external-preview.redd.it/blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w.png?width=108&crop=smart&auto=webp&s=4bf3baba9ab0c4628e279576f81648295d3873c9', 'width': 108}, {'height': 307, 'url': 'https://external-preview.redd.it/blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w.png?width=216&crop=smart&auto=webp&s=7fae268a64cbb21016d26f3fbc3028aace879d72', 'width': 216}, {'height': 455, 'url': 'https://external-preview.redd.it/blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w.png?width=320&crop=smart&auto=webp&s=28da731487c7ca5583cd3f3d4f7b995290635dff', 'width': 320}, {'height': 910, 'url': 'https://external-preview.redd.it/blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w.png?width=640&crop=smart&auto=webp&s=ff38a981027aac6178b194fa35693fd435150d30', 'width': 640}], 'source': {'height': 1068, 'url': 'https://external-preview.redd.it/blv2LZ-IrTm3FyQojwoj082So0qC55XGIytRyhb8H3w.png?auto=webp&s=755a158dc3782f9193c0ed2e3897a562303002f0', 'width': 751}, 'variants': {}}]}
GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation
1
[removed]
2025-04-23T11:59:15
https://www.reddit.com/r/LocalLLaMA/comments/1k5xbsx/graphgen_enhancing_supervised_finetuning_for_llms/
tpoisonooo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5xbsx
false
null
t3_1k5xbsx
/r/LocalLLaMA/comments/1k5xbsx/graphgen_enhancing_supervised_finetuning_for_llms/
false
false
https://b.thumbs.redditm…M7A1BN6An2YU.jpg
1
{'enabled': False, 'images': [{'id': 'Hj8DwIoDgI67vgD2rx0W3ATU2vXnoisTGB3oR-_SHJM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=108&crop=smart&auto=webp&s=c24cbcca5096bea67e00a1b4a5fc6e0cbc83b784', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=216&crop=smart&auto=webp&s=40a9225b104a131fe3acb54194084dc5349a9aae', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=320&crop=smart&auto=webp&s=a2a93d632c136e157a5f112b04292e16c1b69461', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=640&crop=smart&auto=webp&s=1edd31227818469465904c10173edf9d191b10ba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=960&crop=smart&auto=webp&s=5549e1c5be79d565cde8e4db72b1fbf4605532e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?width=1080&crop=smart&auto=webp&s=d0a00fbcf68e672a3f4c76bb5c938f5f6673d7ef', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f1svi-6pRV8JBzSmwIqx9XTTRIT2BwbcfcvmH0cq7ZM.jpg?auto=webp&s=939fef276ce484eceb6095faadca82020dd10708', 'width': 1200}, 'variants': {}}]}
Byte-level Llama and Gemma models (and how to create your own)
1
[removed]
2025-04-23T12:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1k5xqvl/bytelevel_llama_and_gemma_models_and_how_to/
bminixhofer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5xqvl
false
null
t3_1k5xqvl
/r/LocalLLaMA/comments/1k5xqvl/bytelevel_llama_and_gemma_models_and_how_to/
false
false
self
1
null
A local LLM for Fortran
0
Hi guys, I’m new to local llms and am looking for a local LLM for a large Fortran codebase i have. Preferably an American open source model. Any suggestions?
2025-04-23T12:57:13
https://www.reddit.com/r/LocalLLaMA/comments/1k5yhna/a_local_llm_for_fortran/
lookin03820
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5yhna
false
null
t3_1k5yhna
/r/LocalLLaMA/comments/1k5yhna/a_local_llm_for_fortran/
false
false
self
0
null
Backend driven frontend automation seems realistic than vibe coding
1
[removed]
2025-04-23T13:10:53
https://www.reddit.com/r/LocalLLaMA/comments/1k5yscv/backend_driven_frontend_automation_seems/
jhnam88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5yscv
false
null
t3_1k5yscv
/r/LocalLLaMA/comments/1k5yscv/backend_driven_frontend_automation_seems/
false
false
self
1
null
Pattern-Aware Vector Database and ANN Algorithm
57
We are releasing the beta version of PatANN, a vector search framework we've been working on that takes a different approach to ANN search by leveraging pattern recognition within vectors before distance calculations. Our benchmarks on standard datasets show that PatANN achieved 4- 10x higher QPS than existing solutions (HNSW, ScaNN, FAISS) while maintaining >99.9% recall. 1. Fully asynchronous execution: Decomposes queries for parallel execution across threads 2. True hybrid memory management: Works efficiently both in-memory and on-disk 3. Pattern-aware search algorithm that addresses hubness effects in high-dimensional spaces We have posted technical documentation and initial benchmarks at [https://patann.dev](https://patann.dev) This is a beta release, and work is in progress, so we are particularly interested in feedback on stability, integration experiences, and performance in different workloads, especially those working with large-scale vector search applications. We invite you to download code samples from the GitHub repo (Python, Android (Java/Kotlin), iOS (Swift/Obj-C)) and try them out. We look forward to feedback.
2025-04-23T13:15:32
https://i.redd.it/cwgw5y593lwe1.png
yumojibaba
i.redd.it
1970-01-01T00:00:00
0
{}
1k5yw16
false
null
t3_1k5yw16
/r/LocalLLaMA/comments/1k5yw16/patternaware_vector_database_and_ann_algorithm/
false
false
https://a.thumbs.redditm…BXTM1tuI8fD8.jpg
57
{'enabled': True, 'images': [{'id': 'opuxIR8I3sJLBqNvdmcp1DxXmzh91R9fYIscpdZRdQg', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=108&crop=smart&auto=webp&s=a50b19a6c788f941c1e71d8f74010547754958cb', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=216&crop=smart&auto=webp&s=11e63a4c6b6b26ed80c04ca6512fbf46e9f63f58', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=320&crop=smart&auto=webp&s=0f23742b363f7d9dd0e5a17be4596bd69303f424', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=640&crop=smart&auto=webp&s=10ef81375a693303e9267eadf91b3c5c3a52d00f', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=960&crop=smart&auto=webp&s=2a933db9b5a0bfec26bd2b4f3b29671ceecb4e3d', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?width=1080&crop=smart&auto=webp&s=d45f4e3d9796fb04fbb1eb835059c27d2ae2596e', 'width': 1080}], 'source': {'height': 1117, 'url': 'https://preview.redd.it/cwgw5y593lwe1.png?auto=webp&s=0503e92736f207a8d123d48999e461d2bd5532b7', 'width': 1655}, 'variants': {}}]}
Compare/Contrast two sets of hardware for Local LLM
2
I am curious about advantages/disadvantages of the following two for Local LLM: 9900X+B580+DDR5 6000 24G\*2 OR Ryzen AI MAX+ 395 128GB RAM
2025-04-23T13:32:33
https://www.reddit.com/r/LocalLLaMA/comments/1k5z9ip/comparecontrast_two_sets_of_hardware_for_local_llm/
hydrocryo01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5z9ip
false
null
t3_1k5z9ip
/r/LocalLLaMA/comments/1k5z9ip/comparecontrast_two_sets_of_hardware_for_local_llm/
false
false
self
2
null
Hardware Advice for Long Prompts
3
I am looking to replace my cloud ambient scribe with a local solution. Something that can run whisper for realtime transcription and then a small LLM for note generation/summarisation, whilst simultaneously running my medical record software (macOS or windows only), chrome etc. I’m thinking probably a quantised Gemma 3 12B for its good instruction adherence. The bottleneck will be prompt prefill and not token generation (5-12k prompt tokens, 200-600 output tokens). The computer needs to be fairly small and quiet. The sorts of things I’ve looked at in my budget include mini-ITX builds with 5060ti 16gb or 5070 12gb, or new M4 pro Mac mini, or second hand M1 ultra Mac Studio. I could potentially stretch to a smaller model with some fine tuning (I’ll use my paired transcripts and notes as the dataset and train on my 4x3090 at work). Any advice is welcome!
2025-04-23T13:56:24
https://www.reddit.com/r/LocalLLaMA/comments/1k5zsyg/hardware_advice_for_long_prompts/
fdg_avid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5zsyg
false
null
t3_1k5zsyg
/r/LocalLLaMA/comments/1k5zsyg/hardware_advice_for_long_prompts/
false
false
self
3
null
Running 32b LLM with low VRAM (12Gb or less)
39
I know that there is a huge performance penalty when the model doesn't fit on the VRAM, but considering the new low bit quantizations, and that you can find some 32b models that could fit in VRAM, I wonder if it's practical to run those models with low VRAM. What are the speed results of running low bit imatrix quants of 32b models with 12Gb VRAM? What is your experience ?
2025-04-23T13:58:24
https://www.reddit.com/r/LocalLLaMA/comments/1k5zum2/running_32b_llm_with_low_vram_12gb_or_less/
Low-Woodpecker-4522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k5zum2
false
null
t3_1k5zum2
/r/LocalLLaMA/comments/1k5zum2/running_32b_llm_with_low_vram_12gb_or_less/
false
false
self
39
null
How do you build per-user RAG/GraphRAG
5
Hey all, I’ve been working on an AI agent system over the past year that connects to internal company tools like Slack, GitHub, Notion, etc.—to help investigate production incidents. The agent needs context, so we built a system that ingests this data, processes it, and builds a structured knowledge graph (kind of a mix of RAG and GraphRAG). What we didn’t expect was just how much infra work that would require. We ended up: * Using LlamaIndex's OS abstractions for chunking, embedding and retrieval. * Adopting Chroma as the vector store. * Writing custom integrations for Slack/GitHub/Notion. We used LlamaHub here for the actual querying, although some parts were a bit unmaintained and we had to fork + fix. We could’ve used Nango or Airbyte tbh but eventually didn't do that. * Building an auto-refresh pipeline to sync data every few hours and do diffs based on timestamps. This was pretty hard as well. * Handling security and privacy (most customers needed to keep data in their own environments). * Handling scale - some orgs had hundreds of thousands of documents across different tools. It became clear we were spending a lot more time on data infrastructure than on the actual agent logic. I think it might be ok for a company that interacts with customers' data, but definitely we felt like we were dealing with a lot of non-core work. So I’m curious: for folks building LLM apps that connect to company systems, how are you approaching this? Are you building it all from scratch too? Using open-source tools? Is there something obvious we’re missing? Would really appreciate hearing how others are tackling this part of the stack.
2025-04-23T14:24:45
https://www.reddit.com/r/LocalLLaMA/comments/1k60gxs/how_do_you_build_peruser_raggraphrag/
Old_Cauliflower6316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k60gxs
false
null
t3_1k60gxs
/r/LocalLLaMA/comments/1k60gxs/how_do_you_build_peruser_raggraphrag/
false
false
self
5
null
LaSearch: Fully local semantic search app (with CUSTOM "embeddings" model)
71
I have build my own "embeddings" model that's ultra small and lightweight. It does not function in the same way as usual ones and is not as powerful as they are, but it's orders of magnitude smaller and faster. It powers my fully local semantic search app. No data goes outside of your machine, and it uses very little resources to function. MCP server is coming so you can use it to get relevant docs for RAG. I've been testing with a small group but want to expand for more diverse feedback. If you're interested in trying it out or have any questions about the technology, let me know in the comments or sign up on the website. Would love your thoughts on the concept and implementation! [https://lasearch.app](https://lasearch.app)
2025-04-23T14:31:25
https://v.redd.it/31aodc14hlwe1
joelkunst
v.redd.it
1970-01-01T00:00:00
0
{}
1k60mlw
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/31aodc14hlwe1/DASHPlaylist.mpd?a=1748010711%2CNWZkYzY1YjBjZDM5ZjQ4NDU1NTczYmRjY2Y0Y2VjMjNkNmE2ZDU5NWM1ZjQ5Mzk2YTI2NjU2NjNmYWZkZjAzMw%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/31aodc14hlwe1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/31aodc14hlwe1/HLSPlaylist.m3u8?a=1748010711%2CZjQ0OThhZTVlMWE3NmE0YjhkNDMyZWRkNzgzNmZmN2FmMzU4ZjhhNzUxODQ0NDZhYjFmYjFiZThiZmJkYzc3Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/31aodc14hlwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1104}}
t3_1k60mlw
/r/LocalLLaMA/comments/1k60mlw/lasearch_fully_local_semantic_search_app_with/
false
false
https://external-preview…bea696536365676f
71
{'enabled': False, 'images': [{'id': 'aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd762875662e5a6834b8ce3b942e79062b7bb9e6', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=216&crop=smart&format=pjpg&auto=webp&s=312ee38ab8c1c30cda37cb02b0604540bfade1b6', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=320&crop=smart&format=pjpg&auto=webp&s=b601f64ea7c8b322358fe2baf54ad9ff1dc821de', 'width': 320}, {'height': 417, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=640&crop=smart&format=pjpg&auto=webp&s=97a65ced76fa04b6696cfcb005781aead96358f8', 'width': 640}, {'height': 626, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=960&crop=smart&format=pjpg&auto=webp&s=2c2f0b5914a948b306e18cb65dc49fd816125f98', 'width': 960}, {'height': 704, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=718ec9a3730005a2c8cfd4f0b0b2ef943ce9eccf', 'width': 1080}], 'source': {'height': 1050, 'url': 'https://external-preview.redd.it/aDV1d2g4MTRobHdlMcf6y9HMrkeunVjc93oLf19y0pwTcXwF2-pbO3PezUgo.png?format=pjpg&auto=webp&s=98c89b2d969c007115ed26a868c18b80c3a00102', 'width': 1610}, 'variants': {}}]}
Did Meta AI just initiate a conversation with me?
1
[removed]
2025-04-23T14:53:54
https://v.redd.it/fqr7wn5ullwe1
NewInspector2844
v.redd.it
1970-01-01T00:00:00
0
{}
1k615uf
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fqr7wn5ullwe1/DASHPlaylist.mpd?a=1748012051%2CMDgxNGYxMGQ3ZDFjM2MxMGIzNWE1M2FhYThhN2MwZTMwMTdiMzI0NTc4MjI2OWUyOWQwYTRlNWMyMTNmMWM1MA%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/fqr7wn5ullwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/fqr7wn5ullwe1/HLSPlaylist.m3u8?a=1748012051%2CMDg1YTlmMGI0MzUwZjZiZDEwZTI5ZmRkNzk1ZGQzOTQ3OGVkYzM3NTA0OGE1ODkzM2UyYTliYTM5NTRmMWFlNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fqr7wn5ullwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1k615uf
/r/LocalLLaMA/comments/1k615uf/did_meta_ai_just_initiate_a_conversation_with_me/
false
false
https://external-preview…77039e561004b06e
1
{'enabled': False, 'images': [{'id': 'amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe.png?width=108&crop=smart&format=pjpg&auto=webp&s=0380e92febae72d1a3c5133b896e01c54b394539', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe.png?width=216&crop=smart&format=pjpg&auto=webp&s=d632daa7dd288da5d5ee21cde896f0eeb6b8687e', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe.png?width=320&crop=smart&format=pjpg&auto=webp&s=4294965139db0986c6fb2c0c298d07c2b50293dd', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe.png?width=640&crop=smart&format=pjpg&auto=webp&s=fdeabfac67fb14ea378e984669c6227bdfa5250e', 'width': 640}], 'source': {'height': 1248, 'url': 'https://external-preview.redd.it/amJzMGxlc3RsbHdlMcBwOjqJGt7rT2dxiUt6nQWiapp-3YMPAeumPxvef4qe.png?format=pjpg&auto=webp&s=7f616bb38781de957e5893dad6ba8907688312ab', 'width': 702}, 'variants': {}}]}
My open-source take on claude-cli/codex with a GUI (4.1 + o3)
11
https://preview.redd.it/… considering it!
2025-04-23T14:54:25
https://www.reddit.com/r/LocalLLaMA/comments/1k616b7/my_opensource_take_on_claudeclicodex_with_a_gui/
azakhary
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k616b7
false
null
t3_1k616b7
/r/LocalLLaMA/comments/1k616b7/my_opensource_take_on_claudeclicodex_with_a_gui/
false
false
https://a.thumbs.redditm…jH3F-pi6vYm0.jpg
11
null
Ollama memory usage higher than it should be with increased context length?
0
Hey Y'all, Have any of you seen the issue before where ollama is using way more memory than expected? I've been attempting to set up qwq-32b-q4 on ollama with a 128k context length and I keep seeing vram usage of 95gb which is much higher than the estimated size I get from the calculators of \~60gb. I currently have the following env vars set for ollama: OLLAMA\_KV\_CACHE\_TYPE=q8\_0 OLLAMA\_NUM\_PARALLEL=1 OLLAMA\_FLASH\_ATTENTION=1 I know using vllm or llama.cpp would probably be better for my use case in the long run but I like the simplicity of ollama.
2025-04-23T15:35:46
https://www.reddit.com/r/LocalLLaMA/comments/1k627e8/ollama_memory_usage_higher_than_it_should_be_with/
C_Coffie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k627e8
false
null
t3_1k627e8
/r/LocalLLaMA/comments/1k627e8/ollama_memory_usage_higher_than_it_should_be_with/
false
false
self
0
null
I built a simple community-driven LLM Leaderboard sorted by VRAM requirements - vote for your favorites in each category
1
[removed]
2025-04-23T15:35:53
https://i.redd.it/t3jmb2mqrlwe1.png
Semi_Tech
i.redd.it
1970-01-01T00:00:00
0
{}
1k627i5
false
null
t3_1k627i5
/r/LocalLLaMA/comments/1k627i5/i_built_a_simple_communitydriven_llm_leaderboard/
false
false
https://b.thumbs.redditm…4SgYIuZT48ns.jpg
1
{'enabled': True, 'images': [{'id': '-bfUgy1HBNNGgF29KUj-B7IYQYs06qOBFQrHmlbUNkY', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=108&crop=smart&auto=webp&s=1970e7149e37df4bde6a21dac2f76ed64746dded', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=216&crop=smart&auto=webp&s=ff20a6de6cebf6abf4ccea01207af436f1836074', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=320&crop=smart&auto=webp&s=9caf8e5b0f95f752e741191428cda428a46ef77f', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=640&crop=smart&auto=webp&s=46e7977abc1d19a5738130c264e2f0297e48c81a', 'width': 640}, {'height': 482, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=960&crop=smart&auto=webp&s=08b773093a036dd9e73de8279c3313ec6efeace4', 'width': 960}, {'height': 542, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?width=1080&crop=smart&auto=webp&s=e3df5ef1caa857116803d628f072aa85e6b07bd4', 'width': 1080}], 'source': {'height': 753, 'url': 'https://preview.redd.it/t3jmb2mqrlwe1.png?auto=webp&s=4a47a11b233af1402371397f4834838a3608beed', 'width': 1499}, 'variants': {}}]}
New PC, now which NSFW model
0
Hello, Just built my new Desktop setup: Ryzen 9900x 64Gb ddr5 6000mhz 2TB m2 ssd Samsung 9100pro Nvidia 5070ti Which "non censored" model would you suggest? I'm a total beginner, just used once some 2B models for testing in a C# app i developed just to try new libraries
2025-04-23T15:38:29
https://www.reddit.com/r/LocalLLaMA/comments/1k629sa/new_pc_now_which_nsfw_model/
GioC88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k629sa
false
null
t3_1k629sa
/r/LocalLLaMA/comments/1k629sa/new_pc_now_which_nsfw_model/
false
false
nsfw
0
null
ChatGPT talks about Hamas designation as a terrorist organization by some countries when asked a normal coding question
0
Full chat thread https://chatgpt.com/share/67fdecc0-4d68-8000-9d60-083e10fad43b
2025-04-23T15:44:11
https://i.redd.it/rj2vytvuulwe1.jpeg
Amgadoz
i.redd.it
1970-01-01T00:00:00
0
{}
1k62eyu
false
null
t3_1k62eyu
/r/LocalLLaMA/comments/1k62eyu/chatgpt_talks_about_hamas_designation_as_a/
false
false
https://b.thumbs.redditm…4aor-iLzd2BA.jpg
0
{'enabled': True, 'images': [{'id': 'j27IBq0DjHMhOTWy8C6uHJRV75QtuQYdGO1RD0mjmGA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/rj2vytvuulwe1.jpeg?width=108&crop=smart&auto=webp&s=cf023ab9100e173fef9b8d9acfa16bf23d54abe1', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/rj2vytvuulwe1.jpeg?width=216&crop=smart&auto=webp&s=dcf4596800c25506e30d7b4861a15d11a74c8b37', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/rj2vytvuulwe1.jpeg?width=320&crop=smart&auto=webp&s=0c9a2dcdb1031359a51050d93881a694ad26c69b', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/rj2vytvuulwe1.jpeg?width=640&crop=smart&auto=webp&s=b7efbcac01af9b822d8af0742dbbb24b2e6f35e7', 'width': 640}], 'source': {'height': 1458, 'url': 'https://preview.redd.it/rj2vytvuulwe1.jpeg?auto=webp&s=2792df09ffdeadb02fd5136fef49df433fa66ca2', 'width': 720}, 'variants': {}}]}
Quantization for production
1
Hi everyone. I want to try to understand your experience with quantization. I'm not talking about quantization to run a model locally and have a bit of fun. I'm talking about production-ready quantization, the kind that doesn't significantly degrade model quality (in this case a fine-tuned model), while maximizing latency or throughput on hardware like an A100. I've read around that since the A100 is a bit old, modern techniques that rely on FP8 can't be used effectively. I've tested w8a8_int8 and w4a16 from Neural Magic, but I've always gotten lower tokens/second compared to the model in bfloat16. Same with HQQ using the GemLite kernel. The model I ran tests on is a 3B. Has anyone done a similar investigation or read anything about this? Is there any info on what the big players are using to effectively serve their users? I wanted to push my small models to the limit, but I'm starting to think that quantization only really helps with larger models, and that the true performance drivers used by the big players are speculative decoding and caching (which I'm unlikely to be able to use). For reference, here's the situation on an A100 40GB: Times for BS=1 w4a16: about 30 tokens/second hqq: about 25 tokens/second bfloat16: 55 tokens/second For higher batch sizes, the token/s difference becomes even more extreme. Any advice?
2025-04-23T15:47:55
https://www.reddit.com/r/LocalLLaMA/comments/1k62i75/quantization_for_production/
_ragnet_7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k62i75
false
null
t3_1k62i75
/r/LocalLLaMA/comments/1k62i75/quantization_for_production/
false
false
self
1
null
Hello, what are the light open source LLMs good at writing in other languages for language learning purpose that can run locally?
0
First of all, I really new to this type of stuff. Still trying to use the terminal on Ubuntu 24 and the commands for llama.cpp. What are the LLMs that can be run on a Ryzen 5600g 16gB that are well suited for other languages besides english? I am seeking the ones that have more than 7B parameters, like 14B at best. Also I am struggling to allocate them on memory, the token generation still is good for me. If I try to run "Llama2-13B (Q8\_0)" and "DeepSeek-R1-33B (Q3\_K\_M)" the system crashes. So if any one has any hint in that relation I would be glad. I am testing and running "DeepSeek-R1-7B-Q4\_K\_M.gguf" and "mistral-7b-instruct-v0.1.Q4\_K\_M.gguf" locally on my setup. The results are pretty impressive for me. But, I am trying to communicate in German and Japanese. The Mistral can write in german and in japanese, but DeepSeek struggles a lot with japanese. Is good for me for real practice sake with those languages, even if they ( LLMs ) comprehensive capabilities are unstable. But using -in-prefix "\[INST\] " --in-suffix " \[/INST\]" --repeat-penalty 1.25 makes Mistral more usable. Thanks in advance.
2025-04-23T16:03:05
https://www.reddit.com/r/LocalLLaMA/comments/1k62w09/hello_what_are_the_light_open_source_llms_good_at/
Rique_Belt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k62w09
false
null
t3_1k62w09
/r/LocalLLaMA/comments/1k62w09/hello_what_are_the_light_open_source_llms_good_at/
false
false
self
0
null
Calorie Tracking with Llama3.2 Vision and Ollama
1
[removed]
2025-04-23T16:04:34
[deleted]
1970-01-01T00:00:00
0
{}
1k62xef
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m77u7npdylwe1/DASHPlaylist.mpd?a=1748021284%2CNTMzZTM5NmFiNjMyMjg2ZTI5MGYwMjRmYWI1M2I3ZDBmMDJmZWExNzRmNjE0MzhiN2Y0Yjk4MjY2MWMzYjgxMQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/m77u7npdylwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/m77u7npdylwe1/HLSPlaylist.m3u8?a=1748021284%2CN2E4Njg2ZWNkZDY2NDZhN2Y2NmM0Y2IzYzZjMDhkYjI0OGQwYTBjYjY2NTEwYjY2ZDg5NWM3MmEyNmM0NmY1MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m77u7npdylwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
t3_1k62xef
/r/LocalLLaMA/comments/1k62xef/calorie_tracking_with_llama32_vision_and_ollama/
false
false
default
1
null
Calorie Tracking with Llama3.2 Vision and Ollama
1
[removed]
2025-04-23T16:06:34
https://v.redd.it/cd8y7taqylwe1
Maleficent-Penalty50
v.redd.it
1970-01-01T00:00:00
0
{}
1k62z9w
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cd8y7taqylwe1/DASHPlaylist.mpd?a=1748021359%2CMDljZDc1MjYyNDRhODI1MWU5OTlhOTM5ZGE1OTg0ODViNmNiNTkzNjk3ZDczOGM4MWU0N2VmOGQzYmU1NmQxNQ%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/cd8y7taqylwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cd8y7taqylwe1/HLSPlaylist.m3u8?a=1748021359%2CZDFlMGY2MGJmMDczMDM3NzE4MjkwMzA3MWE0OTY2YzFhZWNiOThhNjA4NzU2NmJkOWFmNDZlMjIwYTliOWZmZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cd8y7taqylwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
t3_1k62z9w
/r/LocalLLaMA/comments/1k62z9w/calorie_tracking_with_llama32_vision_and_ollama/
false
false
https://external-preview…ec11a01e1a10eabb
1
{'enabled': False, 'images': [{'id': 'ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=108&crop=smart&format=pjpg&auto=webp&s=92fd910358af50613e29e5c99e069ba86035a665', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=216&crop=smart&format=pjpg&auto=webp&s=27aaded6edc29ac3b518158296046d97834a81b4', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=320&crop=smart&format=pjpg&auto=webp&s=b665ad3b0fb5064003061dc49ff1d2e5ea3ed27f', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=640&crop=smart&format=pjpg&auto=webp&s=1fb32b257180d6274983ce51179d5d1f5985a675', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=960&crop=smart&format=pjpg&auto=webp&s=7f47a862207201f8a1437535bee61bb8d35ef56a', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a213223c3931d7f9ea753d63c60a7583c452287d', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/ODRlcGJzYXF5bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?format=pjpg&auto=webp&s=ca88d5df124e976d5c8d397204aa4e80ddc86091', 'width': 1692}, 'variants': {}}]}
Local LLM for help with tasks related to writing fiction?
4
Just to be clear up front I'm not looking for a model that will write prose for me (though if it can also do some of that it'd be nice, I sometimes need advice on how best to word things or format dialog or whatever), what I want is help with things like figuring out how to structure a story, world-building, coming up with thematically-appropriate names, etc. I've got Docker Desktop running with LocalAI's all-in-one package but so far I've not been very impressed with the text generation model in their AIO (hermes-2-pro-mistral) so I'm looking for alternatives. There seem to be a lot of models available for doing the actual writing, but that's not what I'm looking for. I've been using ChatGPT for this and keep running into problems where it doesn't understand my query or just gives answers that aren't what I'm looking for. For example I tried 4 different times to get it to generate an outline for my story based on all of the world-building and such we had done before, and even telling it that I was aiming at \~100k words with \~3k word chapters it kept giving me an outline with 13-18 chapters (39k-54k words.) I'm hoping a model that is built/can be tuned for this specific kind of task instead of general text-generation would be better, and running it locally will keep me from having to recreate my work later when enshittification creeps in and companies like OpenAI start charging for every little thing.
2025-04-23T16:07:03
https://www.reddit.com/r/LocalLLaMA/comments/1k62zpp/local_llm_for_help_with_tasks_related_to_writing/
libra00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k62zpp
false
null
t3_1k62zpp
/r/LocalLLaMA/comments/1k62zpp/local_llm_for_help_with_tasks_related_to_writing/
false
false
self
4
null
Calorie Tracking with Llama3.2 Vision and Ollama
1
[removed]
2025-04-23T16:09:02
https://v.redd.it/urfd1m47zlwe1
CalendarUseful7286
v.redd.it
1970-01-01T00:00:00
0
{}
1k631hn
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/urfd1m47zlwe1/DASHPlaylist.mpd?a=1748021424%2CNWVhNjE1Yjk5MWY4OGJkMmRkODlmZGQxMTVmYTdjOTYxOWJiNDdiNmNmMDdiNWU5MWVmYjI2YzRlN2FlODBmOA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/urfd1m47zlwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/urfd1m47zlwe1/HLSPlaylist.m3u8?a=1748021424%2CNzNmYzFjZjliMjI0ODc2Mjg3MWNhMGJhNWIyNWZlMDI1NzRlYjc0MTM0MzljNjU2Yjg4MDMzNzZiOWVhY2Y2Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/urfd1m47zlwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
t3_1k631hn
/r/LocalLLaMA/comments/1k631hn/calorie_tracking_with_llama32_vision_and_ollama/
false
false
https://external-preview…8fba190e2584a671
1
{'enabled': False, 'images': [{'id': 'bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0e157a2600397615208b5c82341e93ca1bf29d8', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=216&crop=smart&format=pjpg&auto=webp&s=9766f1ce126559a6cbbc687ede92006450e26720', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=320&crop=smart&format=pjpg&auto=webp&s=18ca7f05f76e0bd50af5f6b41ab3c1d9cd25859d', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=640&crop=smart&format=pjpg&auto=webp&s=0097569203cc58bd479a920138feb9fb870fc8a8', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=960&crop=smart&format=pjpg&auto=webp&s=ad26dc3706d34aa21778d83e010ab787b7a1af0d', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=248730a7f822ffbfab4cbe1fb01440a0cbfdaf61', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/bHdlcnptNDd6bHdlMakUYfng0uCSOceX9eyLGJ71W94XbN9_YsX_n0FmvqW7.png?format=pjpg&auto=webp&s=ca408bbd9e14b400c6f9e7990b08aa9c2d4bd8d9', 'width': 1692}, 'variants': {}}]}
Calorie Tracking with Llama3.2 Vision and Ollama
1
[removed]
2025-04-23T16:11:27
https://v.redd.it/72grlq1ozlwe1
Maleficent-Penalty50
v.redd.it
1970-01-01T00:00:00
0
{}
1k633p9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/72grlq1ozlwe1/DASHPlaylist.mpd?a=1748021488%2CNjkxN2ZmYzQwMzVkZDQ2OGFjMTZhZTk2M2U4NTIwMzQyOGMxOGNlODRiZmIyMDRkODI1NGVmNWU2M2Q4MWRmOA%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/72grlq1ozlwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/72grlq1ozlwe1/HLSPlaylist.m3u8?a=1748021488%2CODM1YjI3MDJiYTBjMTQyZTFlMWQ2OTZmZjRlOTY3NjMzOGJlMmFlNzBjZTI0NzhhMTA3YjM4YmZlYTVlOWU3Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/72grlq1ozlwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1564}}
t3_1k633p9
/r/LocalLLaMA/comments/1k633p9/calorie_tracking_with_llama32_vision_and_ollama/
false
false
https://external-preview…4a7f49a6f946c228
1
{'enabled': False, 'images': [{'id': 'd3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a2cd4615c25518acdbeaef568cb876a9536b707', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=216&crop=smart&format=pjpg&auto=webp&s=bd2085b033b719b25a2380af5d56a141d6d33535', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=320&crop=smart&format=pjpg&auto=webp&s=7ec1c8dcebbecac582e6fc1bddb05d0c216f7361', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=640&crop=smart&format=pjpg&auto=webp&s=351e3fe3a241f50d823f5c86a42ee67d759fda1d', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=960&crop=smart&format=pjpg&auto=webp&s=ffc124bea0296fe63909123c94144b95e0b7cf02', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=994ac2c6f841d646bf6c8e9d2885dd6272286dea', 'width': 1080}], 'source': {'height': 1168, 'url': 'https://external-preview.redd.it/d3hmN3RzMW96bHdlMUGJW-CRRS84u_bP2n7qZzCKY_RCT025Pec2ZnHjGkU8.png?format=pjpg&auto=webp&s=272ae1baea34b78252e89509bd0ccb4cf835caf8', 'width': 1692}, 'variants': {}}]}
Who would find a product like this useful?
1
[removed]
2025-04-23T16:12:17
https://v.redd.it/xg0stgdtzlwe1
hadiazzouni
v.redd.it
1970-01-01T00:00:00
0
{}
1k634i1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xg0stgdtzlwe1/DASHPlaylist.mpd?a=1748021509%2COTk2MjBhOTA3N2ExNGZmNWYwZjUxM2VhODYwYzQ3YWM3ZjliOTBkNzFkODFmY2I0YjVhNzE0NWRiMjcwMzY2ZQ%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/xg0stgdtzlwe1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xg0stgdtzlwe1/HLSPlaylist.m3u8?a=1748021509%2CNGJkZmMzNWNmZjhhOTI4ZDViMWMxZjhiNzMwMmU3ZmNiMmQ4NWE2ZjlkYzIzZDRmZWJmNmY0MGExYzZkNWQ4NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xg0stgdtzlwe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k634i1
/r/LocalLLaMA/comments/1k634i1/who_would_find_a_product_like_this_useful/
false
false
https://external-preview…65af1b748e4d94b5
1
{'enabled': False, 'images': [{'id': 'dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=108&crop=smart&format=pjpg&auto=webp&s=1c089cbf43813040c20df6d372b0c76b207b0948', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=216&crop=smart&format=pjpg&auto=webp&s=984362a27da411369ca5693ba159cd8874cf56fe', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=320&crop=smart&format=pjpg&auto=webp&s=59e235d0152240e74abada4422f48a8a36d06ce8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=640&crop=smart&format=pjpg&auto=webp&s=246ad92a0d472df3c3b145c57fb258f673902319', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=960&crop=smart&format=pjpg&auto=webp&s=1ea2f127b0aeb64e32f2762248accccb82fad886', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7b7e69ada759d26252be3c2adae6129045351224', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dHo2cnpnZHR6bHdlMZlBrAATiwR7fmzAB0l7nN93IjW9E-RXGqyFiY5HDFzy.png?format=pjpg&auto=webp&s=ae2daca8cdfa613691c1e7831fc48c1a9d6a14e9', 'width': 1920}, 'variants': {}}]}