Dataset columns (types and observed ranges):

title: string (length 1 to 300)
score: int64 (0 to 8.54k)
selftext: string (length 0 to 40k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29)
url: string (length 0 to 878)
author: string (length 3 to 20)
domain: string (length 0 to 82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
gilded: int64 (0 to 2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646 to 1.8k)
name: string (length 10)
permalink: string (length 33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4 to 213)
ups: int64 (0 to 8.54k)
preview: string (length 301 to 5.01k)
Llama 4 smells bad
0
Here is the story of Llama 4 so far, including the LM Arena drama.
2025-04-17T21:43:03
https://fastml.com/llama-4-smells-bad/
Foxtr0t
fastml.com
1970-01-01T00:00:00
0
{}
1k1okri
false
null
t3_1k1okri
/r/LocalLLaMA/comments/1k1okri/llama_4_smells_bad/
false
false
default
0
null
Multi node/ cluster here at home
2
I want to build a multi-node cluster to experiment with scaling across multiple GPUs, with the nodes networked together over ordinary networking rather than the physically co-located high-speed interconnects that exist. Curious whether anyone has this kind of hardware setup at home, and whether you have tips or tutorials you've used for the hardware and software stack.
2025-04-17T21:46:18
https://www.reddit.com/r/LocalLLaMA/comments/1k1onc2/multi_node_cluster_here_at_home/
onemoreburrito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1onc2
false
null
t3_1k1onc2
/r/LocalLLaMA/comments/1k1onc2/multi_node_cluster_here_at_home/
false
false
self
2
null
Evaluating a LLM API that conforms to OpenAI specification
1
I’m looking for a library that can hit an API with the same format as the OpenAI API but that is locally hosted, or hosted on a server on my own network. I’m looking at either LightEval or lm-evaluation-harness, but it doesn’t seem like either supports this, especially if the server doesn’t have an equivalent model pushed to HuggingFace. Open to any suggestions.
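A minimal sketch of the kind of raw check this needs, assuming the server really does speak the OpenAI chat-completions format; the base URL, model name, and API key below are placeholders, not anything from the post:

```python
# Hypothetical example: point the standard OpenAI client at a local,
# OpenAI-compatible server (llama.cpp server, vLLM, LM Studio, etc.).
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.50:8000/v1",  # placeholder local endpoint
    api_key="not-needed-locally",            # most local servers ignore this
)

resp = client.chat.completions.create(
    model="my-local-model",  # whatever name the server registers
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    max_tokens=32,
)
print(resp.choices[0].message.content)
```

If the raw call above works, the remaining question is only how to point the harness at that base URL instead of a HuggingFace model id; if I recall correctly, lm-evaluation-harness has `local-completions` / `local-chat-completions` model types that accept a `base_url` model arg, which may cover this case.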
2025-04-17T21:50:05
https://www.reddit.com/r/LocalLLaMA/comments/1k1oqal/evaluating_a_llm_api_that_conforms_to_openai/
larenspear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1oqal
false
null
t3_1k1oqal
/r/LocalLLaMA/comments/1k1oqal/evaluating_a_llm_api_that_conforms_to_openai/
false
false
self
1
null
Voice AI Assistant
0
Trying to set up a voice assistant I can fine-tune eventually, but I don’t know where I keep going wrong. I’m vibe coding (to be fair), using a Jabra 710 as the I/O device. I’ve explored Whisper and Coqui, and even when I got the wake word working and the assistant responding (albeit hallucinating a lot), switching the assistant’s voice is where I got stuck. It’s not working seamlessly, so fine-tuning is not even a stage I’m at yet. I am using phi-2. Anyone have a repo I can leverage or any tips on a flow that works? I’ll appreciate it.
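For the specific point where this gets stuck (changing the assistant's voice), here is a minimal sketch of one possible pipeline, assuming openai-whisper for transcription and pyttsx3 for speech output; the audio file path, voice index, and the respond() stub are placeholders:

```python
# Rough sketch of a transcribe -> respond -> speak turn.
# Assumes the `openai-whisper` and `pyttsx3` packages; file names are placeholders.
import whisper
import pyttsx3

stt = whisper.load_model("base")          # small STT model, CPU-friendly
tts = pyttsx3.init()

# Switching the assistant's voice: pick a different installed system voice.
voices = tts.getProperty("voices")
tts.setProperty("voice", voices[0].id)    # index is a placeholder; list voices first
tts.setProperty("rate", 175)

def respond(text: str) -> str:
    # Placeholder: call the local LLM (phi-2 via whatever server is running) here.
    return f"You said: {text}"

# One turn: transcribe a recorded utterance, generate a reply, speak it.
heard = stt.transcribe("utterance.wav")["text"]
tts.say(respond(heard))
tts.runAndWait()
```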
2025-04-17T21:52:46
https://www.reddit.com/r/LocalLLaMA/comments/1k1osf2/voice_ai_assistant/
SolidRemote8316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1osf2
false
null
t3_1k1osf2
/r/LocalLLaMA/comments/1k1osf2/voice_ai_assistant/
false
false
self
0
null
What's the smallest model you've used that has decent success with basic Agents and Tool-Calling ?
4
Just a few very simple SmolAgents functions right now. I've noticed that:

- Qwen 14B instruct models work well until you quantize them under Q4.
- Phi4 14B can adhere to instructions very well and calls the tools well, but the code logic and args it passes are sometimes wonky.
- Qwen-Coder 14B is very good at calling tools, but there is a creative/reasoning portion to this task that it's poor at.

Anything smaller that's worked for you?
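A minimal sketch of the kind of test harness this implies, assuming the `smolagents` package with its `@tool` decorator, `CodeAgent`, and a LiteLLM-backed local model; the Ollama model tag and API base are placeholders, not anything from the post:

```python
# Sketch: one trivial tool plus a CodeAgent driven by a small local model.
# Model id / api_base are placeholders for whatever local server is running.
from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city.

    Args:
        city: Name of the city to report on.
    """
    return f"It is sunny and 21C in {city}."

model = LiteLLMModel(
    model_id="ollama_chat/qwen2.5:14b-instruct-q4_K_M",  # placeholder
    api_base="http://localhost:11434",                    # placeholder
)

agent = CodeAgent(tools=[get_weather], model=model)
print(agent.run("What's the weather in Lisbon, and is it above 15C?"))
```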
2025-04-17T21:54:20
https://www.reddit.com/r/LocalLLaMA/comments/1k1otp8/whats_the_smallest_model_youve_used_that_has/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1otp8
false
null
t3_1k1otp8
/r/LocalLLaMA/comments/1k1otp8/whats_the_smallest_model_youve_used_that_has/
false
false
self
4
null
Judging Embeddings
1
[removed]
2025-04-17T22:14:41
https://www.reddit.com/r/LocalLLaMA/comments/1k1pa2y/judging_embeddings/
remyxai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1pa2y
false
null
t3_1k1pa2y
/r/LocalLLaMA/comments/1k1pa2y/judging_embeddings/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VQKIVyFaXVqS_-qOiQM9Mb5NKxNJ2ZmI_eeHhx4p3JA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=108&crop=smart&auto=webp&s=55d30c9b3fb071b39044614c03cc1ecba14a48c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=216&crop=smart&auto=webp&s=fe482a54df69d9741c50bbcb894321457de762e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=320&crop=smart&auto=webp&s=837752a5c85c4963f7da3c75bf760afd14ac97b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=640&crop=smart&auto=webp&s=d7e456d129a7e0315ba3b18a9709904578fc92c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=960&crop=smart&auto=webp&s=310f85edae9aa2bbc19c1374c06bb4751461876e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=1080&crop=smart&auto=webp&s=7cd9d1a856d474743894c143d5e6d2152af760fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?auto=webp&s=9e4243400d6ee75b7bef46e9ca91421f9b9c7393', 'width': 1200}, 'variants': {}}]}
$1/Week — Compare ChatGPT, DeepSeek, Gemini & 60+ Models
1
[removed]
2025-04-17T22:16:18
[deleted]
1970-01-01T00:00:00
0
{}
1k1pbbz
false
null
t3_1k1pbbz
/r/LocalLLaMA/comments/1k1pbbz/1week_compare_chatgpt_deepseek_gemini_60_models/
false
false
default
1
null
$1/Week — Compare ChatGPT, DeepSeek, Gemini & 60+ Models
1
[removed]
2025-04-17T22:17:19
https://www.reddit.com/r/LocalLLaMA/comments/1k1pc51/1week_compare_chatgpt_deepseek_gemini_60_models/
Buffalo_Emotional
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1pc51
false
null
t3_1k1pc51
/r/LocalLLaMA/comments/1k1pc51/1week_compare_chatgpt_deepseek_gemini_60_models/
false
false
self
1
null
lmarena is a company now. which benchmark to trust?
1
[removed]
2025-04-17T22:21:53
https://www.reddit.com/r/LocalLLaMA/comments/1k1pfsa/lmarena_is_a_company_now_which_benchmark_to_trust/
Unlikely-Okra3892
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1pfsa
false
null
t3_1k1pfsa
/r/LocalLLaMA/comments/1k1pfsa/lmarena_is_a_company_now_which_benchmark_to_trust/
false
false
self
1
null
Judging Embeddings
1
[removed]
2025-04-17T22:23:39
https://www.reddit.com/r/LocalLLaMA/comments/1k1ph6z/judging_embeddings/
remyxai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1ph6z
false
null
t3_1k1ph6z
/r/LocalLLaMA/comments/1k1ph6z/judging_embeddings/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VQKIVyFaXVqS_-qOiQM9Mb5NKxNJ2ZmI_eeHhx4p3JA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=108&crop=smart&auto=webp&s=55d30c9b3fb071b39044614c03cc1ecba14a48c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=216&crop=smart&auto=webp&s=fe482a54df69d9741c50bbcb894321457de762e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=320&crop=smart&auto=webp&s=837752a5c85c4963f7da3c75bf760afd14ac97b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=640&crop=smart&auto=webp&s=d7e456d129a7e0315ba3b18a9709904578fc92c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=960&crop=smart&auto=webp&s=310f85edae9aa2bbc19c1374c06bb4751461876e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?width=1080&crop=smart&auto=webp&s=7cd9d1a856d474743894c143d5e6d2152af760fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NsfImz-Hd-s9PVdkXwfQNA2FkfxC0lz1qdqM5PvFS70.jpg?auto=webp&s=9e4243400d6ee75b7bef46e9ca91421f9b9c7393', 'width': 1200}, 'variants': {}}]}
Identity of riverhollow/riveroaks from lmarena
1
Looks like **riverhollow** is gone and I get similar output from **gemini-2.5-flash-preview-04-17**. Do you have any observations?
2025-04-17T22:24:30
https://www.reddit.com/r/LocalLLaMA/comments/1k1phty/identify_of_riverhollowriveroaks_from_lmarena/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1phty
false
null
t3_1k1phty
/r/LocalLLaMA/comments/1k1phty/identify_of_riverhollowriveroaks_from_lmarena/
false
false
self
1
null
Riverhollow from LMarena – Identity revealed?
0
Looks like **Riverhollow** from lmarena is gone and now I get similar output from **gemini-2.5-flash-preview-04-17**. What are your experiences? I had been searching for its true name for days.
2025-04-17T22:27:04
https://www.reddit.com/r/LocalLLaMA/comments/1k1pjvz/riverhollow_from_lmarena_identity_revealed/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1pjvz
false
null
t3_1k1pjvz
/r/LocalLLaMA/comments/1k1pjvz/riverhollow_from_lmarena_identity_revealed/
false
false
self
0
null
Gemini 2.5 Flash - First impressions
15
Google is rapidly evolving its Gemini models, and I recently got my hands on the preview versions designated as **Gemini 2.5 Flash** and **Gemini 2.5 Pro**. Flash is positioned as the faster, more cost-effective option, while Pro targets peak performance, especially for complex reasoning. I put them head-to-head, particularly focusing on demanding tasks, and the results challenged the on-paper value proposition. **The Pricing Picture (As Experienced):** The per-token costs I encountered were: * **Gemini 2.5 Flash (Preview):** * Input: $0.15 / million tokens * Output (Standard/"Non-Thinking"): $0.60 / million tokens * Output ("Thinking Mode" - Implied High Usage Rate): $3.50 / million tokens * **Gemini 2.5 Pro (Preview):** * Input: $1.25 / million tokens * Output: $10.00 / million tokens **Performance & Thinking Quality: Flash's Achilles' Heel** This is where the cost-effectiveness argument started to unravel for me. My focus was on the models' reasoning and problem-solving abilities. * **Gemini 2.5 Flash's Thinking:** The quality of reasoning felt **very poor**. For complex problems requiring logical steps, its approach seemed inefficient and indirect. It struggled compared to the Pro version. * **Token Inefficiency:** The most critical issue was Flash's token consumption. It consistently required **5-6 times more tokens** than **Gemini 2.5 Pro** to tackle the same task. The thinking process felt like it was deliberately burning tokens rather than finding the most direct solution path. * **Subjective Benchmark:** I'd rate its reasoning quality slightly below a strong open-source model like Qwen-QWQ-32b. **The Real-World Test: STEM Exam Problems** To test this under pressure, I used tough STEM exam papers on both models. * **Gemini 2.5 Pro (Preview):** Handled the problems with relative token efficiency for its reasoning process. * **Gemini 2.5 Flash (Preview):** Despite its much lower per-token costs (even the $3.50 "thinking" rate vs Pro's $10.00), Flash **used vastly more tokens** for the same problems. **The Bottom Line: Effective Cost vs. Sticker Price** My conclusion based on these tests was clear: **For complex reasoning tasks, the preview version of Gemini 2.5 Flash effectively cost more per solved problem than the preview version of Gemini 2.5 Pro, despite Flash's lower per-token price.** The extreme token inefficiency completely negated the cheaper rate. Paying $3.50 per million for Flash's "thinking" output tokens felt especially wasteful given the low quality and high volume required.
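To make the "effective cost" point concrete, a small worked example under assumed token counts (using the 5-6x multiplier from the tests above; the absolute token numbers are illustrative, not measured):

```python
# Illustrative only: compare effective cost per solved problem when the
# cheaper model burns ~5.5x more "thinking" tokens than the expensive one.
PRO_OUT_PRICE   = 10.00 / 1_000_000   # $ per output token (2.5 Pro preview)
FLASH_OUT_PRICE =  3.50 / 1_000_000   # $ per "thinking" output token (2.5 Flash preview)

pro_tokens   = 10_000                  # assumed reasoning tokens for one hard problem
flash_tokens = int(pro_tokens * 5.5)   # the observed 5-6x inefficiency

print(f"Pro:   ${pro_tokens * PRO_OUT_PRICE:.4f} per problem")     # ~$0.1000
print(f"Flash: ${flash_tokens * FLASH_OUT_PRICE:.4f} per problem") # ~$0.1925
```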
2025-04-17T22:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1k1pw3z/gemini_25_flash_first_impressions/
Embarrassed-Way-1350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1pw3z
false
null
t3_1k1pw3z
/r/LocalLLaMA/comments/1k1pw3z/gemini_25_flash_first_impressions/
false
false
self
15
null
Generalized script for wakeword detection to run any script.
6
Wakeword: Generalized script that listens for a wakeword and runs a command you give it (so write a wrapper for your project that needs to be triggered with a wakeword):

```python
#!/usr/bin/env python3
# by jaggz.h {who is at} gmail.com (and jaggzh on github)
# cc0
import asyncio
import time
import wave
import pvporcupine
import pyaudio
import struct
import io
import argparse
import subprocess

# models_basedir="~/wakegen/venv/lib/python3.11/site-packages/pvporcupine/resources/keyword_files/linux"
# alexa_linux.ppn        grasshopper_linux.ppn    picovoice_linux.ppn
# americano_linux.ppn   'hey google_linux.ppn'    porcupine_linux.ppn
# blueberry_linux.ppn   'hey siri_linux.ppn'     'smart mirror_linux.ppn'
# bumblebee_linux.ppn    jarvis_linux.ppn         snowboy_linux.ppn
# computer_linux.ppn    'ok google_linux.ppn'     terminator_linux.ppn
# grapefruit_linux.ppn  'pico clock_linux.ppn'   'view glass_linux.ppn'

# Configuration
DEF_KEYWORD_PATH = "~/wakegen/venv/lib/python3.11/site-packages/pvporcupine/resources/keyword_files/linux/blueberry_linux.ppn"
DEF_SENSITIVITY = 0.5  # Adjust sensitivity as needed
DEF_SR = 16000  # Sample rate of the audio
DEF_SAMPLE_WIDTH = 2  # Sample width of the audio
DEF_CHANNELS = 1  # Number of audio channels
DEF_RECORD_DURATION = .3  # Seconds to record
DEF_FRAME_LENGTH = 512  # Porcupine's frame length

# Initialize PyAudio
audio = pyaudio.PyAudio()

# Create Porcupine instance
porcupine = pvporcupine.create(
    keyword_paths=[DEF_KEYWORD_PATH],
    sensitivities=[DEF_SENSITIVITY]
)

# Define function to record audio
async def record_audio(stream: pyaudio.Stream, frames_per_buffer: int):
    """Records audio for the specified duration."""
    frames = []
    start_time = time.time()
    while time.time() - start_time < RECORD_DURATION:
        data = stream.read(frames_per_buffer)
        frames.append(data)
    return b"".join(frames)

# Define function to process audio with Porcupine
async def process_audio(audio_data: bytes, cmd: str, non_blocking: bool):
    """Processes recorded audio with Porcupine and reports results."""
    print("Processing audio... ", end='\r')
    # Add WAV header
    audio_data_with_header = add_wav_header(
        audio_data, SAMPLE_RATE, SAMPLE_WIDTH, CHANNELS
    )
    # Now write the audio data with header
    with wave.open(io.BytesIO(audio_data_with_header), "rb") as wf:
        # Read audio in frames
        for i in range(0, len(audio_data), FRAME_LENGTH * SAMPLE_WIDTH * CHANNELS):
            frame_data = audio_data[i : i + FRAME_LENGTH * SAMPLE_WIDTH * CHANNELS]
            # Unpack audio data into a list of samples
            audio_samples = struct.unpack_from(
                "h" * FRAME_LENGTH, frame_data
            )
            # Run Porcupine on the frame
            keyword_index = porcupine.process(audio_samples)
            if keyword_index >= 0:
                print(f"Wake word detected! (Index: {keyword_index})")
                if cmd:
                    print(f"Executing command: {cmd}")
                    try:
                        if non_blocking:
                            # Run command in the background
                            subprocess.Popen(cmd.split())
                        else:
                            # Run command and wait for it to finish
                            subprocess.run(cmd.split(), check=True)
                    except subprocess.CalledProcessError as e:
                        # Handle error if command execution fails
                        print(f"Command failed with error: {e}. Will try again next time.")
                    except Exception as e:
                        # Handle any other errors that might occur
                        print(f"An unexpected error occurred: {e}. Will try again next time.")
                return  # Exit after detection
    print("Wake word not detected.                    ", end='\r')

async def main(keyword_path: str, sensitivity: float, sample_rate: int,
               sample_width: int, channels: int, record_duration: float,
               cmd: str, non_blocking: bool):
    """Main program loop."""
    print("Listening for wake word...", end='\r')
    global SAMPLE_RATE, SAMPLE_WIDTH, CHANNELS, RECORD_DURATION, FRAME_LENGTH
    SAMPLE_RATE = sample_rate
    SAMPLE_WIDTH = sample_width
    CHANNELS = channels
    RECORD_DURATION = record_duration
    FRAME_LENGTH = porcupine.frame_length
    # Create PyAudio stream
    stream = audio.open(
        format=pyaudio.paInt16,
        channels=CHANNELS,
        rate=SAMPLE_RATE,
        input=True,
        frames_per_buffer=FRAME_LENGTH,
    )
    while True:
        # Record audio
        audio_data = await record_audio(stream, FRAME_LENGTH)
        # Process audio with Porcupine
        await process_audio(audio_data, cmd, non_blocking)
    # Close stream
    stream.stop_stream()
    stream.close()

def add_wav_header(audio_data: bytes, sample_rate: int, sample_width: int, channels: int):
    """Adds a WAV header to raw audio data."""
    num_channels = channels
    frame_rate = sample_rate
    sample_width = sample_width
    num_frames = len(audio_data) // (sample_width * num_channels)
    # Compute audio data size
    data_size = num_frames * num_channels * sample_width
    # Create WAV header
    header = b"RIFF"
    header += struct.pack("<L", 36 + data_size)  # Total file size
    header += b"WAVE"
    header += b"fmt "
    header += struct.pack("<L", 16)  # Length of fmt chunk
    header += struct.pack("<H", 1)  # Format code (1 for PCM)
    header += struct.pack("<H", num_channels)
    header += struct.pack("<L", frame_rate)
    header += struct.pack("<L", frame_rate * num_channels * sample_width)  # Byte rate
    header += struct.pack("<H", num_channels * sample_width)  # Block align
    header += struct.pack("<H", sample_width * 8)  # Bits per sample
    header += b"data"
    header += struct.pack("<L", data_size)  # Size of data chunk
    return header + audio_data

if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog="rhasspy-wake-porcupine-hermes")
    parser.add_argument(
        "-k", "--keyword",
        default=DEF_KEYWORD_PATH,
        help="Path to Porcupine keyword file (.ppn)",
    )
    parser.add_argument(
        "-s", "--sensitivity",
        type=float,
        default=DEF_SENSITIVITY,
        help="Sensitivity of keyword (default: 0.5)",
    )
    parser.add_argument(
        "-r", "--sample-rate",
        type=int,
        default=DEF_SR,
        help=f"Sample rate of the audio (default: {DEF_SR})",
    )
    parser.add_argument(
        "-w", "--sample-width",
        type=int,
        default=DEF_SAMPLE_WIDTH,
        help="Sample width of the audio (default: 2)",
    )
    parser.add_argument(
        "-C", "--channels",
        type=int,
        default=DEF_CHANNELS,
        help="Number of audio channels (default: 1)",
    )
    parser.add_argument(
        "-d", "--record-duration",
        type=float,
        default=DEF_RECORD_DURATION,
        help=f"Seconds to record audio (default: {DEF_RECORD_DURATION})",
    )
    parser.add_argument(
        "-c", "--cmd",
        help="Command to execute when wake word is detected",
    )
    parser.add_argument(
        "-B", "--non-blocking",
        action="store_true",
        help="Run command in the background",
    )
    args = parser.parse_args()
    # Recreate Porcupine with the provided keyword path and sensitivity
    porcupine = pvporcupine.create(
        keyword_paths=[args.keyword],
        sensitivities=[args.sensitivity]
    )
    asyncio.run(main(args.keyword, args.sensitivity, args.sample_rate,
                     args.sample_width, args.channels, args.record_duration,
                     args.cmd, args.non_blocking))
    # Terminate PyAudio
    audio.terminate()
```
2025-04-17T23:12:13
https://www.reddit.com/r/LocalLLaMA/comments/1k1qiat/generalized_script_for_wakeword_detection_to_run/
jaggzh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1qiat
false
null
t3_1k1qiat
/r/LocalLLaMA/comments/1k1qiat/generalized_script_for_wakeword_detection_to_run/
false
false
self
6
null
microsoft/MAI-DS-R1, DeepSeek R1 Post-Trained by Microsoft
332
2025-04-17T23:22:11
https://huggingface.co/microsoft/MAI-DS-R1
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1k1qpr6
false
null
t3_1k1qpr6
/r/LocalLLaMA/comments/1k1qpr6/microsoftmaidsr1_deepseek_r1_posttrained_by/
false
false
https://b.thumbs.redditm…RHQkMVBSrmsU.jpg
332
{'enabled': False, 'images': [{'id': 'gNeLuucO5JeU3USiy3XqA0PJjXOcO7-aG3kRw83Z07c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=108&crop=smart&auto=webp&s=84981e30fc6e2f7d00268369cf58c3c1bca65504', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=216&crop=smart&auto=webp&s=d2e19c29caf170d09b730606ff5e17c34b552fa2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=320&crop=smart&auto=webp&s=b9e149e7b6ccfa1b464248f0ecd48ad67ff34f4e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=640&crop=smart&auto=webp&s=140a805e8a294974bbef97d4e1035ef969130c5a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=960&crop=smart&auto=webp&s=cf8e496c6675ae3b2754218e6e3bae38ccfa29bf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?width=1080&crop=smart&auto=webp&s=f0e6df14eb3642ee00a80dcdc9d73cf1be0a481f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oacNTVfe15Ozahhiv8YMZ-Teu__pPBVygtAgzE9FP3c.jpg?auto=webp&s=cd3dac9f0a6c750e511ae05309175021aa37f32f', 'width': 1200}, 'variants': {}}]}
LLM distribution over Different OS
1
[removed]
2025-04-17T23:27:15
https://www.reddit.com/r/LocalLLaMA/comments/1k1qthz/llm_distribution_over_different_os/
No_Draft_8756
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1qthz
false
null
t3_1k1qthz
/r/LocalLLaMA/comments/1k1qthz/llm_distribution_over_different_os/
false
false
self
1
null
I made this extension that applies the AI's changes semi-automatically without using an API.
14
Basically, the AI responds in a certain format, and when you paste it into the extension, it automatically executes the commands — creates files, etc. I made it in a short amount of time and wanted to know what you think. The idea was to have something that doesn't rely on APIs, which usually have a lot of limitations. It can be used with any AI — you just need to set the system instructions. If I were to continue developing it, I'd add more efficient editing (without needing to show the entire code), using search and replace, and so on. [https://marketplace.visualstudio.com/items/?itemName=FelpolinColorado.buildy](https://marketplace.visualstudio.com/items/?itemName=FelpolinColorado.buildy) **LIMITATIONS AND WARNING:** this extension is not secure at all. Even though it has a checkpoint system, it doesn’t ask for any permissions, so be very careful if you choose to use it.
2025-04-17T23:57:18
https://v.redd.it/jxyur24afhve1
Delicious-Trash6988
/r/LocalLLaMA/comments/1k1rfe7/i_made_this_extension_that_applies_the_ais/
1970-01-01T00:00:00
0
{}
1k1rfe7
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jxyur24afhve1/DASHPlaylist.mpd?a=1747655845%2CMGViYTEyNTY0ZjNmNjM3NjBlZTAyODZjODE4MDJjZDMzODExODM1ZmRjOTMxNDkwNDM5OTdjYTQ5MTQ2ZmYzYw%3D%3D&v=1&f=sd', 'duration': 86, 'fallback_url': 'https://v.redd.it/jxyur24afhve1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jxyur24afhve1/HLSPlaylist.m3u8?a=1747655845%2CZGM3MTgzYjhiYzNhM2Y2OWU1OWUzNTI3ODU1MTY2ZmMwNTUxMzE2NjU4YjY1YjhlMTYyZTMwODM5NGNjMzA3OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jxyur24afhve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k1rfe7
/r/LocalLLaMA/comments/1k1rfe7/i_made_this_extension_that_applies_the_ais/
false
false
https://external-preview…7639fee7ea576f2a
14
{'enabled': False, 'images': [{'id': 'bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=108&crop=smart&format=pjpg&auto=webp&s=9d720fba9105a9a7e8a448689d04d630a452aead', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=216&crop=smart&format=pjpg&auto=webp&s=7e4441cd18583db531cea105beb9f46744dbfcf3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=320&crop=smart&format=pjpg&auto=webp&s=6d92c1dd6a4e664004696dfa7a6abf5bf9346501', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=640&crop=smart&format=pjpg&auto=webp&s=76fc0d0131ae84c36e9e59db7118622e748d52c8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=960&crop=smart&format=pjpg&auto=webp&s=901644c6723af566aaac70667a8a708de9027fc8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a1b9a36be3b5eda88494bbc63f7314cb104f6b25', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bnM5aHUyNGFmaHZlMfXWszA2aBwWyFSKhA5ZAJVyEflwSFKnBokwQQKV24Gs.png?format=pjpg&auto=webp&s=a7a0708b57d3ad486332091a7e2173370e3a72ed', 'width': 1920}, 'variants': {}}]}
How to run Llama 4 fast, even though it's too big to fit in RAM
108
TL;DR: in your llama.cpp command, add: `-ngl 49 --override-tensor "([0-9]+).ffn_.*_exps.=CPU" --ubatch-size 1` Explanation: `-ngl 49` * offload all 49 layers to GPU `--override-tensor "([0-9]+).ffn_.*_exps.=CPU"` * ...except for the MOE weights `--ubatch-size 1` * process the prompt in batches of 1 at a time (instead of the default 512 - otherwise your SSD will be the bottleneck and prompt processing will be slower) This radically speeds up inference by taking advantage of LLama 4's MOE architecture. LLama 4 Maverick has 400 billion total parameters, but only 17 billion active parameters. Some are needed on every token generation, while others are only occasionally used. So if we put the parameters that are always needed onto GPU, those will be processed quickly, and there will just be a small number that need to be handled by the CPU. This works so well that the weights don't even need to all fit in your CPU's RAM - many of them can memory mapped from NVMe. My results with Llama 4 Maverick: * Unsloth's UD-Q4\_K\_XL quant is 227GB * Unsloth's Q8\_0 quant is 397GB Both of those are much bigger than my RAM + VRAM (128GB + 3x24GB). But with these tricks, I get 15 tokens per second with the UD-Q4\_K\_M and 6 tokens per second with the Q8\_0. Full llama.cpp server commands: Note: the `--override-tensor` command is tweaked because I had some extra VRAM available, so I offloaded most of the MOE layers to CPU, but loaded a few onto each GPU. UD-Q4\_K\_XL: `./llama-server -m Llama-4-Maverick-17B-128E-Instruct-UD-Q4_K_XL-00001-of-00005.gguf -ngl 49 -fa -c 16384 --override-tensor "([1][1-9]|[2-9][0-9]).ffn_.*_exps.=CPU,([0-2]).ffn_.*_exps.=CUDA0,([3-6]).ffn_.*_exps.=CUDA1,([7-9]|[1][0]).ffn_.*_exps.=CUDA2" --ubatch-size 1` Q8\_0: `./llama-server -m Llama-4-Maverick-17B-128E-Instruct-Q8_0-00001-of-00009.gguf -ngl 49 -fa -c 16384 --override-tensor "([6-9]|[1-9][0-9]).ffn_.*_exps.=CPU,([0-1]).ffn_.*_exps.=CUDA0,([2-3]).ffn_.*_exps.=CUDA1,([4-5]).ffn_.*_exps.=CUDA2" --ubatch-size 1` Credit goes to the people behind Unsloth for this knowledge. I hadn't seen people talking about this here, so I thought I'd make a post.
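A quick way to sanity-check what those `--override-tensor` regexes actually match is to test them against tensor names directly; a tiny sketch (the tensor names below are made-up examples in the llama.cpp/GGUF naming style, just to exercise the pattern):

```python
# Check which tensor names a llama.cpp --override-tensor pattern would catch.
import re

pattern = r"([0-9]+).ffn_.*_exps."   # the "all MoE expert tensors" rule from the post
examples = [
    "blk.0.ffn_gate_exps.weight",    # MoE expert tensor  -> should match (goes to CPU)
    "blk.30.ffn_down_exps.weight",   # MoE expert tensor  -> should match
    "blk.30.attn_q.weight",          # attention tensor   -> should not match (stays on GPU)
]
for name in examples:
    print(name, "->", "CPU" if re.search(pattern, name) else "GPU")
```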
2025-04-18T00:03:01
https://www.reddit.com/r/LocalLLaMA/comments/1k1rjm1/how_to_run_llama_4_fast_even_though_its_too_big/
Klutzy-Snow8016
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1rjm1
false
null
t3_1k1rjm1
/r/LocalLLaMA/comments/1k1rjm1/how_to_run_llama_4_fast_even_though_its_too_big/
false
false
self
108
null
What are your favorite Tiny models?
1
[removed]
2025-04-18T00:38:47
https://www.reddit.com/r/LocalLLaMA/comments/1k1s8nw/what_are_your_favorite_tiny_models/
Radhe_sito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1s8nw
false
null
t3_1k1s8nw
/r/LocalLLaMA/comments/1k1s8nw/what_are_your_favorite_tiny_models/
false
false
self
1
null
What are your favorite Tiny models?
1
[removed]
2025-04-18T00:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1k1s9g0/what_are_your_favorite_tiny_models/
Radhe_sito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1s9g0
false
null
t3_1k1s9g0
/r/LocalLLaMA/comments/1k1s9g0/what_are_your_favorite_tiny_models/
false
false
self
1
null
Local LLM beginner here - a question about best models to use for my scenario
2
So far I've only briefly dabbled in running LLMs locally: I have Ollama set up and run a couple of versions of the deepseek-r1 model. That's all my background for local LLMs, so I'm curious what would be best for my scenario. I downloaded all of my account's Reddit data, past comments and posts. I want to create some kind of local model that uses the comments as training data and enacts my Reddit persona. What local models or processes would work best for this?
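A minimal sketch of the data-prep step this would need, assuming the Reddit export contains a CSV of comments and the goal is a chat-style JSONL file for a LoRA fine-tune; the file name and column names are assumptions about the export format, not verified:

```python
# Turn exported Reddit comments into chat-format JSONL for fine-tuning.
# Assumes a comments.csv with "body" and (optionally) "parent_body" columns;
# the real Reddit data export may name these differently.
import csv
import json

with open("comments.csv", newline="", encoding="utf-8") as f_in, \
     open("persona_train.jsonl", "w", encoding="utf-8") as f_out:
    for row in csv.DictReader(f_in):
        comment = (row.get("body") or "").strip()
        prompt = (row.get("parent_body") or "Write a Reddit comment.").strip()
        if len(comment) < 20:          # skip trivial comments
            continue
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": comment},
            ]
        }
        f_out.write(json.dumps(record, ensure_ascii=False) + "\n")
```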
2025-04-18T00:40:44
https://www.reddit.com/r/LocalLLaMA/comments/1k1sa1t/local_llm_beginner_here_a_question_about_best/
VaderOnReddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1sa1t
false
null
t3_1k1sa1t
/r/LocalLLaMA/comments/1k1sa1t/local_llm_beginner_here_a_question_about_best/
false
false
self
2
null
What are your favorite Tiny models?
1
[removed]
2025-04-18T00:45:08
https://www.reddit.com/r/LocalLLaMA/comments/1k1sd5x/what_are_your_favorite_tiny_models/
Radhe_sito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1sd5x
false
null
t3_1k1sd5x
/r/LocalLLaMA/comments/1k1sd5x/what_are_your_favorite_tiny_models/
false
false
self
1
null
Which is best LLM for academic writing now? (esp. for non-native English speakers)
1
[removed]
2025-04-18T01:00:39
https://www.reddit.com/r/LocalLLaMA/comments/1k1sny5/which_is_best_llm_for_academic_writing_now_esp/
SpinachAn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1sny5
false
null
t3_1k1sny5
/r/LocalLLaMA/comments/1k1sny5/which_is_best_llm_for_academic_writing_now_esp/
false
false
self
1
null
LMSA - LM Studio Assistant for Android
1
[removed]
2025-04-18T01:13:02
https://lmsa.app
CowMan30
lmsa.app
1970-01-01T00:00:00
0
{}
1k1swjp
false
null
t3_1k1swjp
/r/LocalLLaMA/comments/1k1swjp/lmsa_lm_studio_assistant_for_android/
false
false
https://b.thumbs.redditm…eW62gwoMtztI.jpg
1
{'enabled': False, 'images': [{'id': 'Ks-o3a_WQH9p4rGA6yWXbTAoSyeg-D_HLOJhQqW8ZCA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Y_vCaQDJl5lK4_piimMu9SrLo1vTfhOy7FqxgSPjKDA.jpg?width=108&crop=smart&auto=webp&s=36bd57b9babff558165d6d9095acf73a00f23575', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Y_vCaQDJl5lK4_piimMu9SrLo1vTfhOy7FqxgSPjKDA.jpg?width=216&crop=smart&auto=webp&s=82a25cb976cf0ddfef02561520cfd2c4ea6402dc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Y_vCaQDJl5lK4_piimMu9SrLo1vTfhOy7FqxgSPjKDA.jpg?width=320&crop=smart&auto=webp&s=d83e21f72447e9e1a7e1115751e9f86b496739a9', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/Y_vCaQDJl5lK4_piimMu9SrLo1vTfhOy7FqxgSPjKDA.jpg?auto=webp&s=c1c58057b58cfa032b1461824f937138b821bb3e', 'width': 512}, 'variants': {}}]}
Mac VRAM Allocation taskbar app VRAM Pro
2
I built a tiny macOS utility that does one very specific thing: **It unlocks additional GPU memory on Apple Silicon Macs.**

Why? Because macOS doesn’t give you any control over VRAM — even though you *technically* have the bandwidth. I needed it for smoother performance in:

* Running large LLMs
* Blender and After Effects
* Unity and Unreal previews

So… I made **VRAM Pro**. It’s:

* 🧠 Simple: Just sits in your menubar
* 🔓 Lets you allocate more VRAM (temporarily and safely)
* 🔐 Notarized, signed, Sparkle-updated
* 💸 Totally free (I just built it for fun and needed it)

# 📦 Download: [https://VRAMPro.com](https://VRAMPro.com)

Would love feedback, and happy to tweak it based on use cases! Also — if you’ve got other obscure GPU tricks on macOS, I’d *love* to hear them. Thanks Reddit 🙏

PS: after I made this app someone created an open source copy: [https://github.com/PaulShiLi/Siliv](https://github.com/PaulShiLi/Siliv)
2025-04-18T01:24:53
https://www.reddit.com/gallery/1k1t4pa
DazzlingHedgehog6650
reddit.com
1970-01-01T00:00:00
0
{}
1k1t4pa
false
null
t3_1k1t4pa
/r/LocalLLaMA/comments/1k1t4pa/mac_vram_allocation_taskbar_app_vram_pro/
false
false
https://b.thumbs.redditm…u4zbXPyj-rJE.jpg
2
null
Instantly allocate more graphics memory on your Mac VRAM Pro
38
I built a tiny macOS utility that does one very specific thing: **It unlocks additional GPU memory on Apple Silicon Macs.**

Why? Because macOS doesn’t give you any control over VRAM — and hard caps it, leading to swap issues in certain use cases. I needed it for performance in:

* Running large LLMs
* Blender and After Effects
* Unity and Unreal previews

So… I made **VRAM Pro**. It’s:

* 🧠 Simple: Just sits in your menubar
* 🔓 Lets you allocate more VRAM
* 🔐 Notarized, signed, autoupdates

# 📦 Download: [https://VRAMPro.com](https://vrampro.com/)

Do you need this app? No! You can do this with various commands in terminal. But I wanted a nice and easy GUI way to do this.

Would love feedback, and happy to tweak it based on use cases! Also — if you’ve got other obscure GPU tricks on macOS, I’d *love* to hear them. Thanks Reddit 🙏

PS: after I made this app someone created an open source copy: [https://github.com/PaulShiLi/Siliv](https://github.com/PaulShiLi/Siliv)
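For reference, a sketch of the "various commands in terminal" route the post alludes to, driven from Python; this assumes the `iogpu.wired_limit_mb` sysctl key available on recent macOS releases (older releases reportedly used a different key), and the script needs to run with sudo:

```python
# Sketch: raise the Apple Silicon GPU wired-memory limit via sysctl.
# Assumes the Sonoma-era `iogpu.wired_limit_mb` key; run the script with sudo.
# Setting the value to 0 restores the default. This is the kind of knob that
# GUI tools such as VRAM Pro wrap.
import subprocess

def set_vram_limit_mb(limit_mb: int) -> None:
    subprocess.run(["sysctl", f"iogpu.wired_limit_mb={limit_mb}"], check=True)

def current_vram_limit() -> str:
    out = subprocess.run(
        ["sysctl", "-n", "iogpu.wired_limit_mb"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print("current limit (MB):", current_vram_limit())
    # set_vram_limit_mb(96_000)   # e.g. allow ~96 GB on a 128 GB machine
```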
2025-04-18T01:38:15
https://www.reddit.com/gallery/1k1tdpa
DazzlingHedgehog6650
reddit.com
1970-01-01T00:00:00
0
{}
1k1tdpa
false
null
t3_1k1tdpa
/r/LocalLLaMA/comments/1k1tdpa/instantly_allocate_more_graphics_memory_on_your/
false
false
https://b.thumbs.redditm…ebX7qx7INGsU.jpg
38
null
How to pick when to use which?
0
With so many models releasing per week, is there any single interface other than Ollama (one that also covers paid models) that you use to decide when to use which, showing benchmarks, the type of data each is suited to, etc.?
2025-04-18T01:51:09
https://www.reddit.com/r/LocalLLaMA/comments/1k1tmay/how_to_pick_when_to_use_which/
semicolon-10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1tmay
false
null
t3_1k1tmay
/r/LocalLLaMA/comments/1k1tmay/how_to_pick_when_to_use_which/
false
false
self
0
null
Ollama version history, change listings, backwards-compatibility
0
Apologies for the probably dumb question, but where on GitHub does one look to get a detailed description of what has changed from one version of Ollama to the next? I ask because I went to load a new model and received the message "*The model you are attempting to pull requires a newer version of Ollama*." I am hesitant to change anything that might break the model setups I have running now, which are working perfectly, but I also want to feel more confident adding new models going forward, and to see whether Ollama has added any functionality that I had manually coded for and may no longer need... Thanks for your direction & guidance.
2025-04-18T01:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1k1tq2b/ollama_version_history_change_listings/
DelosBoard2052
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1tq2b
false
null
t3_1k1tq2b
/r/LocalLLaMA/comments/1k1tq2b/ollama_version_history_change_listings/
false
false
self
0
null
I cannot seem to be able to use the YuE Music Ai in Pinokio
1
[removed]
2025-04-18T01:59:04
https://www.reddit.com/r/LocalLLaMA/comments/1k1trl7/i_cannot_seem_to_be_able_to_use_the_yue_music_ai/
Select_Rutabaga7384
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1trl7
false
null
t3_1k1trl7
/r/LocalLLaMA/comments/1k1trl7/i_cannot_seem_to_be_able_to_use_the_yue_music_ai/
false
false
self
1
null
New to all of this. Any point in adding a second card with 1GB GDDR5? I have an RTX 4080 and I'm planning to buy more DDR5.
1
[removed]
2025-04-18T02:21:51
https://www.reddit.com/r/LocalLLaMA/comments/1k1u6r7/new_to_all_of_this_any_point_to_adding_a_second/
17usc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1u6r7
false
null
t3_1k1u6r7
/r/LocalLLaMA/comments/1k1u6r7/new_to_all_of_this_any_point_to_adding_a_second/
false
false
self
1
null
Attempting to Solve the Cross-Platform AI Billing Challenge as a Solo Engineer/Founder - Need Feedback
1
[removed]
2025-04-18T02:30:19
https://www.reddit.com/r/LocalLLaMA/comments/1k1uchw/attempting_to_solve_the_crossplatform_ai_billing/
Frozen-Insightful-22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1uchw
false
null
t3_1k1uchw
/r/LocalLLaMA/comments/1k1uchw/attempting_to_solve_the_crossplatform_ai_billing/
false
false
self
1
null
It's been hours already without a new open source SOTA being released. Are open source LLMs dead? Is this it???
337
RIP LLMS!
2025-04-18T02:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1k1ujk4/its_been_hours_already_without_a_new_open_source/
DamiaHeavyIndustries
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1ujk4
false
null
t3_1k1ujk4
/r/LocalLLaMA/comments/1k1ujk4/its_been_hours_already_without_a_new_open_source/
false
false
self
337
null
Multilingual pretraining datasets
3
I’m planning to continually pretrain multilingual models and would love to know which multilingual pretraining datasets are available on Hugging Face. Can anyone share suggestions or links to datasets that cover multiple languages? Thanks in advance!
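A minimal sketch of how such corpora are usually pulled for continual pretraining, using the `datasets` library in streaming mode; the dataset id and config names below are common examples rather than a vetted list, so availability and licensing should be checked on the Hub:

```python
# Stream a multilingual pretraining corpus from the Hugging Face Hub without
# downloading it fully. Dataset ids/configs are examples only; verify on the Hub.
from datasets import load_dataset, interleave_datasets

# e.g. per-language Wikipedia dumps (small, clean) mixed together
langs = ["de", "fr", "ja"]
streams = [
    load_dataset("wikimedia/wikipedia", f"20231101.{lang}",
                 split="train", streaming=True)
    for lang in langs
]

mixed = interleave_datasets(streams, probabilities=[0.4, 0.4, 0.2], seed=42)

for i, example in enumerate(mixed):
    print(example["text"][:80].replace("\n", " "))
    if i >= 2:
        break
```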
2025-04-18T02:56:20
https://www.reddit.com/r/LocalLLaMA/comments/1k1utq4/multilingual_pretraining_datasets/
MarySmith2021
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1utq4
false
null
t3_1k1utq4
/r/LocalLLaMA/comments/1k1utq4/multilingual_pretraining_datasets/
false
false
self
3
null
Embedding Model Recommended for RAG
1
[removed]
2025-04-18T03:16:43
https://www.reddit.com/r/LocalLLaMA/comments/1k1v6pt/embedding_model_recommended_for_rag/
rich188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1v6pt
false
null
t3_1k1v6pt
/r/LocalLLaMA/comments/1k1v6pt/embedding_model_recommended_for_rag/
false
false
self
1
null
RAG/Embedding Model Suggestion
1
[removed]
2025-04-18T03:17:28
https://www.reddit.com/r/LocalLLaMA/comments/1k1v78j/ragembedding_model_suggestion/
rich188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1v78j
false
null
t3_1k1v78j
/r/LocalLLaMA/comments/1k1v78j/ragembedding_model_suggestion/
false
false
self
1
null
CSM 1B is real-time now and has fine-tuning
187
[https://github.com/davidbrowne17/csm-streaming](https://github.com/davidbrowne17/csm-streaming) Not sure if many of you have been following this model, but the open-source community has managed to reach real-time streaming and figured out fine-tuning. This is my repo with fine-tuning and a chat demo; my version of fine-tuning is LoRA, but full fine-tuning is out there as well. Give it a try and let me know how it compares to other TTS models.
2025-04-18T03:21:15
https://www.reddit.com/r/LocalLLaMA/comments/1k1v9rq/csm_1b_is_realtime_now_and_has_finetuning/
SovietWarBear17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1v9rq
false
null
t3_1k1v9rq
/r/LocalLLaMA/comments/1k1v9rq/csm_1b_is_realtime_now_and_has_finetuning/
false
false
self
187
{'enabled': False, 'images': [{'id': 'LCyVaSFQLCAx8kjuQVBBo2_vWEBDGqxRFTFz6tdmxj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=108&crop=smart&auto=webp&s=fbad08d22911d2dcef13553751375e098f952a11', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=216&crop=smart&auto=webp&s=105283a2792f4c0a5c006516982fcf305cf43eed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=320&crop=smart&auto=webp&s=fbb333f746520b10eb9b884ea292a4fa64404e2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=640&crop=smart&auto=webp&s=e182954dc37a82f21efb78c50e0587540c5e540e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=960&crop=smart&auto=webp&s=450554a908fdc33047345d839733e757899ef4bd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?width=1080&crop=smart&auto=webp&s=cc8a72e3ce7da72cabe25496acea498bfb597abe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gb2LuERauBVYmlwqvVwLQOzt-W6J1LSgxmnCHRm7KhI.jpg?auto=webp&s=4550789df7ae5f9b892fc8f1af8979d634890ae8', 'width': 1200}, 'variants': {}}]}
Best 7b-14b models for roleplaying?
7
What are some of the best uncensored models to run with 12gb of VRAM that work good for roleplaying?
2025-04-18T03:44:18
https://www.reddit.com/r/LocalLLaMA/comments/1k1vobv/best_7b14b_models_for_roleplaying/
AsDaylight_Dies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1vobv
false
null
t3_1k1vobv
/r/LocalLLaMA/comments/1k1vobv/best_7b14b_models_for_roleplaying/
false
false
self
7
null
No API keys, no cloud. Just local AI + tools that actually work. Too much to ask?
129
It's been about a month since we first posted Clara here. Clara is a local-first AI assistant - think of it like ChatGPT, but fully private and running on your own machine using Ollama.

Since the initial release, I've had a small group of users try it out, and I've pushed several updates based on real usage and feedback.

The biggest update is that Clara now comes with n8n built-in. That means you can now build and run your own tools directly inside the assistant - no setup needed, no external services. Just open Clara and start automating.

With the n8n integration, Clara can now do more than chat. You can use it to:

• Check your emails
• Manage your calendar
• Call APIs
• Run scheduled tasks
• Process webhooks
• Connect to databases
• And anything else you can wire up using n8n's visual flow builder

The assistant can trigger these workflows directly - so you can talk to Clara and ask it to do real tasks, using tools that run entirely on your device. Everything happens locally. No data goes out, no accounts, no cloud dependency.

If you're someone who wants full control of your AI and automation setup, this might be something worth trying.

You can check out the project here: GitHub: https://github.com/badboysm890/ClaraVerse

Thanks to everyone who's been trying it and sending feedback. Still improving things - more updates soon.

Note: I'm aware of great projects like OpenWebUI and LibreChat. Clara takes a slightly different approach - focusing on reducing dependencies, offering a native desktop app, and making the overall experience more user-friendly so that more people can easily get started with local AI.
2025-04-18T03:56:37
https://www.reddit.com/r/LocalLLaMA/comments/1k1vvy3/no_api_keys_no_cloud_just_local_al_tools_that/
aruntemme
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1vvy3
false
null
t3_1k1vvy3
/r/LocalLLaMA/comments/1k1vvy3/no_api_keys_no_cloud_just_local_al_tools_that/
false
false
self
129
{'enabled': False, 'images': [{'id': 'p7ljLQU7bp3E2GIpmDZ67UksN7JuGQCQ2apWw-NPzRQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=108&crop=smart&auto=webp&s=27d0b23a9a4d3110c37c9c8f8cfab6e328d6db86', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=216&crop=smart&auto=webp&s=152edc76f543e5729d3c491d2d4542b4b499b152', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=320&crop=smart&auto=webp&s=1ddd9ee0afc6041982c40c39b0c35686aea07128', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=640&crop=smart&auto=webp&s=0c582936f7ae5dcb6e9ed0a553a27bd35295a65c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=960&crop=smart&auto=webp&s=8ab983dc6b66ad6ab70d3d8a84ae52e5bb920348', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?width=1080&crop=smart&auto=webp&s=03928a5fb24857a64df8bcd05443e8b10f3f344b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Bv5S_BWRPsbNpVHFb4PUPcV-pnozuaMHiCZ4G3Q8ICA.jpg?auto=webp&s=262e747d7295bc13965d3d5a35f04ebad6c7d3a0', 'width': 1200}, 'variants': {}}]}
If you had a time machine and went back 10 years into the past, armed only with your laptop with some local AI on it, how could you use it to make money?
0
One easy technique would be to write articles as a freelancer.
2025-04-18T05:24:56
https://www.reddit.com/r/LocalLLaMA/comments/1k1xc05/if_you_had_a_time_machine_and_went_back_10_years/
ImaginaryRea1ity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1xc05
false
null
t3_1k1xc05
/r/LocalLLaMA/comments/1k1xc05/if_you_had_a_time_machine_and_went_back_10_years/
false
false
self
0
null
vLLM with transformers backend
56
You can try out the new integration, with which you can run ANY transformers model with vLLM (even if it is not natively supported by vLLM). Read more about it here: [https://blog.vllm.ai/2025/04/11/transformers-backend.html](https://blog.vllm.ai/2025/04/11/transformers-backend.html)

What can one do with this:

1. Read the blog 😌
2. Contribute to transformers - making models vLLM compatible
3. Raise issues if you spot a bug with the integration

Vision Language Model support is coming very soon! Until any further announcements, we would love for everyone to stick to using this integration with text-only models 🤗
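A minimal sketch of what using the backend looks like, based on the `model_impl="transformers"` switch described in the linked blog post; the model id is a placeholder, and the exact argument spelling should be checked against the blog and your vLLM version:

```python
# Sketch: ask vLLM to serve a model through its Transformers fallback backend.
# Model name is a placeholder; `model_impl="transformers"` follows the linked blog.
from vllm import LLM, SamplingParams

llm = LLM(
    model="my-org/my-custom-transformers-model",  # placeholder HF repo id
    model_impl="transformers",                    # use the transformers backend
)

params = SamplingParams(max_tokens=64, temperature=0.7)
outputs = llm.generate(["Explain KV caching in one sentence."], params)
print(outputs[0].outputs[0].text)
```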
2025-04-18T05:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1k1xgco/vllm_with_transformers_backend/
Disastrous-Work-1632
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1xgco
false
null
t3_1k1xgco
/r/LocalLLaMA/comments/1k1xgco/vllm_with_transformers_backend/
false
false
self
56
{'enabled': False, 'images': [{'id': '-oF_CESVxWGYmnXEeT_RMEAFXNIW4qe8IPBbPGyqdrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=108&crop=smart&auto=webp&s=9d0de14b8d3744134afab6fa341a4bbef35e5d28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=216&crop=smart&auto=webp&s=91e2fc49d9a6eb3974fbf79b92a937b4283f07d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=320&crop=smart&auto=webp&s=ac463ad0def021662f135cfe8e31edbc2c0e9350', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=640&crop=smart&auto=webp&s=f1c5f3722aee60cc6578ffa6925b2883590d7cec', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=960&crop=smart&auto=webp&s=26fd2013f38f300801487544dba88de4cbab2c23', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?width=1080&crop=smart&auto=webp&s=6053880162669a44009fd67cdb89349b45d60130', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/wgHpfpaqH4e5VLCswLxIoCAioXF-UEidtLP_BvomJ_U.jpg?auto=webp&s=513edc635fac5581cfbe8d8af6f8d7fdd50b1ece', 'width': 1300}, 'variants': {}}]}
Gemma3 VS Qwen2.5 math.
5
I manually did the math to calculate the highest voltage a solar panel in my array can be using temperature coefficient. 46.50 + (1-21-45) x (-.31 x 46.50/100) the correct answer is 55.87 I wanted to make sure my math was right so I asked. \### query 46.50 + (1-21-45) x (-.31 x 46.50/100) use two decimal points. \### gemma3:4b-it-q4\_K\_M Let's break down the calculation step by step: 1. **(1 - 21 - 45)**: 1 - 21 - 45 = -65 2. **(-.31 x 46.50 / 100)**: -.31 \* 46.50 = -14.415 -14.415 / 100 = -0.14415 3. **(1 - 21 - 45) x (-.31 x 46.50 / 100)**: -65 \* (-0.14415) = 9.25325 4. **46.50 + (1 - 21 - 45) x (-.31 x 46.50 / 100)**: 46.50 + 9.25325 = 55.75325 Rounded to two decimal places, the result is 55.75. Therefore, the answer is **55.75** \### gemma3:4b-it-q8\_0 Okay, let's break down this calculation step by step: 1. **Calculate the parentheses first:** (1 - 21 - 45) = -65 2. **Calculate the multiplication within the parentheses:** \-0.31 \* 46.50 = 14.565 3. **Divide by 100:** 14.565 / 100 = 0.14565 4. **Multiply by -1:** \-0.14565 \* -1 = 0.14565 5. **Add the entire expression:** 46.50 + 0.14565 = 46.64565 6. **Round to two decimal places:** 46.64565 rounds to 46.65 Therefore, 46.50 + (1-21-45) x (-.31 x 46.50 /100) = **46.65** \### gemma3:12b-it-q4\_K\_M Okay, let's break this down step by step following the order of operations (PEMDAS/BODMAS): 1. **Parentheses/Brackets first:** (1-21-45) * 1 - 21 = -20 * \-20 - 45 = -65 1. **Inside the remaining parentheses:** (-.31 x 46.50) * \-0.31 x 46.50 = -14.385 1. **Division:** \-14.385 / 100 * \-14.385 / 100 = -0.14385 1. **Multiplication:** \-65 x -0.14385 * \-65 \* -0.14385 = 9.34925 2. **Addition:** 46.50 + 9.34925 * 46.50 + 9.34925 = 55.84925 1. **Round to two decimal places:** 55.84925 rounds to **55.85** Therefore, the answer is **55.85** \### gemma3:12b-it-q8\_0 Okay, let's break this down step-by-step, following the order of operations (PEMDAS/BODMAS): 1. **Parentheses/Brackets:** * Inside the first set of parentheses: 1 - 21 - 45 = -65 * Inside the second set of parentheses: -.31 x 46.50 = -14.385 * Then, divide the result of the second set of parentheses by 100: -14.385 / 100 = -0.14385 2. **Multiplication:** * (-65) x (-0.14385) = 9.35025 3. **Addition:** * 46.50 + 9.35025 = 55.85025 4. **Round to two decimal places:** * 55.85 **Therefore, the answer is 55.85** \### gemma3:27b-it-q4\_K\_M Here's the calculation with two decimal points: 1. **Parentheses first:** (1 - 21 - 45) = -65 2. **Multiplication within parentheses:** \-0.31 \* 46.50 = -14.415 3. **Division:** \-14.415 / 100 = -0.14415 4. **Multiplication:** \-65 \* -0.14415 = 9.36975 5. **Addition:** 46.50 + 9.36975 = 55.86975 Rounding to two decimal places, the answer is **55.87**. <<<--- Bingo! \### qwen2.5:3b-instruct-q4\_K\_MToday at 1:09 AM To solve the expression 46.50+(1−21−45)×(−.31×46.50/100), we need to follow the order of operations, which is Parentheses, Exponents, Multiplication and Division (from left to right), Addition and Subtraction (from left to right). First, let's simplify inside the parentheses: 1. Calculate (1−21−45):1−21=−20−20−45=−65 So, (1−21−45)=−65. Next, calculate the expression inside the second set of parentheses: 1. Calculate −.31×46.50/100:−.31×46.50=−14.415−14.415/100=−0.14415 Now, multiply this result by the value inside the first set of parentheses: 1. Multiply −65 by −0.14415:−65×−.14415=9.48975 Finally, add this to 46.50: 1. Add the results:46.50+9.48975=55.98975 Rounding to two decimal points, we get 55.99. So, the final answer is **55.99**. 
\### qwen2.5:3b-instruct-q8\_0 To solve the expression 46.50+(1−21−45)×(−0.31×46.50/100), let's break it down step by step. First, calculate inside the parentheses: 1−21−45=−65 Next, calculate the expression within the second set of parentheses: −0.31×46.50/100 To do this: −0.31×46.50=−14.415 Then divide by 100: −14.415/100=−0.14415 Now, multiply the result from above by −65: −65×(−0.14415)=9.36725 Finally, add this to 46.50: 46.50+9.36725=55.86725 Rounded to two decimal points, the result is: 55.87 <<<-- Bingo, wait a minute, what? Really? 3B? \################################################## it took Gemma3-27B to get this correct and Qwen2.5 did it with a 3B model? Really? What in the heck is going on here? I have always used qwen2.5 but when Gemma 3 arrived, I decided to take it for a spin and used it for while and for a lot of stuff it is pretty good but qwen2.5 destroyed Gemma 3 here. Has anyone seen similar? I know LLM's are known to be bad a math but this kind of puts Qwen2.5 on top of my list as it seems to be great at pretty much everything I have thrown at it. Anything better than Qwen2.5 at a reasonable size like under 32B? Or is Qwen2.5 still king of the hill at 32B or below?
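The arithmetic in the prompt is easy to verify directly; a one-liner reproducing the hand calculation (and the 55.87 answer that Gemma3-27B and Qwen2.5-3B-q8_0 reached):

```python
# Verify the solar-panel max-voltage formula from the post.
result = 46.50 + (1 - 21 - 45) * (-.31 * 46.50 / 100)
print(round(result, 2))  # 55.87
```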
2025-04-18T05:39:10
https://www.reddit.com/r/LocalLLaMA/comments/1k1xjsu/gemma3_vs_qwen25_math/
Boricua-vet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1xjsu
false
null
t3_1k1xjsu
/r/LocalLLaMA/comments/1k1xjsu/gemma3_vs_qwen25_math/
false
false
self
5
null
I want to know if it's possible to run a Llama model on an old CPU.
2
I'm new to using Llama and I'd like to know if there are super lightweight models that can run on weak systems. The system spec in question: Intel(R) Pentium(R) Silver N6005 @ 2.00GHz, 1997 MHz, 4 Core(s), 4 Logical Processor(s), with 16 GB RAM.
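It should be possible on a CPU like that with a small quantized model; a minimal sketch using llama-cpp-python, where the GGUF file name is a placeholder for whatever small model gets downloaded (e.g. a 0.5B-1B parameter Q4 quant):

```python
# Sketch: run a small quantized GGUF model on a 4-core CPU with llama-cpp-python.
# The model path is a placeholder; any ~0.5B-1B parameter Q4 quant should fit
# comfortably in 16 GB of RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-0.5b-instruct-q4_k_m.gguf",  # placeholder file
    n_ctx=2048,
    n_threads=4,   # match the 4 physical cores
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in five words."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```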
2025-04-18T05:53:47
https://www.reddit.com/r/LocalLLaMA/comments/1k1xrni/i_want_to_know_if_its_possible_to_run_a_llama/
Hoshino_Ruby
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1xrni
false
null
t3_1k1xrni
/r/LocalLLaMA/comments/1k1xrni/i_want_to_know_if_its_possible_to_run_a_llama/
false
false
self
2
null
Where is the promised open Grok 2?
210
As far as I know, Grok 2 was supposed to be open-sourced some time after Grok 3's release. But I'm afraid that by the time they decide to open-source Grok 2, it will already be completely obsolete. This is because even now, it significantly lags behind in performance compared to the likes of DeepSeek V3, and we also have Qwen 3 and Llama 4 Reasoning on the horizon (not to mention a potential open model from OpenAI). I believe that when they eventually decide to release it to the community, it will be of no use to anyone anymore, much like what happened with Grok 1. What are your thoughts on this?
2025-04-18T06:01:19
https://www.reddit.com/r/LocalLLaMA/comments/1k1xvvr/where_is_the_promised_open_grok_2/
AlexBefest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1xvvr
false
null
t3_1k1xvvr
/r/LocalLLaMA/comments/1k1xvvr/where_is_the_promised_open_grok_2/
false
false
self
210
null
Stop chasing agent frameworks. A mental model on separating out the low-level from high-level logic.
1
[removed]
2025-04-18T06:20:49
https://www.reddit.com/r/LocalLLaMA/comments/1k1y68l/stop_chasing_agent_frameworks_a_mental_model_on/
AdditionalWeb107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1y68l
false
null
t3_1k1y68l
/r/LocalLLaMA/comments/1k1y68l/stop_chasing_agent_frameworks_a_mental_model_on/
false
false
self
1
null
Stable Diffusion Now Optimized for AMD Radeon™ GPUs and Ryzen™ AI APUs — Stability AI
1
2025-04-18T06:48:08
https://stability.ai/news/stable-diffusion-now-optimized-for-amd-radeon-gpus
Primary-Wear-2460
stability.ai
1970-01-01T00:00:00
0
{}
1k1ykre
false
null
t3_1k1ykre
/r/LocalLLaMA/comments/1k1ykre/stable_diffusion_now_optimized_for_amd_radeon/
false
false
https://a.thumbs.redditm…WOKK3YGK7jz4.jpg
1
{'enabled': False, 'images': [{'id': 'IkXeNjBdluDhw8S_2I3zEGHfDGKXWv8sPJN2Y6jnPfY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=108&crop=smart&auto=webp&s=c08562bc27ea3070fe8d0302eec103ec4a9de2d8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=216&crop=smart&auto=webp&s=a0a01cae33eaa90058e922ee224f933800f68082', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=320&crop=smart&auto=webp&s=36ec437494a12929b0349d032acf6bff108268a9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=640&crop=smart&auto=webp&s=597042a4c6f29c1d0879772ae9a3b8a06fabac7d', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=960&crop=smart&auto=webp&s=f06e66a2f74c10a6fdd5da7689ffe5fe53a0da04', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?width=1080&crop=smart&auto=webp&s=ee677a347eb9b6bae482678bca615fddebb850c1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/gHhwm7yG7wuXqGlE606RTVZfbDX9-mM6cPLepfpTuLQ.jpg?auto=webp&s=7cd4364774532dd9bde479583f33aa9856041448', 'width': 1080}, 'variants': {}}]}
Analyzing Technical Document Images with Janus-Pro 1B
1
I'm currently testing Janus-Pro for image analysis of technical documents, using the app from this GitHub repo: [`https://github.com/deepseek-ai/Janus`](https://github.com/deepseek-ai/Janus). I'm running it locally on a system with an Nvidia P4000 GPU (8GB VRAM), and I've switched the model from 7B to 1B to ensure it works on this hardware. While it runs, the output tends to get cut off, and a lot of critical information is missing. Here's the image I'm using for input: [Janus Pro Plot and Graph](https://raw.githubusercontent.com/deepseek-ai/Janus/refs/heads/main/images/teaser_januspro.png) Has anyone had better luck with Janus-Pro 1B? Were you able to get more complete or accurate outputs?
2025-04-18T06:52:35
https://www.reddit.com/r/LocalLLaMA/comments/1k1yn4v/analyzing_technical_document_images_with_januspro/
kerkerby
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1yn4v
false
null
t3_1k1yn4v
/r/LocalLLaMA/comments/1k1yn4v/analyzing_technical_document_images_with_januspro/
false
false
self
1
{'enabled': False, 'images': [{'id': 'REu6cSXov90PNj67lx_19oEVywExCT5SE8dCCqUm8Kw', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/7ZKZnoN8pJGulObWEYmKvylqUzjTj6bycuzfSUjxXgQ.png?width=108&crop=smart&auto=webp&s=839aaf59e78b46467446340f7e110cc837114b52', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/7ZKZnoN8pJGulObWEYmKvylqUzjTj6bycuzfSUjxXgQ.png?width=216&crop=smart&auto=webp&s=88995c93e92fcde5c1fe2cefe334b54904f4b55a', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/7ZKZnoN8pJGulObWEYmKvylqUzjTj6bycuzfSUjxXgQ.png?width=320&crop=smart&auto=webp&s=dbd7e543c35732ab5cd53b314fcf27df6cfed8b9', 'width': 320}, {'height': 284, 'url': 'https://external-preview.redd.it/7ZKZnoN8pJGulObWEYmKvylqUzjTj6bycuzfSUjxXgQ.png?width=640&crop=smart&auto=webp&s=471f546ba64d0a7386f8f67eb7552116cf9f7506', 'width': 640}], 'source': {'height': 407, 'url': 'https://external-preview.redd.it/7ZKZnoN8pJGulObWEYmKvylqUzjTj6bycuzfSUjxXgQ.png?auto=webp&s=83bd41f89d61ef70af2b19c55b63b4109316dcb8', 'width': 916}, 'variants': {}}]}
what is this
1
2025-04-18T06:59:24
https://i.redd.it/2vhlgw4okjve1.jpeg
Yareyaredaze880
i.redd.it
1970-01-01T00:00:00
0
{}
1k1yql6
false
null
t3_1k1yql6
/r/LocalLLaMA/comments/1k1yql6/what_is_this/
false
false
https://b.thumbs.redditm…nFuuoo8faYKA.jpg
1
{'enabled': True, 'images': [{'id': 'lJk4PW_0yUjjQsXSE7h_wrTOd4QTDkpIlsD59jvWk84', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=108&crop=smart&auto=webp&s=8c873d44ae5220bc09b4b6cf1e7daf69421ffd00', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=216&crop=smart&auto=webp&s=87e647c7be70117e924118c055b28be52a7eb4b9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=320&crop=smart&auto=webp&s=f2d6764d5e7b94a051596bf381fc9f038eac5518', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=640&crop=smart&auto=webp&s=4bb3ef2b24ef761938b10b8a302524cf06520422', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=960&crop=smart&auto=webp&s=79cb14d71eb08d13d42a49539b393b7d7ec1d44b', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?width=1080&crop=smart&auto=webp&s=58b12793572b513cb0cd912b60d6a4aa79ab49a5', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/2vhlgw4okjve1.jpeg?auto=webp&s=144fc873bc661e3c380b5268b9b642bc269aafd1', 'width': 1080}, 'variants': {}}]}
New toy is here! A free general purpose, autonomous online AI agent!
2
I'm building a general-purpose, autonomous online AI agent application ([kragent.ai](https://kragent.ai)) (yes, it's similar to Manus, but this project started long before Manus appeared). The core agent is already online, and I'm adding more tools to it, including better MCP support. It's now open for free public beta testing. Hope everyone enjoys it. Note: to keep the costs of free operation under control, I've chosen the low-cost Qwen model. If you want to try it at full power, consider setting up your own Claude API key (don't bother with OpenAI's models; they're particularly lazy when it comes to unofficial function-calling mechanisms).
2025-04-18T07:39:36
https://www.reddit.com/r/LocalLLaMA/comments/1k1zas5/new_toy_is_here_a_free_general_purpose_autonomous/
Steven_Lu_137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zas5
false
null
t3_1k1zas5
/r/LocalLLaMA/comments/1k1zas5/new_toy_is_here_a_free_general_purpose_autonomous/
false
false
self
2
null
What’s the most impressive local AI demo?
3
Imagine you have a group of nerdy and relatively tech-savvy friends ask you to show off some cool AI demo. What would you show them to impress them and blow their mind?
2025-04-18T07:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1k1zffp/whats_the_most_impressive_local_ai_demo/
lakySK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zffp
false
null
t3_1k1zffp
/r/LocalLLaMA/comments/1k1zffp/whats_the_most_impressive_local_ai_demo/
false
false
self
3
null
Best vibe coding agent/tool?
0
I am really confused about which tools have the best performance. There are just too many of them: Cursor, Trae, Windsurf, Copilot, claude-code (CLI), dozens of other agents on the [swebench.com](http://swebench.com) leaderboards, and now OpenAI has launched Codex CLI. Code quality doesn't depend only on the LLM model; it's also hugely affected by the environment/agent the model is used in. I have been using Trae for a long time since it gave top models for free, but now I frequently run into rate limits. Also, Copilot is limited for free users even if I bring my own API key, which I am super pissed about. Is there any leaderboard that ranks all of them? Or if anyone has tested them all rigorously, please shed some light.
2025-04-18T07:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1k1zg5t/best_vibe_coding_agenttool/
Hv_V
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zg5t
false
null
t3_1k1zg5t
/r/LocalLLaMA/comments/1k1zg5t/best_vibe_coding_agenttool/
false
false
self
0
null
True local ChatGPT equivalent possible?
1
[removed]
2025-04-18T07:51:34
https://www.reddit.com/r/LocalLLaMA/comments/1k1zgqd/true_local_chatgpt_equivalent_possible/
Existing_Shoe_5594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zgqd
false
null
t3_1k1zgqd
/r/LocalLLaMA/comments/1k1zgqd/true_local_chatgpt_equivalent_possible/
false
false
self
1
null
Just wondering about running llama 70b setup.
1
[removed]
2025-04-18T07:53:08
https://www.reddit.com/r/LocalLLaMA/comments/1k1zhj5/just_wondering_about_running_llama_70b_setup/
Appropriate-Tart-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zhj5
false
null
t3_1k1zhj5
/r/LocalLLaMA/comments/1k1zhj5/just_wondering_about_running_llama_70b_setup/
false
false
self
1
null
Just wondering about running Llama 70b.
1
[removed]
2025-04-18T07:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1k1zi6p/just_wondering_about_running_llama_70b/
Appropriate-Tart-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zi6p
false
null
t3_1k1zi6p
/r/LocalLLaMA/comments/1k1zi6p/just_wondering_about_running_llama_70b/
true
false
spoiler
1
null
LLMs around 70B: suitable GPU combination recommendations?
1
[removed]
2025-04-18T07:56:29
https://www.reddit.com/r/LocalLLaMA/comments/1k1zj5t/llms_about_70b_suitable_gpus_combination/
Appropriate-Tart-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zj5t
false
null
t3_1k1zj5t
/r/LocalLLaMA/comments/1k1zj5t/llms_about_70b_suitable_gpus_combination/
false
false
self
1
null
GPUs suitable for 70B LLMs: combination recommendations?
1
[removed]
2025-04-18T07:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1k1zk1u/llms_gpus_suitable_for_70b_combination/
Appropriate-Tart-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zk1u
false
null
t3_1k1zk1u
/r/LocalLLaMA/comments/1k1zk1u/llms_gpus_suitable_for_70b_combination/
false
false
self
1
null
GPU combinations suitable for 70B LLMs: recommendations?
1
[removed]
2025-04-18T07:59:47
https://www.reddit.com/r/LocalLLaMA/comments/1k1zkq6/llms_gpus_combination_suitable_for_70bs/
Appropriate-Tart-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k1zkq6
false
null
t3_1k1zkq6
/r/LocalLLaMA/comments/1k1zkq6/llms_gpus_combination_suitable_for_70bs/
false
false
self
1
null
OpenAI API Codex connector
2
OpenAI has released their version of a coding assistant as open source. No major model library supports their Responses API yet, so they can't work with it. I wrote [a wrapper that makes any OpenAI-compatible library work with it](https://github.com/OriNachum/openai-to-codex-wrapper), and verified it works (in the image you can see Mistral on Ollama). It is still missing some features, but I would appreciate your support in stars, issues, suggestions, and even pull requests if you are so inclined. I want to support the stateful features the other libraries don't want to support and that are needed for Codex (and more). I verified it works in my main repo, in my [demo AI assistant that can hear, think and speak](https://github.com/OriNachum/autonomous-intelligence/tree/main/baby-tau), with the docker-compose-codex.yaml. Thank you for reading, and for your support if you are willing!
2025-04-18T08:22:26
https://i.redd.it/1o9uknihzjve1.jpeg
Original_Finding2212
i.redd.it
1970-01-01T00:00:00
0
{}
1k1zvym
false
null
t3_1k1zvym
/r/LocalLLaMA/comments/1k1zvym/openai_api_codex_connector/
false
false
https://b.thumbs.redditm…Avb_8siwrnKI.jpg
2
{'enabled': True, 'images': [{'id': 'oVGHFRK0QTsbdtfmYKbruVRF_KMxdU7VEVt0uwfa4BM', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?width=108&crop=smart&auto=webp&s=90a80822975df03af34379ed55c2514a46e6ffd9', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?width=216&crop=smart&auto=webp&s=a65d47aa54e9aa1894c9d4e8d500f9d42707f7c5', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?width=320&crop=smart&auto=webp&s=e463ed21ca7f1f586ce11c175d77d08bf585256f', 'width': 320}, {'height': 487, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?width=640&crop=smart&auto=webp&s=3b0a7d22bbf9047c3508aa582e5c2db28cf88dc5', 'width': 640}, {'height': 730, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?width=960&crop=smart&auto=webp&s=5b00be5742670af675f2ca15552dc04db4093b41', 'width': 960}], 'source': {'height': 819, 'url': 'https://preview.redd.it/1o9uknihzjve1.jpeg?auto=webp&s=c49d50c62839458965e0166289a7b7f58ce3cb14', 'width': 1076}, 'variants': {}}]}
Grok is cheaper & better than DeepSeek
0
Source: ArtificialAnalysis
2025-04-18T08:24:48
https://i.redd.it/es97wnytzjve1.jpeg
BidHot8598
i.redd.it
1970-01-01T00:00:00
0
{}
1k1zx3j
false
null
t3_1k1zx3j
/r/LocalLLaMA/comments/1k1zx3j/grok_is_cheaper_better_than_deepseek/
false
false
https://a.thumbs.redditm…zKUEqt6mtwU4.jpg
0
{'enabled': True, 'images': [{'id': 'G4oZHzySJk989owvp7oqAciPdU2PjF9kSg8UVzK6pyw', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/es97wnytzjve1.jpeg?width=108&crop=smart&auto=webp&s=315ebb93a8662b092c3de67cb91f3391925bff64', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/es97wnytzjve1.jpeg?width=216&crop=smart&auto=webp&s=21ca5d7c89dfaa28da3b1a246b07d0fe80fde8eb', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/es97wnytzjve1.jpeg?width=320&crop=smart&auto=webp&s=40525d54db21444bac40e6592f972194824b5c0b', 'width': 320}, {'height': 769, 'url': 'https://preview.redd.it/es97wnytzjve1.jpeg?width=640&crop=smart&auto=webp&s=777151dcb7e0e6ae384b2140bc88d07ece28d2f2', 'width': 640}], 'source': {'height': 915, 'url': 'https://preview.redd.it/es97wnytzjve1.jpeg?auto=webp&s=602a71cc9e369b5d934be2a82d080ecc5d5a1e69', 'width': 761}, 'variants': {}}]}
Best open source models ?
5
What are your top open-source models, and why? No size restrictions.
2025-04-18T08:53:16
https://www.reddit.com/r/LocalLLaMA/comments/1k20ayu/best_open_source_models/
Basic-Pay-9535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k20ayu
false
null
t3_1k20ayu
/r/LocalLLaMA/comments/1k20ayu/best_open_source_models/
false
false
self
5
null
st-r-awbe-rr-y
0
Help
2025-04-18T08:53:56
https://i.redd.it/o89gxfp35kve1.jpeg
Infinite-Finance-515
i.redd.it
1970-01-01T00:00:00
0
{}
1k20ba4
false
null
t3_1k20ba4
/r/LocalLLaMA/comments/1k20ba4/strawberry/
false
false
https://b.thumbs.redditm…SX84HA2neBxk.jpg
0
{'enabled': True, 'images': [{'id': 'M5VgcNmupiqstWzpuigzCaRumV2yT-a9QDmO-U9BDZw', 'resolutions': [{'height': 149, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=108&crop=smart&auto=webp&s=3b9b5490513105674559688157f1b337b45d9576', 'width': 108}, {'height': 299, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=216&crop=smart&auto=webp&s=1f0b8d4d0f6944a1b23db7a68a65dd5f98ae02c9', 'width': 216}, {'height': 444, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=320&crop=smart&auto=webp&s=9e006006ce3894b01683470b7c544d4fbb52d8af', 'width': 320}, {'height': 888, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=640&crop=smart&auto=webp&s=e1ee035d7a7c5c44ef566b8307ddd221ffb0f42f', 'width': 640}, {'height': 1332, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=960&crop=smart&auto=webp&s=af162a10d61ffa3b83c0cf368d344046e2f3f15b', 'width': 960}, {'height': 1498, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?width=1080&crop=smart&auto=webp&s=89c5e4717bcb7d24b1c8c47dc18610b02caaf4dc', 'width': 1080}], 'source': {'height': 1561, 'url': 'https://preview.redd.it/o89gxfp35kve1.jpeg?auto=webp&s=adc1ba4bcd874fa575e5ad43153a89084e68979f', 'width': 1125}, 'variants': {}}]}
Is there a small tool-calling LLM?
15
So basically I want to build an LLM game engine that resolves missing content via an LLM. For that I need an LLM that complies with tool calling and actually calls tools whenever there's an opportunity. Is there such an LLM that's small enough to not boil my room? Ideally a 7B one; it just needs to follow the instructions it gets from tool calls.
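To make the ask concrete, here is a minimal sketch of the kind of tool-calling loop a game engine like this could run, assuming a local server that exposes an OpenAI-compatible chat endpoint with tool support (llama.cpp server, Ollama, or vLLM can do this). The `spawn_item` tool, the model name, and the port are made up for illustration.

```python
# Minimal tool-calling loop against a local OpenAI-compatible server.
# Everything server- and tool-specific here is an assumption for illustration.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "spawn_item",
        "description": "Create a missing item in the game world.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "rarity": {"type": "string", "enum": ["common", "rare"]},
            },
            "required": ["name"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # any small tool-capable model served locally
    messages=[{"role": "user", "content": "The player opens an empty chest."}],
    tools=tools,
)

# Dispatch whatever the model decided to call.
for call in resp.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(f"engine should run {call.function.name} with {args}")
```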
2025-04-18T10:47:56
https://www.reddit.com/r/LocalLLaMA/comments/1k21yub/is_there_a_small_toolcalling_llm/
ashleigh_dashie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k21yub
false
null
t3_1k21yub
/r/LocalLLaMA/comments/1k21yub/is_there_a_small_toolcalling_llm/
false
false
self
15
null
Good news: 5090s now in stock in my local market. Bad news: cheapest is $3,550
49
Now I wonder if I should have just bought the second-hand 3090s that were on sale for $700.
2025-04-18T11:13:40
https://www.reddit.com/r/LocalLLaMA/comments/1k22e41/good_news_5090s_now_in_stock_in_my_local_market/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k22e41
false
null
t3_1k22e41
/r/LocalLLaMA/comments/1k22e41/good_news_5090s_now_in_stock_in_my_local_market/
false
false
self
49
null
OpenAI naming is so confusing they need to include explanations inside Codex CLI system prompt
17
I was going through the Codex CLI system prompt and found this gem. As a reminder, OpenAI released Codex, an LLM tuned for coding, a couple of years back. Here's the excerpt: “The Codex CLI is open-sourced. Don't confuse yourself with the old Codex language model built by OpenAI many moons ago (this is understandably top of mind for you!). Within this context, Codex refers to the open-source agentic coding interface.”
2025-04-18T11:20:26
https://github.com/openai/codex/blob/main/codex-cli/src/utils/agent/agent-loop.ts
fixtwin
github.com
1970-01-01T00:00:00
0
{}
1k22i44
false
null
t3_1k22i44
/r/LocalLLaMA/comments/1k22i44/openai_naming_is_so_confusing_they_need_to/
false
false
https://a.thumbs.redditm…DxPxun2qneK4.jpg
17
{'enabled': False, 'images': [{'id': 'DHbmFtBH5i-qv266Zeh2R316QgFWX3u4FD4L1BwZtio', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=108&crop=smart&auto=webp&s=8f204065a39624c33e82bc63cb8fffce7b281393', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=216&crop=smart&auto=webp&s=b09ea16ef924fc3edd24be92f194eeba6c2476ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=320&crop=smart&auto=webp&s=68797f4f7815b7e6f7ec1716ab96c6c2da0c6769', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=640&crop=smart&auto=webp&s=dfa0db6f6bef47eb4fc9a05e50dd4119e49a45df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=960&crop=smart&auto=webp&s=e5aa440e535d77e216ba813325979549e9d85de6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?width=1080&crop=smart&auto=webp&s=d460e3769d05f31bfa7047ea732a3b4868da06af', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GzGxq7sjEIOdqa06p0Tb5ISsD5qYfTkIq7FrR23kigg.jpg?auto=webp&s=59be9ec96100875714833fcfa9fad49155b912f3', 'width': 1200}, 'variants': {}}]}
FULL LEAKED Replit Agent System Prompts and Tools
69
(Latest system prompt: 18/04/2025) I managed to get the full official Replit Agent system prompt, including its tools (JSON). Over 400 lines. You can check it out at: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
2025-04-18T11:26:40
https://www.reddit.com/r/LocalLLaMA/comments/1k22lyx/full_leaked_replit_agent_system_prompts_and_tools/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k22lyx
false
null
t3_1k22lyx
/r/LocalLLaMA/comments/1k22lyx/full_leaked_replit_agent_system_prompts_and_tools/
false
false
self
69
null
How to get started with the new Dream 7B diffusion reasoning model
1
[removed]
2025-04-18T11:52:10
https://www.youtube.com/watch?v=xyLuihgD9BA
ai-local
youtube.com
1970-01-01T00:00:00
0
{}
1k231vr
false
{'oembed': {'author_name': "Brian's Linux and AIX Videos", 'author_url': 'https://www.youtube.com/@ixbrian0', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/xyLuihgD9BA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to get started with the Dream 7B diffusion reasoning model"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/xyLuihgD9BA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How to get started with the Dream 7B diffusion reasoning model', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1k231vr
/r/LocalLLaMA/comments/1k231vr/how_to_get_started_with_the_new_dream_7b/
false
false
https://b.thumbs.redditm…wxvcXYlL5Lag.jpg
1
{'enabled': False, 'images': [{'id': 'pA0-5Cn1x5Xd57rNrpcD5PCXpaIBGse8rwqFaJgyd6k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VmG8y_6yIHgSOZvKQdnqMhlkmXZtG5GcwOTyIXxA4rc.jpg?width=108&crop=smart&auto=webp&s=ce06c3a733e6f6a39ca40277bc563e05663f1569', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/VmG8y_6yIHgSOZvKQdnqMhlkmXZtG5GcwOTyIXxA4rc.jpg?width=216&crop=smart&auto=webp&s=e17d850133eab0488379f8f3e8b96c8ae08c585c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/VmG8y_6yIHgSOZvKQdnqMhlkmXZtG5GcwOTyIXxA4rc.jpg?width=320&crop=smart&auto=webp&s=1623aacf898adb02968e5de20711f3755faf4bb2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/VmG8y_6yIHgSOZvKQdnqMhlkmXZtG5GcwOTyIXxA4rc.jpg?auto=webp&s=9c0f92332d0db33c58693d63dee8242874df0429', 'width': 480}, 'variants': {}}]}
Does anyone else feel guilty using big models for tiny tasks?
18
I don't know if anyone else feels this way, but sometimes when I use a huge model for something super simple, I feel bad, like I'm wasting resources or something. It feels like these LLMs are way too powerful for little tasks, and I shouldn't be wasting their "time" (even though I know it's not alive lol) or the computational resources. Because of that, I set up Gemma 3 locally and now I use it for all my tiny tasks. I can't fully explain why I feel like this — it's not really logical — but it's there. Does anyone else feel the same way?
2025-04-18T12:02:06
https://www.reddit.com/r/LocalLLaMA/comments/1k238lw/does_anyone_else_feel_guilty_using_big_models_for/
RightCup5772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k238lw
false
null
t3_1k238lw
/r/LocalLLaMA/comments/1k238lw/does_anyone_else_feel_guilty_using_big_models_for/
false
false
self
18
null
Fuzzy quant scaling for dynamic reasoning steps.
0
Hear me out, and you geniuses may understand. As part of reasoning, it's valuable to step back from the immediate issue and be a little more broad and encompassing. What would be the effect of adding a controlled and intelligently scaled amount of noise to the weights during inference? Maybe just inside specific trigger tags you fudge the math a little to produce a slightly noisy gradient? Could this gentle fuzz lead to better reasoning divergence while maintaining coherence and staying near the topic? It's important to note that I don't mean consistent changes; I mean dynamic and optional fuzzy weights per token, with some type of controls for activation and curve. Do something fancy with the context data to optimize per token, or something. My expectation is that someone smarter than me will know more exactly how the math works. All I know for sure about how the math shakes out is: if you shoot some marbles onto 10B semi-directional pinball bumpers and collect the marbles that escape, there will be areas where lots of marbles stop together, and the decoder layer turns that into numbers that relate to words or groups of words and their probability: [ [306627" cow",0.7673],[100837" chocolate milk", 0.19631]] The prompt controls how and where you shoot the marbles; there are 128k or 32k holes around the perimeter per model, one for each vocabulary token. Just a wee bit of noise to simulate the jostle of the consistent yet unpredictable real pinball experience, and to shake the really certain models up a bit in a way that isn't based around randomly sampling the final outputs. Might be something to gain. Might be nonsense. I can't decide if it's gibberish or if it might help in reasoning and review on some models and tasks. Anyway, cool chat. I'm probably ignorant of a large barrier to implementation, and speed would likely be significantly degraded. I don't have the time or quiet to sink into the code. It's on you guys. Thanks for reading.
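For what it's worth, here is a rough sketch of what "a wee noise on the weights during inference" could look like in PyTorch. It only perturbs the weights once per generation call rather than per token (per-token fuzz would need forward hooks), and the noise scale, the choice of MLP matrices, and the placeholder model are arbitrary assumptions, not a tested recipe.

```python
# Jitter the weights a little for one generation pass, then restore them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def generate_with_fuzz(prompt: str, scale: float = 1e-3) -> str:
    saved = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "mlp" in name and p.dim() == 2:  # only fuzz MLP weight matrices
                saved[name] = p.detach().clone()
                p.add_(torch.randn_like(p) * scale * p.abs().mean())
    try:
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=64, do_sample=False)
        return tok.decode(out[0], skip_special_tokens=True)
    finally:
        # Restore the original weights so the perturbation is temporary.
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in saved:
                    p.copy_(saved[name])

print(generate_with_fuzz("Why is the sky blue?"))
```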
2025-04-18T12:05:59
https://www.reddit.com/r/LocalLLaMA/comments/1k23b3a/fuzzy_quant_scaling_for_dynamic_reasoning_steps/
aseichter2007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k23b3a
false
null
t3_1k23b3a
/r/LocalLLaMA/comments/1k23b3a/fuzzy_quant_scaling_for_dynamic_reasoning_steps/
false
false
self
0
null
RTX 3090 24GB gotten! Initial panic but all good - now some AI practice 🥳
1
[removed]
2025-04-18T12:11:34
https://www.reddit.com/gallery/1k23eto
hsangh
reddit.com
1970-01-01T00:00:00
0
{}
1k23eto
false
null
t3_1k23eto
/r/LocalLLaMA/comments/1k23eto/rtx_3090_24gb_gotten_initial_panic_but_all_good/
false
false
https://b.thumbs.redditm…SOhmO45Cb51Y.jpg
1
null
Do any of the gpt-4.1 or o3/o4 models in the API support search?
0
I'm not sure if I'm doing something wrong, but querying gpt-4.1 and o4-mini through the OpenAI API endpoint doesn't seem to be able to search anything online (the way the gpt-4o-search-preview-2025-03-11 model could). Do these models support search by themselves, or do we need to have them call a tool?
2025-04-18T12:19:35
https://www.reddit.com/r/LocalLLaMA/comments/1k23ka1/do_any_of_the_gpt41_or_o3o4_models_in_the_api/
jaxchang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k23ka1
false
null
t3_1k23ka1
/r/LocalLLaMA/comments/1k23ka1/do_any_of_the_gpt41_or_o3o4_models_in_the_api/
false
false
self
0
null
Awesome Multimodal Search
1
2025-04-18T12:51:35
https://github.com/mixpeek/awesome-multimodal-search/tree/main
Feeling_Mushroom3175
github.com
1970-01-01T00:00:00
0
{}
1k246fl
false
null
t3_1k246fl
/r/LocalLLaMA/comments/1k246fl/awesome_multimodal_search/
false
false
https://b.thumbs.redditm…-454wx4ve9Zg.jpg
1
{'enabled': False, 'images': [{'id': 'Ytl6kOE1ZqMaq3XsQrD1Fc6sPgquuYxRZRKPaos5zIo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=108&crop=smart&auto=webp&s=a40cc21dadcc04f081d74c3d7fde4668482b31b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=216&crop=smart&auto=webp&s=51ba1842588ca38a822a2a07a754fed73bdf973c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=320&crop=smart&auto=webp&s=c686bc90b7d9dd93d4d4344409c5074b91df8207', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=640&crop=smart&auto=webp&s=ed3816a766fa98f43a2068d66d9fd141927936f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=960&crop=smart&auto=webp&s=de6cf5a2223d733b2e71d9eb992ecb3c990a5d84', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?width=1080&crop=smart&auto=webp&s=ff40b5a58f6a13a2403c32f3e95ceecf22d1211b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LOFclA20IL7Fe3sJelaVN97hCHjrTKlGmcvvV42ydoI.jpg?auto=webp&s=42d3f0fb91055b194ed9a3810a7c9cd74cb03ca2', 'width': 1200}, 'variants': {}}]}
Google’s Agent2Agent (A2A) Explained
8
Hey everyone, Just published a new *FREE* blog post on Agent-to-Agent (A2A) – Google's new framework letting AI systems collaborate like human teammates rather than working in isolation. In this post, I explain: - Why specialized AI agents need to talk to each other - How A2A compares to MCP and why they're complementary - The essentials of A2A I've kept it accessible with real-world examples like planning a birthday party. This approach represents a fundamental shift where we'll delegate to teams of AI agents working together rather than juggling specialized tools ourselves. Link to the full blog post: [https://open.substack.com/pub/diamantai/p/googles-agent2agent-a2a-explained?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false](https://open.substack.com/pub/diamantai/p/googles-agent2agent-a2a-explained?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
2025-04-18T12:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1k249wy/googles_agent2agent_a2a_explained/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k249wy
false
null
t3_1k249wy
/r/LocalLLaMA/comments/1k249wy/googles_agent2agent_a2a_explained/
false
false
self
8
{'enabled': False, 'images': [{'id': 'HHQrUWrKmhwThNg5GCEs9lUuU6WlZCoWcmzSuCgWtl0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=108&crop=smart&auto=webp&s=be34a9fbe8beb066b7df53b93ac4ce81b3508bff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=216&crop=smart&auto=webp&s=648a49adf32501e9c4294b391ec9e384c4b27483', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=320&crop=smart&auto=webp&s=c3035804d990ee7a47cb5de2cb2f9837ae205bcc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=640&crop=smart&auto=webp&s=7edcc2ffa0b0ebe465fb68ab5e28677c7337bf0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=960&crop=smart&auto=webp&s=312795e868044a8d85e023af4d6e37117f0fcc1a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?width=1080&crop=smart&auto=webp&s=d15426445d9355366fd001aa6497afbcb7360916', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7QXEOnqrCYHOKK7JNIbEmqU70dUZsY4OiMRh_DIUbTI.jpg?auto=webp&s=02e43352ed6fc6cd3ac312638c73bdb70d1df9d3', 'width': 1200}, 'variants': {}}]}
Is there a tutorial on how to write a CYOA story?
1
[removed]
2025-04-18T12:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1k24alh/is_there_a_tutorial_on_how_to_write_a_cyoa_story/
BlacksmithRadiant322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k24alh
false
null
t3_1k24alh
/r/LocalLLaMA/comments/1k24alh/is_there_a_tutorial_on_how_to_write_a_cyoa_story/
false
false
self
1
null
I know these aren't local but Veo2 is just insane, I hope we get this level of quality locally soon.
0
https://imgur.com/0i40ybQ https://imgur.com/HB4TrMA
2025-04-18T13:10:00
https://www.reddit.com/r/LocalLLaMA/comments/1k24jz3/i_know_these_arent_local_but_veo2_is_just_insane/
Sea_Sympathy_495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k24jz3
false
null
t3_1k24jz3
/r/LocalLLaMA/comments/1k24jz3/i_know_these_arent_local_but_veo2_is_just_insane/
false
false
self
0
{'enabled': False, 'images': [{'id': '-k9Y1ztr1uI0J93nErMFTquhyBDMVWHmDvyyXMWEybw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/wEhelBhsvvAWRHZB_HFVDxJqrHitHhxugr2aKVG595Y.jpg?width=108&crop=smart&auto=webp&s=57e93892a5679b3a9a20003c2cba7a31f6ee46f1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/wEhelBhsvvAWRHZB_HFVDxJqrHitHhxugr2aKVG595Y.jpg?width=216&crop=smart&auto=webp&s=19764701ebc6e4dd189c8c2de288fa5145fdf918', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/wEhelBhsvvAWRHZB_HFVDxJqrHitHhxugr2aKVG595Y.jpg?width=320&crop=smart&auto=webp&s=74a74b82414061d80a91f83ca96adebe2ba38610', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/wEhelBhsvvAWRHZB_HFVDxJqrHitHhxugr2aKVG595Y.jpg?width=640&crop=smart&auto=webp&s=34413dd84410481b2611b07986ca12125ef9e32b', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/wEhelBhsvvAWRHZB_HFVDxJqrHitHhxugr2aKVG595Y.jpg?auto=webp&s=ee0f7ea8b8d8d0a2288735ac632dca9642308d3f', 'width': 854}, 'variants': {}}]}
It’s merge time! - What are your top tips for using MergeKit?
3
I recently posted about the [release of Minueza 2](https://www.reddit.com/r/LocalLLaMA/comments/1jt3vvq/minueza296m_a_foundation_bilingual_textgeneration/), and today I completed the upload of [10 different fine-tunes](https://huggingface.co/collections/Felladrin/minueza-2-96m-67f2ca2b07a3da4558d5c3f6), ready for merging experiments! I’m aware of simple merges, as listed in the [examples of mergekit](https://github.com/arcee-ai/mergekit/tree/main/examples), and complex ones, as described in the [model card of Qwen2.5-14B-Vimarckoso-v3](https://huggingface.co/sometimesanotion/Qwen2.5-14B-Vimarckoso-v3), but I’d love to learn about other techniques that you’ve been using.
2025-04-18T13:12:22
https://www.reddit.com/r/LocalLLaMA/comments/1k24lpi/its_merge_time_what_are_your_top_tips_for_using/
Felladrin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k24lpi
false
null
t3_1k24lpi
/r/LocalLLaMA/comments/1k24lpi/its_merge_time_what_are_your_top_tips_for_using/
false
false
self
3
{'enabled': False, 'images': [{'id': '0CLwgDbHKWpqtgtG55PGvmG-ZU6b1YME02BJV4MksQY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=108&crop=smart&auto=webp&s=df33122d7aaac29c5778d055252c1e4e7c33664a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=216&crop=smart&auto=webp&s=032d943e1d33a055b20725a674925a46e232517a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=320&crop=smart&auto=webp&s=9898e804e91b8d185b9207c2506c02e375dfce8a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=640&crop=smart&auto=webp&s=8f9d4c4ca769f70f6c10ebc40ae88af3ce904ce1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=960&crop=smart&auto=webp&s=954a9ce06458e0baa46a626aa2dc00d97823291b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?width=1080&crop=smart&auto=webp&s=0e49922fc5e9e882a3970ee8286fda69995f08df', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jaXYFGaAf2FAsz7LbBPArTI_Q5pIKGEKWI2XZLpzS0M.jpg?auto=webp&s=e871f9d0d99f8805245ae829f1d40a2d6fd15246', 'width': 1200}, 'variants': {}}]}
Can I run any LLM on my potato laptop?
3
I have an i5 laptop with 8 GB of RAM. Is it possible to run any model on it? If so, which one?
2025-04-18T13:16:28
https://www.reddit.com/r/LocalLLaMA/comments/1k24osy/can_i_run_any_llm_on_my_potato_laptop/
Both-Drama-8561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k24osy
false
null
t3_1k24osy
/r/LocalLLaMA/comments/1k24osy/can_i_run_any_llm_on_my_potato_laptop/
false
false
self
3
null
Judging Embeddings
0
When considering embeddings, it can be a helpful qualitative tool to check the top-k most similar results in a neighborhood of your query samples. This can help you find clear themes and patterns that explain how your model organizes the data. But it's a slow, subjective technique, so I'm thinking about applying VLM-as-a-Judge: prompting AI to identify themes explaining the cluster and scoring it quantitatively. This is zero-shot, without much experimenting with the prompt, on a generic model, but the technique looks promising. I tried this idea on my custom theatrical poster embeddings, made before CLIP was open-sourced. Can Judging Embeddings help make your AI experiments more explainable?
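A minimal sketch of the two steps described above: pull the top-k cosine neighbors of a query from a precomputed embedding matrix, then format them into a judge prompt that asks for a theme and a coherence score. The prompt wording and variable names are illustrative assumptions; the judge call itself is left out.

```python
# Assumes you already have an (N, d) embedding matrix plus the caption/title
# that goes with each row.
import numpy as np

def top_k_neighbors(embeddings: np.ndarray, query_idx: int, k: int = 5):
    # Cosine similarity of one row against all rows.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    order = np.argsort(-sims)
    return [(int(i), float(sims[i])) for i in order[1 : k + 1]]  # skip the query itself

def judge_prompt(query_title: str, neighbor_titles: list[str]) -> str:
    # This text would be sent to a (V)LM judge; the wording is just an example.
    items = "\n".join(f"- {t}" for t in neighbor_titles)
    return (
        f"Query: {query_title}\nNearest neighbors:\n{items}\n"
        "Name the common theme in one sentence, then rate how coherent this "
        "neighborhood is from 1 (random) to 10 (clearly one theme)."
    )
```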
2025-04-18T13:16:47
https://www.reddit.com/gallery/1k24p1z
remyxai
reddit.com
1970-01-01T00:00:00
0
{}
1k24p1z
false
null
t3_1k24p1z
/r/LocalLLaMA/comments/1k24p1z/judging_embeddings/
false
false
https://b.thumbs.redditm…_nX0_aOtXMzk.jpg
0
null
Gemma 3 QAT launch with MLX, llama.cpp, Ollama, LM Studio, and Hugging Face
204
Hi! Some weeks ago we released GGUFs corresponding to the QAT checkpoints of Gemma 3. Thanks to QAT, the model is able to preserve quality similar to `bfloat16` while significantly reducing the memory requirements to load the model. That is, QAT is an additional fine-tuning that makes the model more robust to quantization. As we only released the GGUFs, we got feedback that it would be great to have the unquantized QAT-based checkpoints to allow people to quantize for their own tools. So...we did it! Today we're releasing the unquantized QAT-based checkpoints. The models preserve quality better than naive quantization. **We also collaborated with Prince (from MLX), llama.cpp, Ollama, LM Studio, and Hugging Face to make sure you can use the models in all your favorite tools!** * Blog post: [https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/](https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/) * Unquantized checkpoints: [https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b](https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b) * Ollama: [https://ollama.com/library/gemma3](https://ollama.com/library/gemma3) (try ollama run gemma3:12b-it-qat) * LM Studio: [https://lmstudio.ai/model/gemma-3-12b-it-qat](https://lmstudio.ai/model/gemma-3-12b-it-qat) * MLX: [https://huggingface.co/collections/mlx-community/gemma-3-qat-68002674cd5afc6f9022a0ae](https://huggingface.co/collections/mlx-community/gemma-3-qat-68002674cd5afc6f9022a0ae) * llama.cpp: [https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b](https://huggingface.co/collections/google/gemma-3-qat-67ee61ccacbf2be4195c265b) Enjoy!
2025-04-18T13:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1k250fu/gemma_3_qat_launch_with_mlx_llamacpp_ollama_lm/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k250fu
false
null
t3_1k250fu
/r/LocalLLaMA/comments/1k250fu/gemma_3_qat_launch_with_mlx_llamacpp_ollama_lm/
false
false
self
204
{'enabled': False, 'images': [{'id': 'UPh_4CgafUqTh9ZB3bC0-0Msh-CF5QgkiP-Ex1y8M_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=108&crop=smart&auto=webp&s=bf80d9b78a582598ddaf46ebb198ba14da0dfee1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=216&crop=smart&auto=webp&s=1ad09b95d0279438bd66d1d418f3f9e0b207e8d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=320&crop=smart&auto=webp&s=75848e136c8a8aa2ea0df8ba00019ebccfebb3fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=640&crop=smart&auto=webp&s=ed6a861b423ef5ef481e863b5c6947b3cef14c0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=960&crop=smart&auto=webp&s=7486c5a54c0c3728faa8358805c8f52cd7e039fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=1080&crop=smart&auto=webp&s=c783ae4f78dd78266208a25dc198c7d56dcb9de7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?auto=webp&s=b9fa62cfd071dc2a391de1c697f0bfbb56d04afa', 'width': 1200}, 'variants': {}}]}
New QAT-optimized int4 Gemma 3 models by Google slash VRAM needs (54GB -> 14.1GB) while maintaining quality.
352
2025-04-18T13:32:01
https://developers.googleblog.com/en/gemma-3-quantized-aware-trained-state-of-the-art-ai-to-consumer-gpus/?linkId=14034718
Sea_Sympathy_495
developers.googleblog.com
1970-01-01T00:00:00
0
{}
1k250r6
false
null
t3_1k250r6
/r/LocalLLaMA/comments/1k250r6/new_qatoptimized_int4_gemma_3_models_by_google/
false
false
https://b.thumbs.redditm…OS3OttpBDorM.jpg
352
{'enabled': False, 'images': [{'id': 'UPh_4CgafUqTh9ZB3bC0-0Msh-CF5QgkiP-Ex1y8M_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=108&crop=smart&auto=webp&s=bf80d9b78a582598ddaf46ebb198ba14da0dfee1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=216&crop=smart&auto=webp&s=1ad09b95d0279438bd66d1d418f3f9e0b207e8d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=320&crop=smart&auto=webp&s=75848e136c8a8aa2ea0df8ba00019ebccfebb3fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=640&crop=smart&auto=webp&s=ed6a861b423ef5ef481e863b5c6947b3cef14c0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=960&crop=smart&auto=webp&s=7486c5a54c0c3728faa8358805c8f52cd7e039fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?width=1080&crop=smart&auto=webp&s=c783ae4f78dd78266208a25dc198c7d56dcb9de7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5lq32BTIzHqmPYcHvNrCp8JMhag9gsSSkR3cQgoYZBU.jpg?auto=webp&s=b9fa62cfd071dc2a391de1c697f0bfbb56d04afa', 'width': 1200}, 'variants': {}}]}
Google's new Gemma 3 QAT-optimized int4 models slash VRAM needs (54GB -> 14.1GB) while maintaining quality
1
[removed]
2025-04-18T13:37:23
https://i.redd.it/vx4tvzerilve1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1k254t2
false
null
t3_1k254t2
/r/LocalLLaMA/comments/1k254t2/google_new_gemma_3_qat_optimized_int4_models/
false
false
https://a.thumbs.redditm…SkdJOlmNQsQ8.jpg
1
{'enabled': True, 'images': [{'id': 'dZkxbbLwH-IffOEaILftM1VFyPW3z_BszqFCxXS7Tnc', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=108&crop=smart&auto=webp&s=9d6c8e7230d997d524d80bb9ed325e8e17271ff2', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=216&crop=smart&auto=webp&s=9ee381be40748a1dfca9819277d599b56e7a04a4', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=320&crop=smart&auto=webp&s=de9d852234e343655c5c2702129d32b4bd7e0f2f', 'width': 320}, {'height': 446, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=640&crop=smart&auto=webp&s=531f96adaf06b8da3bbfec267400d57b028e62c5', 'width': 640}, {'height': 670, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=960&crop=smart&auto=webp&s=48e3ac5c26b8d5258662e69bdc014a2ad4a1bfc2', 'width': 960}, {'height': 754, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?width=1080&crop=smart&auto=webp&s=7417140a93b5ecc9deed75720f929540097d8d41', 'width': 1080}], 'source': {'height': 1410, 'url': 'https://preview.redd.it/vx4tvzerilve1.jpeg?auto=webp&s=83321526bc952ad2e0c19f1790ff995cf3852fe3', 'width': 2019}, 'variants': {}}]}
Google QAT-optimized int4 Gemma 3 slashes VRAM needs (54GB -> 14.1GB) while maintaining quality - llama.cpp, LM Studio, MLX, Ollama
701
2025-04-18T13:41:47
https://i.redd.it/23ut7jd3klve1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1k25876
false
null
t3_1k25876
/r/LocalLLaMA/comments/1k25876/google_qat_optimized_int4_gemma_3_slash_vram/
false
false
https://b.thumbs.redditm…n-3R3ViOqnjw.jpg
701
{'enabled': True, 'images': [{'id': 'WpG3vdmaKqlrkdv92vZkx1C4XsRcB60nvCuzf1WOIVE', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=108&crop=smart&auto=webp&s=d24a72cfeba57d1329030822c53b165171d81cc7', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=216&crop=smart&auto=webp&s=f4249818f6ce5f595fc6fd8bf7119aa05e1249ea', 'width': 216}, {'height': 223, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=320&crop=smart&auto=webp&s=cbdff285fa824e92fc9f36614da8efdb2852072f', 'width': 320}, {'height': 446, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=640&crop=smart&auto=webp&s=f940165ab5ba660103d9f5f61872b1dc70698cbb', 'width': 640}, {'height': 670, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=960&crop=smart&auto=webp&s=f0bd9f243f94109afab58822dcdd571b442e61ef', 'width': 960}, {'height': 754, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?width=1080&crop=smart&auto=webp&s=51670d55165b6bdea049b26ad1e0e9d1cc0c630b', 'width': 1080}], 'source': {'height': 1410, 'url': 'https://preview.redd.it/23ut7jd3klve1.jpeg?auto=webp&s=e3e2f50b8702624aa44bca7507d152bb8230ab03', 'width': 2019}, 'variants': {}}]}
How to Improve Search Accuracy in a Retrieval System?
5
Hey everyone, I’m working on a small RAG setup that lets users search vehicle‑event image captions (e.g., “driver wearing red”). I’m using Milvus’s hybrid search with BAAI/bge‑m3 to generate both dense and sparse embeddings, but I keep running into accuracy issues. For example, it often returns captions about “red vehicle” where the driver is wearing a completely different color—even with very high scores. I also tried adding a reranker (BAAI/bge‑reranker‑v2‑m3), but noticed no improvement. What I need help with: * How can I get more precise results for my use-case? * How do you evaluate search accuracy in this context? Is there an existing framework or set of metrics I can use? I’d really appreciate any advice or examples. Thanks!
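On the evaluation question, one common approach is to hand-label a small set of queries with the caption IDs that should match and then score the retrieval with recall@k and MRR. The sketch below assumes a `search(query, limit)` function standing in for the Milvus hybrid-search call, and the event IDs are made up.

```python
# Score retrieval quality against a small hand-labeled query set.
def search(query: str, limit: int = 20) -> list[str]:
    # Stand-in for your Milvus hybrid-search call; returns ranked caption IDs.
    return ["evt_345", "evt_007", "evt_999"][:limit]

def recall_at_k(relevant_ids: set, retrieved_ids: list, k: int = 10) -> float:
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in relevant_ids)
    return hits / max(len(relevant_ids), 1)

def mrr(relevant_ids: set, retrieved_ids: list) -> float:
    for rank, doc_id in enumerate(retrieved_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

labeled = {
    "driver wearing red": {"evt_102", "evt_345"},  # hand-labeled ground truth
    "red vehicle":        {"evt_007"},
}

scores = []
for query, truth in labeled.items():
    retrieved = search(query, limit=20)
    scores.append((recall_at_k(truth, retrieved), mrr(truth, retrieved)))

print("mean recall@10:", sum(r for r, _ in scores) / len(scores))
print("mean MRR:      ", sum(m for _, m in scores) / len(scores))
```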
2025-04-18T13:45:30
https://www.reddit.com/r/LocalLLaMA/comments/1k25b0v/how_to_improve_search_accuracy_in_a_retrieval/
Traditional_Tap1708
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k25b0v
false
null
t3_1k25b0v
/r/LocalLLaMA/comments/1k25b0v/how_to_improve_search_accuracy_in_a_retrieval/
false
false
self
5
null
Privacy of local, Power of cloud?
1
[removed]
2025-04-18T13:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1k25mdw/privacy_of_local_power_of_cloud/
judethedude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k25mdw
false
null
t3_1k25mdw
/r/LocalLLaMA/comments/1k25mdw/privacy_of_local_power_of_cloud/
false
false
self
1
null
GPT 4.1 is a game changer
0
I've been working on a few multilingual text forecasting projects for a while now. I have been a staunch user of Llama 3.1 8B just based on how well it does after fine-tuning on my (pretty difficult) forecasting benchmarks. My ROC-AUCs have hovered close to 0.8 for the best models. Llama 3.1 8B performed comparably to GPT-4o *and* GPT-4o-mini, so I had written off my particular use case as too difficult for bigger models. I fine-tuned GPT 4.1 earlier today and achieved an ROC-AUC of 0.94. This is a game changer; it essentially "solves" my particular class of problems. I have to get rid of an entire Llama-based reinforcement learning pipeline I literally just built over the past month. This is just a PSA if any of you are considering whether it's worth fine-tuning GPT 4.1. It cost me a few $100s for both fine-tuning and inference. My H100 GPU cost $25,000 and I'm now regretting the purchase. I didn't believe in model scaling laws, now I do.
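For anyone unfamiliar with the metric, an ROC-AUC like the ones quoted above is typically computed from held-out labels and the model's predicted probability of the positive class, e.g. with scikit-learn. The arrays below are dummy placeholders, not the author's data.

```python
# Score a binary forecaster on a held-out set with ROC-AUC.
from sklearn.metrics import roc_auc_score

y_true = [0, 1, 1, 0, 1, 0, 0, 1]                   # held-out outcomes
y_prob = [0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.3, 0.6]   # model-predicted P(positive)

print(f"ROC-AUC: {roc_auc_score(y_true, y_prob):.3f}")
```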
2025-04-18T14:07:35
https://www.reddit.com/r/LocalLLaMA/comments/1k25suh/gpt_41_is_a_game_changer/
entsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k25suh
false
null
t3_1k25suh
/r/LocalLLaMA/comments/1k25suh/gpt_41_is_a_game_changer/
false
false
self
0
null
Multi-Node Cluster Deployment of Qwen Series Models with SGLang
4
### Objective While `Ollama` offers convenience, high concurrency is sometimes more crucial. This article demonstrates how to deploy `SGLang` on two computers (dual nodes) to run the `Qwen2.5-7B-Instruct` model, maximizing local resource utilization. Additional nodes can be added if available. ### Hardware Requirements - Node 0: IP `192.168.0.12`, 1 NVIDIA GPU - Node 1: IP `192.168.0.13`, 1 NVIDIA GPU - Total: 2 GPUs ### Model Specifications `Qwen2.5-7B-Instruct` requires approximately 14GB VRAM in FP16. With `--tp 2`, each GPU needs about 7GB (weights) + 2-3GB (KV cache). ### Network Configuration Nodes communicate via Ethernet (TCP), using the eno1 network interface. > Note: Check your actual interface using ip addr command ### Precision Using FP16 precision to maintain maximum accuracy, resulting in higher VRAM usage that requires optimization. ## 2. Prerequisites Ensure the following requirements are met before installation and deployment: ### Operating System - Recommended: Ubuntu 20.04/22.04 or other Linux distributions (Windows not recommended, requires WSL2) - Consistent environments across nodes preferred, though OS can differ if Python environments match ### Network Connectivity - Node 0 (192.168.0.12) and Node 1 (192.168.0.13) must be able to ping each other: ```shell ping 192.168.0.12 # from Node 1 ping 192.168.0.13 # from Node 0 ``` - Ports 50000 (distributed initialization) and 30000 (HTTP server) must not be blocked by firewall: ```bash sudo ufw allow 50000 sudo ufw allow 30000 ``` - Verify network interface eno1: ```bash # Adjust interface name as needed ip addr show eno1 ``` If `eno1` doesn't exist, use your actual interface (e.g., `eth0` or `enp0s3`). ### GPU Drivers and CUDA - Install NVIDIA drivers (version ≥ 470) and `CUDA Toolkit` (12.x recommended): ```bash nvidia-smi # verify driver and CUDA version ``` Output should show NVIDIA and CUDA versions (e.g., 12.4). If not installed, refer to NVIDIA's official website for installation. ### Python Environment - Python 3.9+ (3.10 recommended) - Consistent Python versions across nodes: ```bash python3 --version ``` ### Disk Space - `Qwen2.5-7B-Instruct` model requires approximately 15GB disk space - Ensure sufficient space in `/opt/models/Qwen/Qwen2.5-7B-Instruct` path ## 3. Installing SGLang Install SGLang and dependencies on both nodes. Execute the following steps on each computer. ### 3.1 Create Virtual Environment (conda) ```bash conda create -n sglang_env python=3.10 conda activate sglang_env ``` ### 3.2 Install SGLang > Note: Installation will automatically include GPU-related dependencies like `torch`, `transformers`, `flashinfer` ```bash pip install --upgrade pip pip install uv uv pip install "sglang[all]>=0.4.5" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python ``` Verify installation: ```bash python -m sglang.launch_server --help ``` Should display SGLang's command-line parameter help information. ### 3.3 Download Qwen2.5-7B-Instruct Model > Use `huggingface` internationally, `modelscope` within China Download the model to the same path on both nodes (e.g., `/opt/models/Qwen/Qwen2.5-7B-Instruct`): ```bash pip install modelscope modelscope download Qwen/Qwen2.5-7B-Instruct --local-dir /opt/models/Qwen/Qwen2.5-7B-Instruct ``` Alternatively, manually download from `Hugging Face` or `modelscope` and extract to the specified path. Ensure model files are identical across nodes. ## 4. 
Configuring Dual-Node Deployment Use tensor parallelism (--tp 2) to distribute the model across 2 GPUs (one per node). Below are the detailed deployment steps and commands. ### 4.1 Deployment Commands - Node 0 (IP: 192.168.0.12): ```bash NCCL_IB_DISABLE=1 NCCL_P2P_DISABLE=1 GLOO_SOCKET_IFNAME=eno1 NCCL_SOCKET_IFNAME=eno1 python3 -m sglang.launch_server \ --model-path /opt/models/Qwen/Qwen2.5-7B-Instruct \ --tp 2 \ --nnodes 2 \ --node-rank 0 \ --dist-init-addr 192.168.0.12:50000 \ --disable-cuda-graph \ --host 0.0.0.0 \ --port 30000 \ --mem-fraction-static 0.7 ``` - Node 1 (IP: 192.168.0.13): ```bash NCCL_IB_DISABLE=1 NCCL_P2P_DISABLE=1 GLOO_SOCKET_IFNAME=eno1 NCCL_SOCKET_IFNAME=eno1 python3 -m sglang.launch_server \ --model-path /opt/models/Qwen/Qwen2.5-7B-Instruct \ --tp 2 \ --nnodes 2 \ --node-rank 1 \ --dist-init-addr 192.168.0.12:50000 \ --disable-cuda-graph \ --host 0.0.0.0 \ --port 30000 \ --mem-fraction-static 0.7 ``` > Note: If OOM occurs, adjust the `--mem-fraction-static` parameter from the default 0.9 to 0.7. This change reduces VRAM usage by about 2GB for the current 7B model. > CUDA Graph allocates additional VRAM (typically hundreds of MB) to store computation graphs. If VRAM is near capacity, enabling CUDA Graph may trigger OOM errors. ## Additional Parameters and Information **[Original Article](https://stable-learn.com/en/sglang-muti-nodes-deploy/)**
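Once both nodes are up, a quick smoke test can be run against Node 0's HTTP server. This assumes the SGLang server exposes its OpenAI-compatible chat endpoint on the `--port 30000` used above; the `model` field may need to match the served model path.

```python
# Smoke test for the two-node SGLang deployment described above.
import requests

resp = requests.post(
    "http://192.168.0.12:30000/v1/chat/completions",
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```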
2025-04-18T14:55:00
https://www.reddit.com/r/LocalLLaMA/comments/1k26vvg/multinode_cluster_deployment_of_qwen_series/
suke-wangsr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k26vvg
false
null
t3_1k26vvg
/r/LocalLLaMA/comments/1k26vvg/multinode_cluster_deployment_of_qwen_series/
false
false
self
4
{'enabled': False, 'images': [{'id': 'd3age5M-eenRm3-Om0rVIaZy55wZsf3H8AAXiu3vAlQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=108&crop=smart&auto=webp&s=6e13a75e2b9ba8f5329742d85c2c5ed93bb29bcb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=216&crop=smart&auto=webp&s=35d6575558fcc623dc583297c6706cca51d3fe15', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=320&crop=smart&auto=webp&s=cd65ebb11110853b0426b752dbbcdc8be236a18e', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=640&crop=smart&auto=webp&s=ce263a128e504263eeb00de624513b089421db5f', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=960&crop=smart&auto=webp&s=57980b07449bb9fa5422bdd9b7ede4d3da7e61d3', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?width=1080&crop=smart&auto=webp&s=82206e77361c698c706a27d5f8155a186c1fbd2e', 'width': 1080}], 'source': {'height': 1296, 'url': 'https://external-preview.redd.it/iaRl_OVFS73txV_NNDbFxeKO_1LqH8u8KefHeoel9uM.jpg?auto=webp&s=e4e6979220fc3da29407c5bf045ad471782491a5', 'width': 1728}, 'variants': {}}]}
LM Studio error for large models
1
[removed]
2025-04-18T15:08:15
https://www.reddit.com/r/LocalLLaMA/comments/1k277at/lm_studio_error_for_large_models/
Dazzling-Try-7499
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k277at
false
null
t3_1k277at
/r/LocalLLaMA/comments/1k277at/lm_studio_error_for_large_models/
false
false
self
1
null
I created an interactive tool to visualize *every* attention weight matrix within GPT-2!
252
2025-04-18T15:18:17
https://v.redd.it/dgo9qamv0mve1
tycho_brahes_nose_
v.redd.it
1970-01-01T00:00:00
0
{}
1k27fz2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dgo9qamv0mve1/DASHPlaylist.mpd?a=1747584481%2CN2MzNDE2MDZhYWUyMTBjNDY4MzA3NjE5OGYzN2E2ZTQyNjE5OTFjNjk3Y2QyODk3YjFjYTI3NDgwZGZmY2YyOA%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/dgo9qamv0mve1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/dgo9qamv0mve1/HLSPlaylist.m3u8?a=1747584481%2CMGZhYmEwNzJjM2ViODQwZjdhZWZjNTI0NDU5MDdmMDI1NThmNzcxM2E2NDE5YThlNjRmYmYwZWFhNmIyOTYyNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dgo9qamv0mve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1k27fz2
/r/LocalLLaMA/comments/1k27fz2/i_created_an_interactive_tool_to_visualize_every/
false
false
https://external-preview…12052227b1be560a
252
{'enabled': False, 'images': [{'id': 'YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=108&crop=smart&format=pjpg&auto=webp&s=199b275a949f0e92d669987a13fc4cc556875033', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=216&crop=smart&format=pjpg&auto=webp&s=7543a7f3aa1191bea128907544d635af9aa4fe2c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=320&crop=smart&format=pjpg&auto=webp&s=fa7571ee45adcf9c8c45e13a141f103d447abdaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=640&crop=smart&format=pjpg&auto=webp&s=1d4f107bd9adcda43534f52889dcb4ff20c46f12', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=960&crop=smart&format=pjpg&auto=webp&s=50d511f4a4b4076c4112f8767f9dc27a92f4fecf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=685ccb5b4d9b9fe080ac4b2f2a46408eaecbb446', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YW45M2FibXYwbXZlMWaepLM_4Oin4KjR_zAxiUwp5NOaLzCHkxa3urw0ZqL6.png?format=pjpg&auto=webp&s=1cb78bfc28d5c817b9e2b7ab5240b1dea75ef93a', 'width': 1920}, 'variants': {}}]}
What is your cheapest LocalLLAMA portable setup?
1
[removed]
2025-04-18T15:22:25
https://www.reddit.com/r/LocalLLaMA/comments/1k27jng/what_is_your_cheapest_localllama_portable_setup/
0xfleventy5
self.LocalLLaMA
2025-04-18T15:31:27
0
{}
1k27jng
false
null
t3_1k27jng
/r/LocalLLaMA/comments/1k27jng/what_is_your_cheapest_localllama_portable_setup/
false
false
self
1
null
Playing DOOM II and 19 other DOS/GB games with LLMs as a new benchmark
1
[removed]
2025-04-18T15:49:40
https://v.redd.it/boqzmm4y6mve1
Top_Courage_844
v.redd.it
1970-01-01T00:00:00
0
{}
1k286vj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/boqzmm4y6mve1/DASHPlaylist.mpd?a=1747584906%2COWVlOWI0YzFiMGY0ZDBiOWIyYWFjZjlkODBmNzQxYWY0ZmY0ODQwMGQ1MTFmNGYxNWVkMzkxMTEzZmU0NTM4Yw%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/boqzmm4y6mve1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/boqzmm4y6mve1/HLSPlaylist.m3u8?a=1747584906%2CMWFjYjg0NGExMWZhNWQ1ZmVkNDdlMWYyYmUyMjdlZmY5OTliY2VlODNiYzdmZDI0ZjEzODQwZjdjMjQ2OWFkNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/boqzmm4y6mve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1062}}
t3_1k286vj
/r/LocalLLaMA/comments/1k286vj/playing_doom_ii_and_19_other_dosgb_games_with/
false
false
https://external-preview…9e8c1131d183e1f1
1
{'enabled': False, 'images': [{'id': 'cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=108&crop=smart&format=pjpg&auto=webp&s=992b0e3d1d24a2ac29a7f010d88308223ee0b7c3', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=216&crop=smart&format=pjpg&auto=webp&s=f381a1c2a95ecf95b3799b74108cfaddabe1ebda', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=320&crop=smart&format=pjpg&auto=webp&s=30c189a612344751dfab6b55a2389587e7637bde', 'width': 320}, {'height': 433, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=640&crop=smart&format=pjpg&auto=webp&s=75aa378ebf2b542261ffb3e3af9b3433ac20a16c', 'width': 640}, {'height': 650, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=960&crop=smart&format=pjpg&auto=webp&s=ce54c144a0457ae030e1605ae9404eb80713bcfc', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dcbca8458681e5f20316d90123f9d2505d8fc309', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/cjNvZnVrNHk2bXZlMY3P1Exc5u-yEn5QEnlfLIhdgOY4HLTGRGU3oRQL_0lq.png?format=pjpg&auto=webp&s=a2204031b79e11429d0be2b504689ca7f825b698', 'width': 1180}, 'variants': {}}]}
RUN - Compare ChatGPT, DeepSeek, Gemini & 60+ Models For a $1
1
[removed]
2025-04-18T15:56:08
https://www.reddit.com/r/LocalLLaMA/comments/1k28cia/run_compare_chatgpt_deepseek_gemini_60_models_for/
Buffalo_Emotional
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28cia
false
null
t3_1k28cia
/r/LocalLLaMA/comments/1k28cia/run_compare_chatgpt_deepseek_gemini_60_models_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jYq6kalyf0JnrtHalqazQvcmSFvbdfsZaNQ6AqAJCQ8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=108&crop=smart&auto=webp&s=7e4a7b36b4f75a17af8d3d9c90e5d32b820a8aa2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=216&crop=smart&auto=webp&s=ab6f33df681c28fe71b4f4d46ef6469ef28ebfb5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=320&crop=smart&auto=webp&s=b60a4d25e724d6b16e95d3626f117ae8398bfdd9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=640&crop=smart&auto=webp&s=f5159d4791b7abeb471bd9da11f4e4d8a88a6fca', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=960&crop=smart&auto=webp&s=0eee6e1d5c15ba21c1fb1a4f7e6b9bca9c89d9d7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?width=1080&crop=smart&auto=webp&s=228c39c58cb368426360267470acbf81451f084b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/VdPdaXUcD8Q0zmiJPRdkTD9L00ECZeqAdsPMWBdsUlw.jpg?auto=webp&s=72296686a6191277739b17e3f180eae0ea98d853', 'width': 1200}, 'variants': {}}]}
I tried fine-tuning Qwen2.5 to generate git commit messages
21
Hi, I recently tried fine-tuning Qwen2.5-Coder-3B-Instruct to generate better commit messages. The main goal is to let it understand the idea behind code changes instead of simply repeating them. Qwen2.5-Coder-3B-Instruct is a sweet model that is capable at coding tasks and lightweight to run. I then fine-tuned it on the dataset [Maxscha/commitbench](https://huggingface.co/datasets/Maxscha/commitbench).

I think the results are honestly not bad. If the code changes focus on a main goal, the model can guess it pretty well. I released it as a Python package, and it is available now. You may check the fine-tuning script to see the training details as well. Hope you find them useful.

You can use it by first installing `pip install git-gen-utils` and running `git-gen`.

🔗Source: [https://github.com/CyrusCKF/git-gen](https://github.com/CyrusCKF/git-gen)

🤖Script: [https://github.com/CyrusCKF/git-gen/blob/main/finetune/finetune.ipynb](https://github.com/CyrusCKF/git-gen/blob/main/finetune/finetune.ipynb)

🤗Model (on HuggingFace): [https://huggingface.co/CyrusCheungkf/git-commit-3B](https://huggingface.co/CyrusCheungkf/git-commit-3B)
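In case anyone wants to call the model directly instead of going through the `git-gen` CLI, here is a rough sketch using plain transformers. The prompt wording below is my guess, not necessarily what the CLI uses — check finetune.ipynb for the exact format used in training.

```python
# Rough sketch of calling the fine-tuned model directly with transformers,
# bypassing the git-gen CLI. The prompt wording here is an assumption;
# see finetune.ipynb for the format actually used during training.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CyrusCheungkf/git-commit-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

diff = open("changes.diff").read()  # e.g. the output of `git diff --staged`
messages = [
    {"role": "user",
     "content": f"Write a concise git commit message for this diff:\n{diff}"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```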
2025-04-18T15:56:26
https://www.reddit.com/r/LocalLLaMA/comments/1k28cqz/i_tried_finetuning_qwen25_to_generate_git_commit/
m19990328
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28cqz
false
null
t3_1k28cqz
/r/LocalLLaMA/comments/1k28cqz/i_tried_finetuning_qwen25_to_generate_git_commit/
false
false
self
21
{'enabled': False, 'images': [{'id': 'xy6I31nAkkoL8Ct3IvkIG0o5o6lAGtSqUep7-mqoHjI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=108&crop=smart&auto=webp&s=21a01be714d068616d5420f0c3260236e096005b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=216&crop=smart&auto=webp&s=07f4432d9ac7b143a81621fd7d406b5c6c6ba5a5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=320&crop=smart&auto=webp&s=122c7bd960bbf3e4d5623e70a53ecf84813a8600', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=640&crop=smart&auto=webp&s=613ff7deda28e7cf2a9f51c5d21978d338f7cdfe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=960&crop=smart&auto=webp&s=355dc331548f103574b20bdd52bbc96cde725054', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?width=1080&crop=smart&auto=webp&s=40cca150b01e580fb9bfc0ec6859dc345ed7cfa5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Slm4deth7Q2I29iTzhg7J_ZJ8adLRcnJfm_APLi_q2o.jpg?auto=webp&s=1f8e7207784a222a3af9406ded1d95cd89633695', 'width': 1200}, 'variants': {}}]}
Playing DOOM II and 19 other DOS/GB games with LLMs as a new benchmark
898
From AK (@akhaliq): "We introduce a research preview of VideoGameBench, a benchmark which challenges vision-language models to complete, in real-time, a suite of 20 different popular video games from both hand-held consoles and PC. GPT-4o, Claude Sonnet 3.7, Gemini 2.5 Pro, and Gemini 2.0 Flash playing Doom II (default difficulty) on VideoGameBench-Lite with the same input prompt! Models achieve varying levels of success but none are able to pass even the first level."

project page: [https://vgbench.com](https://vgbench.com)

try on other games: [https://github.com/alexzhang13/VideoGameBench](https://github.com/alexzhang13/VideoGameBench)
2025-04-18T15:59:16
https://v.redd.it/u1i2op2o8mve1
ZhalexDev
v.redd.it
1970-01-01T00:00:00
0
{}
1k28f3f
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/u1i2op2o8mve1/DASHPlaylist.mpd?a=1747585072%2CMTc0NmNkMGQ5MTVmZGRjZTMxZmE4ZjEzMzg4M2MzM2EwZDYyYzE2Zjc2YTUzMzlkNTY2NDI0ZWRjZGEyMmE2MA%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/u1i2op2o8mve1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/u1i2op2o8mve1/HLSPlaylist.m3u8?a=1747585072%2COTE2MjJmZGNlYjhmMzM4ODlkZmZjODg3MmI5NGIwOTRiYjlkZGQwODRmZmIxM2M3MDY0NTk2OGNmN2MyNjQxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/u1i2op2o8mve1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1062}}
t3_1k28f3f
/r/LocalLLaMA/comments/1k28f3f/playing_doom_ii_and_19_other_dosgb_games_with/
false
false
https://external-preview…a8039d7d8c1b4e99
898
{'enabled': False, 'images': [{'id': 'd3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=108&crop=smart&format=pjpg&auto=webp&s=d4144cefefe046ae689420f48d3bd13856c3022c', 'width': 108}, {'height': 146, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=216&crop=smart&format=pjpg&auto=webp&s=a884ee00c9797aaac422a1a29440348e45ee55c0', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=320&crop=smart&format=pjpg&auto=webp&s=3473ca6cc67944bea65dcf559c0c53185d9b00a5', 'width': 320}, {'height': 433, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=640&crop=smart&format=pjpg&auto=webp&s=bbe06342a2a97732985538d5262e84484ed17cd9', 'width': 640}, {'height': 650, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=960&crop=smart&format=pjpg&auto=webp&s=400badfb931ad0368c69756a50b0808d1f167eda', 'width': 960}, {'height': 732, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?width=1080&crop=smart&format=pjpg&auto=webp&s=65482d4c29c833e13aa98fa4d3efabbd274c3c8d', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/d3J6N2xwMm84bXZlMeIZf5sR-oXFPwhpDTHMtN-Je-w0GMxJeu96UcIYpm6F.png?format=pjpg&auto=webp&s=650cf8210f792538745698e02b8afba1695e867b', 'width': 1180}, 'variants': {}}]}
I wish there were a latex and/or tikz benchmark
1
I don't even have a GPU so there is no chance I could run the tests. But if anyone is interested, I could maybe help ask the questions.
2025-04-18T15:59:55
https://www.reddit.com/r/LocalLLaMA/comments/1k28fnl/i_wish_there_were_a_latex_andor_tikz_benchmark/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28fnl
false
null
t3_1k28fnl
/r/LocalLLaMA/comments/1k28fnl/i_wish_there_were_a_latex_andor_tikz_benchmark/
false
false
self
1
null
Llama 4 Maverick MLX performance on M3 Ultra
29
LM Studio released an MLX update today, so we can run Maverick in MLX format.

Q4 version numbers:

- Prompt size: 12405 tokens
- Prompt eval rate: 332 t/s
- Token gen rate: 47.42 t/s

Right now for me there is a bug where it's not using prompt caching. Promising initial results though.
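If you want to script the same kind of test outside the LM Studio GUI, the mlx-lm Python package should work with the same MLX-format weights. This is just a sketch — the model path is a placeholder for whatever MLX conversion of Maverick you actually have locally.

```python
# Rough sketch with mlx-lm (pip install mlx-lm). The model path below is a
# placeholder; point it at the MLX-converted Maverick weights on your machine.
from mlx_lm import load, generate

model, tokenizer = load("path/to/llama-4-maverick-mlx-4bit")  # placeholder path
prompt = "Explain prompt caching in two sentences."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
print(text)
```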
2025-04-18T16:03:30
https://www.reddit.com/r/LocalLLaMA/comments/1k28j02/llama_4_maverick_mlx_performance_on_m3_ultra/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28j02
false
null
t3_1k28j02
/r/LocalLLaMA/comments/1k28j02/llama_4_maverick_mlx_performance_on_m3_ultra/
false
false
self
29
null
Time to step up the /local reasoning game
327
Latest OAI models tucked away behind intrusive "ID verification"....
2025-04-18T16:17:11
https://i.redd.it/wtibm8c3cmve1.jpeg
vornamemitd
i.redd.it
1970-01-01T00:00:00
0
{}
1k28ulo
false
null
t3_1k28ulo
/r/LocalLLaMA/comments/1k28ulo/time_to_step_up_the_local_reasoning_game/
false
false
https://b.thumbs.redditm…lMRhfUNU9eZg.jpg
327
{'enabled': True, 'images': [{'id': 'G2W4Bxj2_GLR0tp5DlTpjyCug_FvDl74xFGlre16JEA', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=108&crop=smart&auto=webp&s=5feb2724acfab316ff5a490a6031f61c2b3ee110', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=216&crop=smart&auto=webp&s=153f4d64e647ddd0589d29ce571a795d8dc22b9d', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=320&crop=smart&auto=webp&s=e5d0bdad113d36d5c64c0bdb2df8b1e4215646dd', 'width': 320}, {'height': 382, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=640&crop=smart&auto=webp&s=93f80a0bad3e3f79619d29663e49d519eaa7898d', 'width': 640}, {'height': 574, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=960&crop=smart&auto=webp&s=5d07a1b7958059279032a275ed0d7168d9da0faf', 'width': 960}, {'height': 646, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?width=1080&crop=smart&auto=webp&s=b34018c078a4e1b303a6338fe64c6b947fb25090', 'width': 1080}], 'source': {'height': 718, 'url': 'https://preview.redd.it/wtibm8c3cmve1.jpeg?auto=webp&s=be9362f02c4338ab26a4f83156119c3de345395a', 'width': 1200}, 'variants': {}}]}
Docker desktop now supports model running
0
Didn't see a post here yet... Anyone try it yet? Thoughts? https://www.docker.com/blog/introducing-docker-model-runner/
2025-04-18T16:21:19
https://www.reddit.com/r/LocalLLaMA/comments/1k28y2j/docker_desktop_now_supports_model_running/
onemoreburrito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28y2j
false
null
t3_1k28y2j
/r/LocalLLaMA/comments/1k28y2j/docker_desktop_now_supports_model_running/
false
false
self
0
{'enabled': False, 'images': [{'id': '5YurRdfkQeIxTtv_1yZqhwCibgtikSBDEaMPw2UzacA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?width=108&crop=smart&auto=webp&s=0f8b485a55e05dff7858656d6ba29a819f1a1fb1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?width=216&crop=smart&auto=webp&s=3af5ef04002ac63ee54e91eecec3e7530b47e145', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?width=320&crop=smart&auto=webp&s=da4a6dc73322d752eb9ee5abefddc500ca1181a8', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?width=640&crop=smart&auto=webp&s=001f2b11c0e4f2b2b8e0650342013fc0349a649a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?width=960&crop=smart&auto=webp&s=605af0ce456aecc217b7f2c42c158e04f56cce23', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Gz5iv0hC1-oojGEzY0yl1Njb0Jt7-bSeQ06GaO9A-dI.jpg?auto=webp&s=93145b74d2c28e71d64929f9373613f99f937937', 'width': 1024}, 'variants': {}}]}
Title: Evolution of a Build – ThoughtPenAI’s Super Intelligence Pathway
1
[removed]
2025-04-18T16:21:49
https://www.reddit.com/r/LocalLLaMA/comments/1k28yh0/title_evolution_of_a_build_thoughtpenais_super/
ArchitectExecutor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k28yh0
false
null
t3_1k28yh0
/r/LocalLLaMA/comments/1k28yh0/title_evolution_of_a_build_thoughtpenais_super/
false
false
self
1
null
Simple tikz test . chatgpt o1 passes but grok/gemini 2.5 fail
0
Prompt: Give me TikZ code to draw a heptagon with 20 balls in it. The balls should have different colors and the numbers 1 to 20 written inside them. You should give a full LaTeX document.

ChatGPT o1's answer renders as:

https://preview.redd.it/w2u38zkeemve1.png?width=270&format=png&auto=webp&s=d6d784c86455a17d415bc3a37be0fcd471c4608b

What does your favorite local model give?
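For reference, here is one hand-written solution of the kind I'm hoping a model produces — a minimal compilable document; the color choice via xcolor's hsb model is just one way to get 20 distinct fills:

```latex
\documentclass[border=5pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric}
\begin{document}
\begin{tikzpicture}
  % Regular heptagon, large enough to contain the ring of balls
  \node[regular polygon, regular polygon sides=7, minimum size=9cm, draw, thick] at (0,0) {};
  % 20 numbered balls on a ring inside the heptagon, each with its own hue
  \foreach \n in {1,...,20} {
    \pgfmathsetmacro{\hue}{(\n-1)/20}
    \definecolor{ballcol}{hsb}{\hue,0.45,1}
    \node[circle, draw, fill=ballcol, minimum size=6.5mm, inner sep=0pt]
      at ({\n*18}:2.8) {\scriptsize\n};
  }
\end{tikzpicture}
\end{document}
```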
2025-04-18T16:30:10
https://www.reddit.com/r/LocalLLaMA/comments/1k295ir/simple_tikz_test_chatgpt_o1_passes_but_grokgemini/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k295ir
false
null
t3_1k295ir
/r/LocalLLaMA/comments/1k295ir/simple_tikz_test_chatgpt_o1_passes_but_grokgemini/
false
false
https://b.thumbs.redditm…c6d1E9Bjy5YA.jpg
0
null
How to add tokens/sec in Enchanted LLM?
1
[removed]
2025-04-18T16:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1k299ad/how_to_add_tokenssec_in_enchanted_llm/
mykeschn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k299ad
false
null
t3_1k299ad
/r/LocalLLaMA/comments/1k299ad/how_to_add_tokenssec_in_enchanted_llm/
false
false
self
1
null
QAT is slowly becoming mainstream now?
192
Google just released a QAT-optimized Gemma 3 27-billion-parameter model. The quantization-aware training claims to recover close to 97% of the accuracy loss that happens during quantization. Do you think this is slowly becoming the norm? Will non-quantized safetensors slowly become obsolete?
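For anyone curious about the mechanics: the core idea of QAT is to apply fake quantization in the forward pass and a straight-through estimator in the backward pass, so the model learns to live with the rounding error before it is ever exported to int weights. A minimal PyTorch sketch of that idea (not Google's actual recipe):

```python
import torch

def fake_quantize(w, num_bits=4):
    # Quantize then dequantize so the forward pass "sees" the rounding error,
    # while the master weights stay in floating point.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

class FakeQuantSTE(torch.autograd.Function):
    # Straight-through estimator: gradients pass through the rounding
    # as if it were the identity function.
    @staticmethod
    def forward(ctx, w):
        return fake_quantize(w)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

# During QAT you'd wrap each weight with FakeQuantSTE.apply inside the layer's
# forward, keep training as usual, and only convert to real int weights at export.
w = torch.randn(16, 16, requires_grad=True)
loss = FakeQuantSTE.apply(w).sum()
loss.backward()
print(w.grad.shape)  # gradients flow despite the non-differentiable rounding
```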
2025-04-18T16:52:07
https://www.reddit.com/r/LocalLLaMA/comments/1k29oe2/qat_is_slowly_becoming_mainstream_now/
__amberluz__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k29oe2
false
null
t3_1k29oe2
/r/LocalLLaMA/comments/1k29oe2/qat_is_slowly_becoming_mainstream_now/
false
false
self
192
null
Best Local LLM per 3090 card
1
[removed]
2025-04-18T17:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1k2a3tz/best_local_llm_per_3090_card/
PTCAH
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1k2a3tz
false
null
t3_1k2a3tz
/r/LocalLLaMA/comments/1k2a3tz/best_local_llm_per_3090_card/
false
false
self
1
null