title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
MacBook M3, 24GB ram. What's best for LLM engine?
| 15 |
As the title says. I'm in the process of moving from a Windows laptop to a MacBook Air M3 with 24GB RAM. I use it for local development in VS Code and need to connect to a local LLM. I've installed Ollama and it works, but of course it's slower than the 3080 Ti 16GB in my Windows laptop. That's not a real problem, because for my purposes I can leave the laptop running for hours to see the result (that's the main reason for the transition: the Windows laptop would crash after an hour or so and ran as loudly as a steam engine). My question is whether Ollama is a first-class citizen on Apple hardware or there's a much better solution. I don't do anything bleeding edge and use standard models like Llama, Gemma, and DeepSeek. I'm used to Ollama and run it so that all my projects connect to the Ollama server on localhost. I know about LM Studio but haven't used it much, as Ollama was sufficient. So, is Ollama OK, or are there much faster solutions, say 30% faster or more? Or is there a special configuration for Ollama on Apple beyond just installing it?
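Since the client side barely matters here, one low-friction option is to keep projects pointed at an OpenAI-compatible endpoint: Ollama, LM Studio, and llama.cpp's llama-server all expose one, so switching engines is just a base-URL change. A minimal sketch, where the ports and model tag are placeholders for whatever you actually run:

```python
# Minimal sketch: keep project code engine-agnostic by talking to an
# OpenAI-compatible local server. Ports and model tag are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama default; LM Studio is typically :1234/v1
    api_key="not-needed-for-local",        # local servers usually ignore the key
)

resp = client.chat.completions.create(
    model="llama3.1:8b",  # whatever model tag your server has loaded
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```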
| 2025-03-30T12:12:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnb3cl/macbook_m3_24gb_ram_whats_best_for_llm_engine/
|
Familyinalicante
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnb3cl
| false | null |
t3_1jnb3cl
|
/r/LocalLLaMA/comments/1jnb3cl/macbook_m3_24gb_ram_whats_best_for_llm_engine/
| false | false |
self
| 15 | null |
Help, Resend thinking prompt or discard it from chat memory, QwQ
| 2 |
So I built a fullstack chat platform for my company. I could just use Qwen 2.5 32B AWQ and call it a day. But my team wants to implement a thinking model.
The problem? Thinking messages eat up a ton of context window and chat history DB. I'm using Postgres for storage (I can reimplement it in Mongo or Elastic, not a big deal, I made it a pluggable backend).
The real issue is the context window. Should I resend the entire thinking message every time, or just the end result, like any SFT model?
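For what it's worth, the common guidance for QwQ/R1-style models is to feed back only the final answers and drop previous turns' reasoning. A minimal sketch of stripping the <think> block before a message goes into the resendable history (assumes the usual <think></think> tag convention; adjust if your model differs):

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(message: str) -> str:
    """Drop <think>...</think> blocks so only the final answer is stored/resent."""
    return THINK_RE.sub("", message).strip()

# Keep the full message for display/audit if you want, but append only the
# stripped version to the conversation history that goes back into the model.
raw = "<think>Let me work this out step by step...</think>The answer is 42."
history_entry = {"role": "assistant", "content": strip_thinking(raw)}
print(history_entry)  # {'role': 'assistant', 'content': 'The answer is 42.'}
```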
| 2025-03-30T12:18:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnb71o/help_resend_thinking_prompt_or_discard_it_from/
|
Altruistic_Heat_9531
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnb71o
| false | null |
t3_1jnb71o
|
/r/LocalLLaMA/comments/1jnb71o/help_resend_thinking_prompt_or_discard_it_from/
| false | false |
self
| 2 | null |
What models would you recommend for android phones?
| 1 |
[removed]
| 2025-03-30T12:27:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnbbyd/what_models_would_you_recommend_for_android_phones/
|
Present_Plantain_163
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnbbyd
| false | null |
t3_1jnbbyd
|
/r/LocalLLaMA/comments/1jnbbyd/what_models_would_you_recommend_for_android_phones/
| false | false |
self
| 1 | null |
Grok Deep Search (Local)
| 2 |
I was really impressed with how well Grok's deep search works for reading and searching. I was wondering if it's possible to replicate something similar using local models or tools.
Has anyone tried this? Would love to hear your thoughts!
Thanks!
| 2025-03-30T12:29:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnbd4v/grok_deep_search_local/
|
Asleep_Aerie_4591
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnbd4v
| false | null |
t3_1jnbd4v
|
/r/LocalLLaMA/comments/1jnbd4v/grok_deep_search_local/
| false | false |
self
| 2 | null |
I think I found llama 4 - the "cybele" model on lmarena. It's very, very good and revealed its name
| 122 |
Have you had similar experience with this model?
| 2025-03-30T12:36:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnbhdl/i_think_i_found_llama_4_the_cybele_model_on/
|
Salty-Garage7777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnbhdl
| false | null |
t3_1jnbhdl
|
/r/LocalLLaMA/comments/1jnbhdl/i_think_i_found_llama_4_the_cybele_model_on/
| false | false |
self
| 122 | null |
Request - manus ai invite code
| 0 |
Yo guys! Since there is less than a 1% chance of getting approved access, could you please DM me an invite code to Manus AI?
Thanks !
| 2025-03-30T12:39:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnbj5x/request_manus_ai_invite_code/
|
gfy_expert
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnbj5x
| false | null |
t3_1jnbj5x
|
/r/LocalLLaMA/comments/1jnbj5x/request_manus_ai_invite_code/
| false | false |
self
| 0 | null |
How do I fine tune an LLM that mimics Reddit comments and isn't too 'AI-generated'?
| 1 |
[removed]
| 2025-03-30T13:18:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnc80q/how_do_i_fine_tune_an_llm_that_mimics_reddit/
|
xkcd690
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnc80q
| false | null |
t3_1jnc80q
|
/r/LocalLLaMA/comments/1jnc80q/how_do_i_fine_tune_an_llm_that_mimics_reddit/
| false | false |
self
| 1 | null |
It's not much, but it's honest work! 4xRTX 3060 running 70b at 4x4x4x4x
| 186 | 2025-03-30T13:21:39 |
https://www.reddit.com/gallery/1jnc9rd
|
madaerodog
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnc9rd
| false | null |
t3_1jnc9rd
|
/r/LocalLLaMA/comments/1jnc9rd/its_not_much_but_its_honest_work_4xrtx_3060/
| false | false | 186 | null |
||
Any alternatives to the new 4o Multi-Modal Image capabilities?
| 12 |
The new 4o native image capabilities are quite impressive. Are there any open alternatives which allow similar native image input and output?
| 2025-03-30T13:34:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jncig2/any_alternatives_to_the_new_4o_multimodal_image/
|
janusr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jncig2
| false | null |
t3_1jncig2
|
/r/LocalLLaMA/comments/1jncig2/any_alternatives_to_the_new_4o_multimodal_image/
| false | false |
self
| 12 | null |
LLMs over torrent
| 265 |
Hey r/LocalLLaMA,
Just messing around with an idea - serving LLM models over torrent. I've uploaded Qwen2.5-VL-3B-Instruct to a seedbox sitting in a neutral datacenter in the Netherlands (hosted via Feralhosting).
If you wanna try it out, grab the torrent file here and load it up in any torrent client:
http://sbnb.astraeus.feralhosting.com/Qwen2.5-VL-3B-Instruct.torrent
This is just an experiment - no promises about uptime, speed, or anything really. It might work, it might not.
Some random thoughts / open questions:
1. Only models with redistribution-friendly licenses (like Apache-2.0) can be shared this way. Qwen is cool, Mistral too. Stuff from Meta or Google gets more legally fuzzy - might need a lawyer to be sure.
2. If we actually wanted to host a big chunk of available models, we'd need a ton of seedboxes. Huggingface claims they store 45PB of data:
https://huggingface.co/docs/hub/storage-backends
3. Binary deduplication would help save space. Bonus points if we can do OTA-style patch updates to avoid re-downloading full models every time.
4. Why bother? AI's getting more important, and putting everything in one place feels a bit risky long term. Torrents could be a good backup layer or alt-distribution method.
Anyway, curious what people think. If you've got ideas, feedback, or even some storage/bandwidth to spare, feel free to join the fun. Let's see what breaks.
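One thing a torrent-based distribution could layer on top is integrity checking against a published manifest. A minimal sketch, assuming a hypothetical checksums.json mapping relative paths to SHA-256 digests sits next to the model files (the torrent above doesn't actually ship one):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in 1MB chunks so large safetensors shards don't blow up RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_model_dir(model_dir: str, manifest_file: str = "checksums.json") -> bool:
    """Compare every file against a {relative_path: sha256} manifest (hypothetical layout)."""
    root = Path(model_dir)
    manifest = json.loads((root / manifest_file).read_text())
    ok = True
    for rel_path, expected in manifest.items():
        actual = sha256_of(root / rel_path)
        if actual != expected:
            print(f"MISMATCH {rel_path}: {actual} != {expected}")
            ok = False
    return ok

# verify_model_dir("Qwen2.5-VL-3B-Instruct")
```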
| 2025-03-30T14:08:12 |
aospan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnd6px
| false | null |
t3_1jnd6px
|
/r/LocalLLaMA/comments/1jnd6px/llms_over_torrent/
| false | false | 265 |
{'enabled': True, 'images': [{'id': 'lQbhGKpXp7BKz73NyJAsURD7Cq617R2OYlic1HfdHqI', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=108&crop=smart&auto=webp&s=a12825d54635fc18910f031eb9528cfcc26bc80e', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=216&crop=smart&auto=webp&s=87a6c8601094e11b3777b5f7273c2d08254da77a', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=320&crop=smart&auto=webp&s=9333db4b8c633b04fd5438a33030a71910bb8242', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=640&crop=smart&auto=webp&s=ade8fa1e4ff10e2d71461fdb60f942583a4d442f', 'width': 640}, {'height': 715, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=960&crop=smart&auto=webp&s=00d9afb8657d090af62a196502c3e709653e7eee', 'width': 960}, {'height': 804, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?width=1080&crop=smart&auto=webp&s=f4834b4fe7b8b52b57ee16cf0c2841cab6240c48', 'width': 1080}], 'source': {'height': 1228, 'url': 'https://preview.redd.it/8z6t2vvu3ure1.jpeg?auto=webp&s=98cf173bedf8839880ef001b4a8f100d267265af', 'width': 1648}, 'variants': {}}]}
|
||
Interview Assignment for AI Engineers
| 1 |
[removed]
| 2025-03-30T14:22:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndhq5/interview_assignment_for_ai_engineers/
|
Prestigious-Sea1470
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndhq5
| false | null |
t3_1jndhq5
|
/r/LocalLLaMA/comments/1jndhq5/interview_assignment_for_ai_engineers/
| false | false |
self
| 1 | null |
What is your interview Assignment for AI Engineers ?
| 0 |
Hi Folks,
How do you evaluate AI (or ML) engineers these days? What kind of assignments or exercises do you use to assess their skills?
I'm specifically looking for engineers who can build AI agents using LLMs, multi-agent frameworks, LLM observability tools, evals, and so on. Not really looking for folks focused on model training or deployment.
| 2025-03-30T14:27:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndl16/what_is_your_interview_assignment_for_ai_engineers/
|
Sarcinismo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndl16
| false | null |
t3_1jndl16
|
/r/LocalLLaMA/comments/1jndl16/what_is_your_interview_assignment_for_ai_engineers/
| false | false |
self
| 0 | null |
Assessing facial recognition performance of vision-LLMs
| 1 |
I thought it'd be interesting to assess face recognition performance of vision LLMs. Even though it wouldn't be wise to use a vision LLM to do face rec when there are dedicated models, I'll note that:
- it gives us a way to measure the gap between dedicated vision models and LLM approaches, to assess how close we are to 'vision is solved'.
- lots of jurisdictions have regulations around face rec systems, so it is important to know if vision LLMs are becoming capable face rec systems.
I measured performance of multiple models on multiple datasets (AgeDB30, LFW, CFP). As a baseline, I used arcface-resnet-100. Note that as there are 24,000 pairs of images, I did not benchmark the more costly commercial APIs:
https://github.com/yhenon/llm-face-vision/blob/main/assets/recognition_metrics.png?raw=true
Summary:
- Most vision LLMs are very far from even a several-year-old resnet-100.
- All models perform better than random chance.
- The Google models (Gemini, Gemma) perform best.
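For context, verification benchmarks like LFW and AgeDB-30 are typically scored by embedding both images of each pair, taking cosine similarity, and reporting accuracy at the best threshold. A minimal numpy sketch of that scoring step (the embeddings would come from whichever model is under test; this is not the exact evaluation code from the repo):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two (N, D) embedding matrices."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def best_threshold_accuracy(emb1, emb2, same_person):
    """emb1/emb2: (N, D) embeddings for each pair; same_person: (N,) bool labels."""
    scores = cosine_sim(emb1, emb2)
    best = 0.0
    for t in np.unique(scores):
        acc = ((scores >= t) == same_person).mean()
        best = max(best, acc)
    return best

# Toy example with random embeddings: accuracy should hover near chance (~0.5).
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=(1000, 128)), rng.normal(size=(1000, 128))
labels = rng.random(1000) > 0.5
print(best_threshold_accuracy(e1, e2, labels))
```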
| 2025-03-30T14:31:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndo1q/assessing_facial_recognition_performance_of/
|
jordo45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndo1q
| false | null |
t3_1jndo1q
|
/r/LocalLLaMA/comments/1jndo1q/assessing_facial_recognition_performance_of/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'RF8Tfxx9I1YapyIh3Of4dvnCBlUhC2DE7svp5KeBaEE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=108&crop=smart&auto=webp&s=734021417bb4bdc3609dc132c2303cdf15661051', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=216&crop=smart&auto=webp&s=400da0678f8133901ee9fd6fc13ac3867ea867dc', 'width': 216}, {'height': 214, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=320&crop=smart&auto=webp&s=45e20297a6ba22bff8f8423798c94294904c8ec5', 'width': 320}, {'height': 429, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=640&crop=smart&auto=webp&s=417081a7f9eb0355fef7c4cf790a79fada1866db', 'width': 640}, {'height': 644, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=960&crop=smart&auto=webp&s=2396ce15b7d110a81326bc2fdf724780e7d891c9', 'width': 960}, {'height': 724, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?width=1080&crop=smart&auto=webp&s=938d957c187559f8525ece1fd8dfda89de1c906b', 'width': 1080}], 'source': {'height': 3599, 'url': 'https://external-preview.redd.it/C3HlqxGv4Sg46gh2qBd1-vugZXu0vfbaj0EnrfQz1yQ.png?auto=webp&s=6c8584c145421fb21932f217d776254397c05235', 'width': 5363}, 'variants': {}}]}
|
Assessing facial recognition performance of vision-LLMs
| 2 |
I thought it'd be interesting to assess face recognition performance of vision LLMs. Even though it wouldn't be wise to use a vision LLM to do face rec when there are dedicated models, I'll note that:
- it gives us a way to measure the gap between dedicated vision models and LLM approaches, to assess how close we are to 'vision is solved'.
- lots of jurisdictions have regulations around face rec systems, so it is important to know if vision LLMs are becoming capable face rec systems.
I measured performance of multiple models on multiple datasets (AgeDB30, LFW, CFP). As a baseline, I used arcface-resnet-100. Note that as there are 24,000 pairs of images, I did not benchmark the more costly commercial APIs:
**Samples**
https://preview.redd.it/6klg1g7j8ure1.png?width=1275&format=png&auto=webp&s=07591185d4535c67c2bdbe0129bd6dcd97b8079b
**Summary**:
- Most vision LLMs are very far from even a several-year-old resnet-100.
- All models perform better than random chance.
- The Google models (Gemini, Gemma) perform best.
| 2025-03-30T14:34:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndqwo/assessing_facial_recognition_performance_of/
|
jordo45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndqwo
| false | null |
t3_1jndqwo
|
/r/LocalLLaMA/comments/1jndqwo/assessing_facial_recognition_performance_of/
| false | false |
self
| 2 | null |
How do you integrate your LLM machine into the rest of your Homelab? Does it make sense to connect your LLM server to Kubernetes?
| 6 |
I was wondering if it does make sense to connect your LLM server to the rest of your homelab/kubernetes cluster and i am curious about how everyone here does it.
Do you run an hypervisor like proxmox or just an baremetal OS to dedicate the entire performance just to the LLM?
If you've got just one dedicated machine for your LLM server, does the scheduling/orchestration part of Kubernetes actually provide any benefit? There is nowhere for the LLM server to reschedule to, and running directly on the OS seems simpler.
For those of you using Kubernetes, I'm assuming you create taints to keep other apps from scheduling on your LLM node and potentially impacting performance, right?
Would Kubernetes still make sense just for easier integration into the already existing logging and monitoring stack, maybe ingress for the LLM API etc.?
How are you all handling this in your homelab?
| 2025-03-30T14:35:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndrns/how_do_you_integrate_your_llm_machine_into_the/
|
Deep_Area_3790
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndrns
| false | null |
t3_1jndrns
|
/r/LocalLLaMA/comments/1jndrns/how_do_you_integrate_your_llm_machine_into_the/
| false | false |
self
| 6 | null |
We built a website where you can vote on Minecraft structures generated by AI
| 19 | 2025-03-30T14:37:03 |
http://mcbench.ai
|
civilunhinged
|
mcbench.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndsj5
| false | null |
t3_1jndsj5
|
/r/LocalLLaMA/comments/1jndsj5/we_built_a_website_where_you_can_vote_on/
| false | false |
default
| 19 | null |
|
Assessing facial recognition performance of vision-LLMs
| 2 |
I thought it'd be interesting to assess face recognition performance of vision LLMs. Even though it wouldn't be wise to use a vision LLM to do face rec when there are dedicated models, I'll note that:
- it gives us a way to measure the gap between dedicated vision models and LLM approaches, to assess how close we are to 'vision is solved'.
- lots of jurisdictions have regulations around face rec systems, so it is important to know if vision LLMs are becoming capable face rec systems.
I measured performance of multiple models on multiple datasets (AgeDB30, LFW, CFP). As a baseline, I used arcface-resnet-100. Note that as there are 24,000 pairs of images, I did not benchmark the more costly commercial APIs:
https://preview.redd.it/8w6p0c239ure1.png?width=5363&format=png&auto=webp&s=78b50ba32c2534ee297306c696748818549047da
**Samples**
https://preview.redd.it/lpab59z59ure1.png?width=1275&format=png&auto=webp&s=d4edb554c4b0f17c415cdb304dd48adee4b3cd28
**Summary**
- Most vision LLMs are very far from even a several-year-old resnet-100.
- All models perform better than random chance.
- The Google models (Gemini, Gemma) perform best.
[Repo here](https://github.com/yhenon/llm-face-vision)
| 2025-03-30T14:38:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jndtf9/assessing_facial_recognition_performance_of/
|
jordo45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jndtf9
| false | null |
t3_1jndtf9
|
/r/LocalLLaMA/comments/1jndtf9/assessing_facial_recognition_performance_of/
| false | false | 2 |
{'enabled': False, 'images': [{'id': 'euK6ZVbbeIF1pEUYA0v6zDVkkw8KaNU3MdIx9pk-PtI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=108&crop=smart&auto=webp&s=7b354964f4695c0da2ce619c800c07e3189d320c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=216&crop=smart&auto=webp&s=313cca393429713fca76df27793ea217580b6750', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=320&crop=smart&auto=webp&s=08a279b4c14e5d639acc90db87c9c22ae4a505d8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=640&crop=smart&auto=webp&s=98656574f88b317c643d15328db5d2353fbf6309', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=960&crop=smart&auto=webp&s=40dbfdb916a5426c8cbdc79d47c83db609e24718', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?width=1080&crop=smart&auto=webp&s=ed2f02bf24f626249758b9dd4e1e5250070390c3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q9B_F_rENth3JFM_pC3cxUL1Sc3Cm_CC0V2ft3_ILkE.jpg?auto=webp&s=cc993c23ddcd7332c3506c8d330564999ee1bf91', 'width': 1200}, 'variants': {}}]}
|
|
It's gotta be coming soon, right?
| 0 | 2025-03-30T14:39:57 |
Porespellar
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnduql
| false | null |
t3_1jnduql
|
/r/LocalLLaMA/comments/1jnduql/its_gotta_be_coming_soon_right/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'iOZipTOEbN73-rs05v6PxO50WpFJ6sPUh99NNRrxrgE', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=108&crop=smart&auto=webp&s=b79c9031a739db5557d452645306139313ca8048', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=216&crop=smart&auto=webp&s=ea73fbc912ee0853f21d316b9d2e09c87867a4ad', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=320&crop=smart&auto=webp&s=2184db19110b33c1cfcd89191e4b7570cb194510', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=640&crop=smart&auto=webp&s=daf19739eac67635c71d3e65daa5aec77aab6614', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=960&crop=smart&auto=webp&s=a6c4fb36ae7941263a9f338b3ea7113fb1c3882e', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?width=1080&crop=smart&auto=webp&s=bd06d81b74e8ed89915d3b1e58d1382ede9500e6', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/wtq45lqi9ure1.jpeg?auto=webp&s=40ea1a1a43dca6a41f261b6206a6ef56579f59bc', 'width': 1536}, 'variants': {}}]}
|
|||
Low profile cpu cooler?
| 1 |
I got an open frame to have more space between GPUs. I got the [Veddha T3 6-GPU](https://www.ebay.com/itm/405431181041)
Unfortunately, my current CPU cooler (Dark Rock Pro 4) does not fit between the mobo level and "gpu tray" so I need to get a lower profile CPU cooler.
I am debating between a low-profile air cooler and watercooling. A smaller air cooler should fit, but then I am afraid the PCIe extenders might be too short to go around the cooler or will be bent too much. On the other hand, a water cooler would use minimal vertical space, but then I need to find a place for the tubes and radiator, which I don't like, and I also generally don't love AIO reliability/durability.
What kind of cooler should I get or avoid?
My CPU is a Ryzen 7950X.
| 2025-03-30T14:55:05 |
https://www.reddit.com/gallery/1jne5xo
|
Ok-Anxiety8313
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jne5xo
| false | null |
t3_1jne5xo
|
/r/LocalLLaMA/comments/1jne5xo/low_profile_cpu_cooler/
| false | false | 1 | null |
|
3 new Llama models inside LMArena (maybe LLama 4?)
| 114 | 2025-03-30T15:08:49 |
https://www.reddit.com/gallery/1jnegrp
|
Straight-Worker-4327
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnegrp
| false | null |
t3_1jnegrp
|
/r/LocalLLaMA/comments/1jnegrp/3_new_llama_models_inside_lmarena_maybe_llama_4/
| false | false | 114 | null |
||
When you prompt a non-thinking model to think, does it actually improve output?
| 39 |
For instance, Mistral 3 24b is not a reasoning model. However, when prompted correctly, I can have it generate <think></think> tags, and iteratively think through the problem.
In practice, I can get it to answer the "strawberry" test correctly more often, but I'm not sure whether that's because it's actually thinking through the problem, or just because asking it to **think harder** improves the chance of being correct.
Is this just mimicking reasoning, or actually helpful?
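For reference, a minimal sketch of the kind of prompt-induced "thinking" described above, via an OpenAI-compatible local endpoint (base URL, port, and model tag are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

SYSTEM = (
    "Before answering, reason step by step inside <think></think> tags, "
    "then give only the final answer after the closing tag."
)

resp = client.chat.completions.create(
    model="mistral-small-24b",  # placeholder tag for a non-reasoning model
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "How many times does the letter r appear in 'strawberry'?"},
    ],
    temperature=0.2,
)

text = resp.choices[0].message.content
# Discard the induced reasoning and keep only what follows the closing tag.
answer = text.split("</think>")[-1].strip() if "</think>" in text else text
print(answer)
```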
| 2025-03-30T15:10:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnehr4/when_you_prompt_a_nonthinking_model_to_think_does/
|
Kep0a
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnehr4
| false | null |
t3_1jnehr4
|
/r/LocalLLaMA/comments/1jnehr4/when_you_prompt_a_nonthinking_model_to_think_does/
| false | false |
self
| 39 | null |
What is deep research to you?
| 4 |
I'm updating an old framework I have to seamlessly perform a simple online search on DuckDuckGo (if the user activates that feature), retrieving only the text of the search results. That yields just an overview of each page's contents, which is OK for a quick search since the results are returned immediately.
The system recognizes complex inquiries intuitively, and if the user requests a deep search, it performs a systematic, agentic search online across 10 results, rather than simply parsing the overview text. I'm trying to get more ideas as to how to actually incorporate and expand deep search functionality to take a broader, systematic, agentic approach. Here is what I have so far:
1 - Activate Deep Search when prompted, generating a query related to the user's inquiry, using the convo history as additional context.
2 - For each search result: check if the website respects robots.txt and if the text overview is related to the user's inquiry and if so, scrape the text inside webpage.
3 - If the webpage contains links, use the user's inquiry, convo history and the scraped text from the page itself (summarizing the text contents from context length-long chunks if the text is greater than the context length before achieving a final summary) to ask a list of questions related to the user's inquiry and the info gathered so far.
4 - After generating the list of questions, a list of links inside the search result is sent to the agent to see if any of the links may be related to the user's inquiry and the list of questions. If any link is detected as relevant, the agent selects that link and recursively performs step 2, but for links instead of search results. Keep in mind this is all done inside the same search result. If none of the links presented are related or there is an issue accessing the link, the agent stops digging and moves on to the next search result.
Once all of that is done, the agent will summarize each chunk of text gathered related to each search result, then provide a final summary before providing an answer to the user.
This actually works surprisingly well and is stable enough to keep going and gathering tons of accurate information. So once I deal with a number of issues (convo history chunking, handling pdf links, etc.) I want to expand the scope of the deep search further to reach even deeper conclusions. Here are some ideas:
1 - Scrape youtube videos - duckduckgo_search allows you to return youtube videos. I already have methods set up to perform the search, auto-download batches of youtube videos based on the search results, and convert them to mp4. This is done with duckduckgo_search, yt-dlp and ffmpeg. All I would need to do afterwards is break up the audio into 30-second temp audio clips, use local whisper to transcribe the audio, and use the deep search agent to chunk/summarize them and include the information as part of the inquiry.
2 - That's it. Lmao.
If you read this far, you're probably thinking to yourself that this would take forever, and honestly, yes, it does take a long time to generate an answer. But when it does, it really does produce a goldmine of information that the agent worked hard to gather. So my version of Deep Search is built with the patient user in mind: people who really need a lot of information, or need to make sure it is incredibly precise, and are willing to wait for results.
I think its interesting to see the effects of scraping youtube videos alongside search results. I tried scraping related images from the links inside the search results but the agent kept correctly discarding the images as irrelevant, which means there usually isn't much valuable info to gather with images themselves.
That being said, I feel like even here I'm not doing enough to provide a satisfactory deep search. I feel like there should be additional functionality included (like RAG, etc.) and I'm personally not satisfied with this approach, even if it does yield valuable information.
So that begs the question: what is your interpretation of deep search and how would you approach it differently?
TL;DR: I have a bot with two versions of search: Shallow search for quick search results, and deep search, for in-depth, systematic, agentic approach to data gathering. Deep search may not be enough to really consider it "deep".
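As an illustration of the per-result step described above (check robots.txt, then scrape the page text and collect its links), here is a minimal sketch using requests and BeautifulSoup; the user-agent string and timeout are arbitrary choices, not what the actual framework uses:

```python
import urllib.robotparser
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

USER_AGENT = "DeepSearchBot/0.1"  # arbitrary identifier used for robots.txt checks

def allowed_by_robots(url: str) -> bool:
    """Fetch the site's robots.txt and ask whether this URL may be crawled."""
    root = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{root.scheme}://{root.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return False  # be conservative if robots.txt can't be fetched
    return rp.can_fetch(USER_AGENT, url)

def scrape_page(url: str) -> tuple[str, list[str]]:
    """Return (visible text, outgoing links) if scraping is permitted, else empty results."""
    if not allowed_by_robots(url):
        return "", []
    html = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)]  # candidates for recursive digging
    return soup.get_text(separator="\n", strip=True), links
```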
| 2025-03-30T15:21:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jner1g/what_is_deep_research_to_you/
|
swagonflyyyy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jner1g
| false | null |
t3_1jner1g
|
/r/LocalLLaMA/comments/1jner1g/what_is_deep_research_to_you/
| false | false |
self
| 4 | null |
Leaked Prompts
| 1 |
[removed]
| 2025-03-30T15:24:55 |
https://github.com/amirh-far/promptMaster
|
Amirh_far
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnetc3
| false | null |
t3_1jnetc3
|
/r/LocalLLaMA/comments/1jnetc3/leaked_prompts/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'vRtsZHKeabra-DJmLn2fc_yrC6nwbxRmTXTmQtdUDPE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=108&crop=smart&auto=webp&s=2eb0daae16b804636f1ca708cd2f88e992aef1be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=216&crop=smart&auto=webp&s=8d211ba66831b8fc1352d19cb10c8a87f633ba47', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=320&crop=smart&auto=webp&s=2c81a21a19a296e87530d050d48e00f4d71698e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=640&crop=smart&auto=webp&s=3ad3be3b733d6add46e5f2fe7483b87235672a57', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=960&crop=smart&auto=webp&s=7a8b024a62d4075c2021a690bddd4adcea18cab0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?width=1080&crop=smart&auto=webp&s=e2eebd6e1d39d04ba935ff2d23b84b325435ff50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RgTDCo8JVxs7gEKsek6bQmnGNYpTanmOfdq09bWaUFM.jpg?auto=webp&s=09f848c060a9abe407a9d5ff5e985e616c1d7949', 'width': 1200}, 'variants': {}}]}
|
|
Leaked Prompts
| 1 |
[removed]
| 2025-03-30T15:26:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jneuyu/leaked_prompts/
|
Amirh_far
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jneuyu
| false | null |
t3_1jneuyu
|
/r/LocalLLaMA/comments/1jneuyu/leaked_prompts/
| false | false |
self
| 1 | null |
Has anyone tried Tarsier2 7B? Insanely impressive video language model
| 27 |
https://huggingface.co/spaces/omni-research/Tarsier2-7b
This one snuck under the radar on me, but from playing around with the demo and looking at the evals, it's honestly really good. I'm quite surprised at the performance for a 7B model.
I just wish there was an MLX or GGUF version. If anyone finds one, please share.
| 2025-03-30T15:28:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnewf8/has_anyone_tried_tarsier2_7b_insanely_impressive/
|
dontreachyoungblud
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnewf8
| false | null |
t3_1jnewf8
|
/r/LocalLLaMA/comments/1jnewf8/has_anyone_tried_tarsier2_7b_insanely_impressive/
| false | false |
self
| 27 |
{'enabled': False, 'images': [{'id': 'Hoag52XEwdxJcL7bfekmo7S3_tEJrx6fAmgA-z2iZho', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=108&crop=smart&auto=webp&s=0391fa25dac2855c81aae169b549ccc8550e0497', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=216&crop=smart&auto=webp&s=c7b914155e94804753478368a93b69bcad4fa861', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=320&crop=smart&auto=webp&s=20fb1273ce6fb2fab4c9a21a513bef363aef1713', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=640&crop=smart&auto=webp&s=9f3817b3a9c223fee942a49e80cbbf8a9293dee3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=960&crop=smart&auto=webp&s=c1559b4282d95a8fd7ea6e86ff7aaff96148930c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?width=1080&crop=smart&auto=webp&s=bf97b5c7b55b21698557a6756c9b8bb310333933', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0XDSBSbx3-YGfMONPeohkCw8h4gFfSqt79N0h2Zozis.jpg?auto=webp&s=17097263f3872c3dfc39b09b9947047592adee63', 'width': 1200}, 'variants': {}}]}
|
JavaScript devs, who is interested in ai agents from scratch?
| 1 |
[removed]
| 2025-03-30T15:30:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jney1u/javascript_devs_who_is_interested_in_ai_agents/
|
purellmagents
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jney1u
| false | null |
t3_1jney1u
|
/r/LocalLLaMA/comments/1jney1u/javascript_devs_who_is_interested_in_ai_agents/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'rgJrEe4WxEufQ5oS6GQfhEyUGlJhgK7yrjAYzfkuoX4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=108&crop=smart&auto=webp&s=2e55e9a97392e4ff1fd2973f4ae0b62ed389823e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=216&crop=smart&auto=webp&s=6074fd78d19281d6cd156f3c71926a82b1756fa5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=320&crop=smart&auto=webp&s=0bd12e07eee7c73fa70d68f2fa686e18d64e0f0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=640&crop=smart&auto=webp&s=ceacb784630ed235b839f8df20dd3d3c7de8e1c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=960&crop=smart&auto=webp&s=71984ddf60b5a69717dbffd4ee13b1808b8045bc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?width=1080&crop=smart&auto=webp&s=379b4e1913721d48378e18c6d48df901b0950926', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/284BsIL2i1C4iaawLP59InWWfajndbOWWo1hEAf0h_w.jpg?auto=webp&s=3083c7e5a6f8c7b7cf54d15ba90ddb88525a3d4e', 'width': 1200}, 'variants': {}}]}
|
Exploiting Large Language Models: Backdoor Injections
| 30 | 2025-03-30T15:36:01 |
https://kruyt.org/llminjectbackdoor/
|
phantagom
|
kruyt.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnf28i
| false | null |
t3_1jnf28i
|
/r/LocalLLaMA/comments/1jnf28i/exploiting_large_language_models_backdoor/
| false | false | 30 |
{'enabled': False, 'images': [{'id': 'jEUhvA2Uzu666ve36ZJe4efAV_CiGqDezQbdl0zRdzk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=108&crop=smart&auto=webp&s=9b5cd066c8d5ba741670bbb1366901de337a7ab6', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=216&crop=smart&auto=webp&s=d811f782275a7d3991cd88716ccbf02a9844421e', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=320&crop=smart&auto=webp&s=1f0f418fdb3405196862b7a7280a9dd842ebf431', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=640&crop=smart&auto=webp&s=51fe2f830643e486ffcf2effb92ea6a480e64cf2', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=960&crop=smart&auto=webp&s=da5d21efd172a482d374a35af6e4b18d91d88a57', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?width=1080&crop=smart&auto=webp&s=81ce8a59d0a8d5b0aaa7ba3ad6f158c17168f069', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/UeXieNkboMcAP3kmecNJZk2I2AO_-fluFc5hU8_aXv8.jpg?auto=webp&s=e33ff0361b09057528ce8c32d5d9c5021a547f74', 'width': 1536}, 'variants': {}}]}
|
||
It's been 1000 releases and 5000 commits in llama.cpp
| 638 |
1000th release of llama.cpp
Almost 5000 commits. (4998)
It all started with the Llama 1 leak.
Thank you, team. Someone tag 'em if you know their handle.
| 2025-03-30T16:04:30 |
https://github.com/ggml-org/llama.cpp/releases
|
Yes_but_I_think
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnfpnr
| false | null |
t3_1jnfpnr
|
/r/LocalLLaMA/comments/1jnfpnr/its_been_1000_releases_and_5000_commits_in/
| false | false | 638 |
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
|
|
Agent - A Local Computer-Use Operator for macOS
| 27 |
We've just open-sourced Agent, our framework for running computer-use workflows across multiple apps in isolated macOS/Linux sandboxes.
After launching Computer a few weeks ago, we realized many of you wanted to run complex workflows that span multiple applications. Agent builds on Computer to make this possible. It works with local Ollama models (if you're privacy-minded) or cloud providers like OpenAI, Anthropic, and others.
**Why we built this:**
We kept hitting the same problems when building multi-app AI agents - they'd break in unpredictable ways, work inconsistently across environments, or just fail with complex workflows. So we built Agent to solve these headaches:
- It handles complex workflows across multiple apps without falling apart
- You can use your preferred model (local or cloud) - we're not locking you into one provider
- You can swap between different agent loop implementations depending on what you're building
- You get clean, structured responses that work well with other tools
**The code is pretty straightforward:**
```python
async with Computer() as macos_computer:
    agent = ComputerAgent(
        computer=macos_computer,
        loop=AgentLoop.OPENAI,
        model=LLM(provider=LLMProvider.OPENAI)
    )

    tasks = [
        "Look for a repository named trycua/cua on GitHub.",
        "Check the open issues, open the most recent one and read it.",
        "Clone the repository if it doesn't exist yet."
    ]

    for i, task in enumerate(tasks):
        print(f"\nTask {i+1}/{len(tasks)}: {task}")
        async for result in agent.run(task):
            print(result)
        print(f"\nFinished task {i+1}!")
```
**Some cool things you can do with it:**
- Mix and match agent loops - OpenAI for some tasks, Claude for others, or try our experimental OmniParser
- Run it with various models - works great with OpenAI's computer_use_preview, but also with Claude and others
- Get detailed logs of what your agent is thinking/doing (super helpful for debugging)
- All the sandboxing from Computer means your main system stays protected
**Getting started is easy:**
pip install "cua-agent\[all\]"
\# Or if you only need specific providers:
pip install "cua-agent\[openai\]" # Just OpenAI
pip install "cua-agent\[anthropic\]" # Just Anthropic
pip install "cua-agent\[omni\]" # Our experimental OmniParser
We've been dogfooding this internally for weeks now, and it's been a game-changer for automating our workflows. **Grab the code at** [**https://github.com/trycua/cua**](https://github.com/trycua/cua)
Would love to hear your thoughts LocalLLaMA community! :)
| 2025-03-30T16:21:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jng3cz/agent_a_local_computeruse_operator_for_macos/
|
sandropuppo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jng3cz
| false | null |
t3_1jng3cz
|
/r/LocalLLaMA/comments/1jng3cz/agent_a_local_computeruse_operator_for_macos/
| false | false |
self
| 27 |
{'enabled': False, 'images': [{'id': '5GJTZnc4ZxMXQwCeu-CNlmHRCO0nJ5-InN0X_d2k-U4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=108&crop=smart&auto=webp&s=c50b45dfbc4241e576e8f487985cf78466899a12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=216&crop=smart&auto=webp&s=d97612e928e5535736471f2921c5102ae547edd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=320&crop=smart&auto=webp&s=8a3a6927f206038e23f7ddc4d17cbfd8d6c8854e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=640&crop=smart&auto=webp&s=f8abf618996a2866c61e0af6e2ba58277a43cd91', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=960&crop=smart&auto=webp&s=af95c4d94a380e213f463052163a7cff8bca85a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?width=1080&crop=smart&auto=webp&s=03e5af8dbd63bdfe8016a4847cd277f811d71feb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9oDAWLZq0fjf-dMhsd7lqoXAzf2547nE--PGf3Fb62w.jpg?auto=webp&s=ea957c0790814aaa564461c3ac7f5506748970e8', 'width': 1200}, 'variants': {}}]}
|
[2503.18908] FFN Fusion: Rethinking Sequential Computation in Large Language Models
| 8 | 2025-03-30T16:29:37 |
https://arxiv.org/abs/2503.18908
|
Thrumpwart
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1jng9r9
| false | null |
t3_1jng9r9
|
/r/LocalLLaMA/comments/1jng9r9/250318908_ffn_fusion_rethinking_sequential/
| false | false |
default
| 8 | null |
|
Synthesize Multimodal Thinking Datasets for Spatial Reasoning
| 11 |
Spatial reasoning is a key capability for embodied AI applications like robotics.
After recent updates to VQASynth, you can synthesize R1-style CoT reasoning traces to train your VLM to use test-time compute for enhanced spatial reasoning.
Additional updates help to apply VGGT for better 3D scene reconstruction and Molmo with point prompting for SAM2.
https://preview.redd.it/125lq3iqture1.png?width=957&format=png&auto=webp&s=647c3c2efb8e1cef93b8d2eda62266a41313166d
Stay tuned for the "SpaceThinker" dataset and VLM coming soon!
SpaceThinker data will be formatted similar to NVIDIA's [https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset-v1](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset-v1)
The SpaceThinker model will use NVIDIA's [https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) as the LLM backbone for training a LLaVA-style VLM similar to this colab: [https://colab.research.google.com/drive/1R64daHgR50GnxH3yn7mcs8rnldWL1ZxF?usp=sharing](https://colab.research.google.com/drive/1R64daHgR50GnxH3yn7mcs8rnldWL1ZxF?usp=sharing)
Make multimodal thinking data from any HF image datasets: [https://github.com/remyxai/VQASynth](https://github.com/remyxai/VQASynth)
More discussion in HF: [https://huggingface.co/spaces/open-r1/README/discussions/10](https://huggingface.co/spaces/open-r1/README/discussions/10)
| 2025-03-30T16:40:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jngirf/synthesize_multimodal_thinking_datasets_for/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jngirf
| false | null |
t3_1jngirf
|
/r/LocalLLaMA/comments/1jngirf/synthesize_multimodal_thinking_datasets_for/
| false | false | 11 |
{'enabled': False, 'images': [{'id': '7QsRThCfp8A6wtIMh2K6kzb51LqJj_h8cAR7_jGoxwE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=108&crop=smart&auto=webp&s=68a541c7600466a69974e4c1328873ea837f6bc0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=216&crop=smart&auto=webp&s=9605c239baba7009f15e8d1e3d2997999835208a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=320&crop=smart&auto=webp&s=c1fb6975b8680891ca3808db5e38f79a839e9d27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=640&crop=smart&auto=webp&s=fe96bc6532b66babfcefeff7976a82063fdfc54a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=960&crop=smart&auto=webp&s=25760eb6ad933089a495a38626a7847b3aa5c796', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?width=1080&crop=smart&auto=webp&s=f252169187082deaaffa1e269f7da75a88dd6092', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kj_Ee04DEpHPCSeKKwKjPg3O1y9d99_Y7wBBdZAkEqs.jpg?auto=webp&s=1e5c845671abb92a4e4654c956b5eafd8123dab7', 'width': 1200}, 'variants': {}}]}
|
|
I built a coding agent that allows qwen2.5-coder to use tools
| 97 | 2025-03-30T16:41:24 |
bobaburger
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jngj5u
| false | null |
t3_1jngj5u
|
/r/LocalLLaMA/comments/1jngj5u/i_built_a_coding_agent_that_allows_qwen25coder_to/
| false | false | 97 |
{'enabled': True, 'images': [{'id': '1jgUpM8ge-WF5HuKQ1bMYKN_I7b1eWR9oQdfrGXO26s', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=108&crop=smart&auto=webp&s=220681cab55e4178ba8ff69e3cd2011c268ce217', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=216&crop=smart&auto=webp&s=f4574f89f87a6f41044137b02aac6d0eaed32400', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=320&crop=smart&auto=webp&s=5350360f476729951e79b7c0f524712ca885d25c', 'width': 320}, {'height': 466, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=640&crop=smart&auto=webp&s=5447b0990d64ea3d82b01889605650baf3b6948d', 'width': 640}, {'height': 700, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=960&crop=smart&auto=webp&s=5c2973b8c949382071b15571fbe7828a2d14f1bc', 'width': 960}, {'height': 787, 'url': 'https://preview.redd.it/1erih6euuure1.png?width=1080&crop=smart&auto=webp&s=9b1f38f3ba4c783610082ffa9c53d5157b3ab3b6', 'width': 1080}], 'source': {'height': 1824, 'url': 'https://preview.redd.it/1erih6euuure1.png?auto=webp&s=244a36687d9f6d346d6c116119fc679094fd19c3', 'width': 2500}, 'variants': {}}]}
|
|||
Text to Sound FX?
| 2 |
Do these exist? Seems all the TTS projects are focusing on real speech, but I'm looking for sound FX like you'd use in video games, movies, etc. The closest I've found is ElevenLabs, but phew, that's expensive. I've only got 20GB VRAM to work with, though.
| 2025-03-30T16:52:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jngs07/text_to_sound_fx/
|
krileon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jngs07
| false | null |
t3_1jngs07
|
/r/LocalLLaMA/comments/1jngs07/text_to_sound_fx/
| false | false |
self
| 2 | null |
Dou (道) - Visual Knowledge Organization and Analysis Tool
| 30 | 2025-03-30T16:53:31 |
https://github.com/shokuninstudio/Dou
|
shokuninstudio
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jngswb
| false | null |
t3_1jngswb
|
/r/LocalLLaMA/comments/1jngswb/dou_้_visual_knowledge_organization_and_analysis/
| false | false | 30 |
{'enabled': False, 'images': [{'id': 'Tsa16t6QtVhpweHlIbPMtBbyxKTtvVQHQDF4daloZd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=108&crop=smart&auto=webp&s=1e4e776fc4dfaf4b321c5195051d69fe31d29ac7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=216&crop=smart&auto=webp&s=5976a968454c44069f7eef5df2814c4dae993245', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=320&crop=smart&auto=webp&s=3f990af6d6d7f3560b57d853451a1958e8dcef0f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=640&crop=smart&auto=webp&s=5a2a61ea030a8d438936d0ede1ad351db268caa9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=960&crop=smart&auto=webp&s=a957acc278a887bf4f657de89ca46b03bc9d853b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?width=1080&crop=smart&auto=webp&s=99f5c8565e0d0508e50f707e9d23456ba985ff0b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GOPYVA45L4Hs0UVngiY3mnW7r51k7UY71hhV_hEB_XU.jpg?auto=webp&s=1f0d890de640a50a07b44002dd7421c518e3c1ad', 'width': 1200}, 'variants': {}}]}
|
||
Can't get llama-cpp-python to work on Intel i5 Macbook
| 1 |
[removed]
| 2025-03-30T16:55:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jngu8i/cant_get_llamacpppython_to_work_on_intel_i5/
|
CampTouchThis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jngu8i
| false | null |
t3_1jngu8i
|
/r/LocalLLaMA/comments/1jngu8i/cant_get_llamacpppython_to_work_on_intel_i5/
| false | false |
self
| 1 | null |
New Storywriting Software (opensource)
| 1 |
[removed]
| 2025-03-30T17:07:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnh4wa/new_storywriting_software_opensource/
|
falconandeagle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnh4wa
| false | null |
t3_1jnh4wa
|
/r/LocalLLaMA/comments/1jnh4wa/new_storywriting_software_opensource/
| false | false |
self
| 1 | null |
Top WebAPP UI Model
| 1 |
I am looking for a model that is good at UI and making UX decisions. With most models you have to explicitly tell the model exactly what size you want something and where exactly it should be placed. Instead, does anyone have any recommended models that would make the UI/UX better for my web app? Normally I just point Sonnet at something like a design language and say "follow this." If anyone has some top UI/UX experience, I'd appreciate it!
| 2025-03-30T17:12:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnh8kt/top_webapp_ui_model/
|
No-Fig-8614
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnh8kt
| false | null |
t3_1jnh8kt
|
/r/LocalLLaMA/comments/1jnh8kt/top_webapp_ui_model/
| false | false |
self
| 1 | null |
Hey guys, does anyone know some good prompts for RP?
| 1 |
Alright, so look, I'm new to this in general. I used Character AI for some time and then left it, and now I'm getting back into AI RP. I wanted to know a good "AI prompt", you know, the one that's given to the actual AI behind the chat. I want a good one that works well for RP. You guys probably know the lore about this, so please help me out.
| 2025-03-30T17:20:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnhfok/hey_guys_so_anyome_know_some_good_prompt_for_rp/
|
u_GalacticVoyager
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnhfok
| false | null |
t3_1jnhfok
|
/r/LocalLLaMA/comments/1jnhfok/hey_guys_so_anyome_know_some_good_prompt_for_rp/
| false | false |
self
| 1 | null |
Llama 3.2 going insane on Facebook
| 50 |
It kept going like this.
| 2025-03-30T17:39:40 |
https://www.reddit.com/gallery/1jnhuy3
|
Reader3123
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnhuy3
| false | null |
t3_1jnhuy3
|
/r/LocalLLaMA/comments/1jnhuy3/llama_32_going_insane_on_facebook/
| false | false | 50 | null |
|
Gemini 2.5 pro - I tried upload via API 140k TXT file but getting error
| 0 |
Hello
Sorry that I have to write here, but the Google AI community is 20x smaller, Gemini 2.5 is free anyway ;), and you are probably using it here.
I tried uploading a 140k-token TXT file via the API but get an error. It works fine if I upload small TXT files via the API, for instance 1k tokens.
API is reporting
--- Checking limits for model: gemini-2.5-pro-exp-03-25 ---
Reported Display Name: Gemini 2.5 Pro Experimental 03-25
Supported Methods: ['generateContent', 'countTokens']
Input Token Limit: 1048576
Output Token Limit: 65536
Gemini 2.5 Pro should have a 1M-token context window, I thought?
Or maybe I am doing something wrong?
Via AI Studio it is of course working fine...
error
Full InternalServerError object: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting
| 2025-03-30T17:48:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jni1vq/gemini_25_pro_i_tried_upload_via_api_140k_txt/
|
Healthy-Nebula-3603
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jni1vq
| false | null |
t3_1jni1vq
|
/r/LocalLLaMA/comments/1jni1vq/gemini_25_pro_i_tried_upload_via_api_140k_txt/
| false | false |
self
| 0 | null |
How do you interact with LLMs?
| 0 |
I'm curious about how others interact with their LLMs day-to-day. SPECIFICALLY, for coding and development tasks.
Does everyone use tools like Windsurf or Cursor for AI coding assistance? Or do you have your own unique approach?
I found the integrated IDE solutions to be clunky and limiting. So, I built my own VS Code extension, "Concatenate for AI," which lets me manually generate and control the context I send to LLMs.
The extension does one thing well: it lets me select multiple files in VS Code and bundle them into a single, correctly formatted output (using markdown code blocks with the file type and file path) that I copy and paste into the LLM I'm working with.
Works exceptionally well with Google Gemini 2.5
I've found that being deliberate about context has given me dramatically better results than letting an integration decide what to send.
Do you use the fancy AI coding assistants, or have you found other better methods for your workflow? Obviously, every job and task is different, what do you do and what tools do you use?
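A standalone sketch of the same idea, for anyone who wants it outside VS Code (the extension-to-language map and CLI usage are illustrative, not how the actual extension is implemented):

```python
import sys
from pathlib import Path

# Rough suffix -> fence-language mapping; extend as needed.
LANG = {".py": "python", ".ts": "typescript", ".js": "javascript", ".md": "markdown"}
FENCE = "`" * 3  # build the fence at runtime to keep this example readable

def bundle(paths: list[str]) -> str:
    """Concatenate files into markdown code blocks labeled with language and path."""
    chunks = []
    for p in paths:
        path = Path(p)
        lang = LANG.get(path.suffix, "")
        chunks.append(f"{FENCE}{lang} {path}\n{path.read_text()}\n{FENCE}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    # e.g. python bundle.py src/app.py src/utils.py > context.md
    print(bundle(sys.argv[1:]))
```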
| 2025-03-30T17:54:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jni70u/how_do_you_interact_with_llms/
|
nooblito
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jni70u
| false | null |
t3_1jni70u
|
/r/LocalLLaMA/comments/1jni70u/how_do_you_interact_with_llms/
| false | false |
self
| 0 | null |
Dual DGX Spark for inference - performance scaling vs multi-GPU setups
| 1 |
[removed]
| 2025-03-30T18:21:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnitt0/dual_dgx_spark_for_inference_performance_scaling/
|
random-username-1911
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnitt0
| false | null |
t3_1jnitt0
|
/r/LocalLLaMA/comments/1jnitt0/dual_dgx_spark_for_inference_performance_scaling/
| false | false |
self
| 1 | null |
Why is table extraction still not solved by modern multimodal models?
| 6 |
There is a lot of hype around multimodal models, such as Qwen 2.5 VL or Omni, GOT, SmolDocling, etc. I would like to know if others have had a similar experience in practice: while they can do impressive things, they still struggle with table extraction in cases which are straightforward for humans.
Attached is a simple example; all I need is a reconstruction of the table as a flat CSV, preserving all empty cells correctly. Which open source model is able to do that?
https://preview.redd.it/plxeduw4gvre1.png?width=1650&format=png&auto=webp&s=4335df4bb50a101113a4e8060212a351cb2111a0
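When comparing models on cases like this, a simple cell-level check against a hand-made ground-truth CSV makes the "preserve empty cells" requirement measurable. A minimal sketch (file names are placeholders):

```python
import csv

def load_grid(path: str) -> list[list[str]]:
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)]

def cell_accuracy(pred_csv: str, truth_csv: str) -> float:
    """Fraction of cells (including empty ones) that match the ground truth exactly."""
    pred, truth = load_grid(pred_csv), load_grid(truth_csv)
    total = correct = 0
    for r, truth_row in enumerate(truth):
        for c, truth_cell in enumerate(truth_row):
            total += 1
            pred_cell = pred[r][c] if r < len(pred) and c < len(pred[r]) else None
            correct += int(pred_cell is not None and pred_cell.strip() == truth_cell.strip())
    return correct / total if total else 0.0

# print(cell_accuracy("model_output.csv", "ground_truth.csv"))
```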
| 2025-03-30T18:39:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnj8ov/why_is_table_extraction_still_not_solved_by/
|
Electronic-Letter592
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnj8ov
| false | null |
t3_1jnj8ov
|
/r/LocalLLaMA/comments/1jnj8ov/why_is_table_extraction_still_not_solved_by/
| false | false | 6 | null |
|
Benchmark: RTX 3090, 4090, and even 4080 are surprisingly strong for 1-person QwQ-32B inference. (but 5090 not yet)
| 105 |
I don't want to send all of my code to any outside company, but I still want to use AI code completion. Accordingly, I was curious how fast various GPUs would be for hosting when there's only 1 user: me. I used vLLM and `QwQ-32B-Q4_K_M` for benchmarking.
`median_ttft_ms` measures how long it takes for the GPU to handle the context and parse my request. And then `median_otps` is how many output tokens the GPU can generate per second. (OTPS = Output Tokens Per Second) Overall, the `median_ttft_ms` values were all <1s unless the card was overloaded and I think they will rarely matter in practice. That means the race is on for the highest OTPS.
As expected, an H200 is fast with 334ms + 30 OTPS. The H100 NVL is still fast with 426ms + 23 OTPS. The "old" H100 with HBM3 is similar at 310ms + 22 OTPS.
But I did not expect 2x RTX 4080 to score 383ms + 33 OTPS, which is really close to the H200, and that's somewhat insane if you consider that I'm comparing a 34000€ datacenter product with a 1800€ home setup. An old pair of 2x RTX 3090 is also workable at 564ms + 28 OTPS. And a (watercooled and gently overclocked) RTX 3090 TI rocked the ranking with 558ms + 36 OTPS. You can also clearly see that vLLM is not fully optimized for the RTX 5090 yet, because there the official docker image did not work (yet) and I had to compile from source, and still the results are somewhat meh with 517ms + 18 OTPS, which is slightly slower than a single 4090.
You'll notice that the consumer GPUs are slower in the initial context and request parsing. That makes sense because that task is highly parallel, i.e. what datacenter products were optimized for. But due to higher clock speeds and more aggressive cooling, consumer GPUs outcompete both H100 and H200 at output token generation, which is the sequential part of the task.
Here's my raw result JSONs from `vllm/benchmarks/benchmark_serving.py` and a table with even more hardware variations: [https://github.com/DeutscheKI/llm-performance-tests](https://github.com/DeutscheKI/llm-performance-tests)
Anyway, my take-aways from this would be:
1. RAM clock dominates everything. OC for the win!
2. Go with 2x 4080 over a single 4090 or 5090.
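If you want a quick sanity check of TTFT/OTPS against your own endpoint without running the full benchmark script, a rough sketch like this gets you in the ballpark (the URL and model name are placeholders, and streamed chunks only approximate tokens):

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # your vLLM server
start, first, chunks = time.perf_counter(), None, 0

stream = client.chat.completions.create(
    model="QwQ-32B",  # must match the served model name
    messages=[{"role": "user", "content": "Write a binary search function in Python."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first is None:
            first = time.perf_counter()  # time to first streamed token
        chunks += 1
end = time.perf_counter()
print(f"TTFT: {(first - start) * 1000:.0f} ms, ~OTPS: {chunks / (end - first):.1f}")
```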
| 2025-03-30T19:01:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnjrdk/benchmark_rtx_3090_4090_and_even_4080_are/
|
fxtentacle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnjrdk
| false | null |
t3_1jnjrdk
|
/r/LocalLLaMA/comments/1jnjrdk/benchmark_rtx_3090_4090_and_even_4080_are/
| false | false |
self
| 105 |
{'enabled': False, 'images': [{'id': 'bwk65Xqn84glZ0DQg0CbW-tn0VJnpTArlIQrfH3t6fg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=108&crop=smart&auto=webp&s=e7113e607573e1959a8dd6c26dd4b115d85480c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=216&crop=smart&auto=webp&s=96a12e6a22daea8acdf91357ee0b33e1910eaecb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=320&crop=smart&auto=webp&s=4fa128061458893bcbbe4a847eb889bf5a38034d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=640&crop=smart&auto=webp&s=9e85e1a766cd10898cae7d7e7865759acacba2ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=960&crop=smart&auto=webp&s=df8ab9a5ec378ffd84b983bf19224f34cb49b53e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?width=1080&crop=smart&auto=webp&s=d1bf46bf4c270206c179e1b6ee133c778c898e7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cTAF9TBQZn0zO0VP6WBusQ9eIsJv4f-Mhvsmo-FCJH8.jpg?auto=webp&s=c689087c8c44a640e4978c953d9c480c2f7bbca3', 'width': 1200}, 'variants': {}}]}
|
Using a locally-run LLM during exam
| 0 |
So, I have an upcoming coding exam in about a week. It's completely open-book, and they allow us to bring a flash drive with whatever help files we need; however, the computers themselves are restricted to only access the exam server and not the general internet. I think it would be absolutely hilarious if I brought a flash drive with some lightweight model and just had it code everything for me (it's very simple things, no complex tasks; I could do it myself, but it wouldn't be as funny). There's no rule explicitly forbidding that in the exam - they're very lax.
How feasible is doing that? The computers in the exam hall aren't insanely powerful, but they're decent.
| 2025-03-30T19:32:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnkgsx/using_a_locallyrun_llm_during_exam/
|
oMGalLusrenmaestkaen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnkgsx
| false | null |
t3_1jnkgsx
|
/r/LocalLLaMA/comments/1jnkgsx/using_a_locallyrun_llm_during_exam/
| false | false |
self
| 0 | null |
Help with a Docker Issue please.
| 1 |
[removed]
| 2025-03-30T19:47:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnktmm/help_with_a_docker_issue_please/
|
GeekAdventureTeam
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnktmm
| false | null |
t3_1jnktmm
|
/r/LocalLLaMA/comments/1jnktmm/help_with_a_docker_issue_please/
| false | false |
self
| 1 | null |
What is this spider model from meta??,is it really from meta?
| 9 |
I was randomly playing around with LMArena, testing various models' emotional and intellectual responses. During my testing, I found one model that was particularly good emotionally, and it explicitly gave a few book titles related to the subject of discussion. When I asked, "Who are you?", it replied, "I am an LLM developed by Meta AI" (refer to image 1).
After a few conversations, when I had to choose the better model between the two, it revealed its name as "Spider" (refer to image 2).
I couldn't find any information online about Meta AI releasing a model named Spider. Could it be that they are secretly developing this LLM and testing it on LMArena for evaluation purposes?
| 2025-03-30T19:48:14 |
https://www.reddit.com/gallery/1jnktzm
|
oru____umilla
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnktzm
| false | null |
t3_1jnktzm
|
/r/LocalLLaMA/comments/1jnktzm/what_is_this_spider_model_from_metais_it_really/
| false | false | 9 | null |
|
Where are the TTS at?
| 2 |
I know Piper TTS, Kokoro, Parler etc., but I don't know any TTS that can TRULY synthesize voice.
Voice cloning is one thing, and it has gotten pretty convincing; some results from the models mentioned have pretty good quality, at least in English. However, I am looking for a TTS that can
* Understand the text and tone it accordingly (dynamic)
* And/or can be directed with tags like <whisper>, <sad>, <joyful>, <angry> etc
* Preferably can also really synthesize a voice from a prompt
* Maybe Loras or something trained for a specific character?
* Can run locally
There was some progress in '23 and '24, but it seems like the whole development stalled out. While image gen can now produce quite good videos and text gen gets something new every few weeks, it feels like there is little to no progress in AI voice.
| 2025-03-30T20:03:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnl793/where_are_the_tts_at/
|
dreamyrhodes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnl793
| false | null |
t3_1jnl793
|
/r/LocalLLaMA/comments/1jnl793/where_are_the_tts_at/
| false | false |
self
| 2 | null |
We've built JFK RAG Arena
| 1 |
[removed]
| 2025-03-30T20:03:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnl799/weve_built_jfk_rag_arena/
|
Less_Potential386
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnl799
| false | null |
t3_1jnl799
|
/r/LocalLLaMA/comments/1jnl799/weve_built_jfk_rag_arena/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'B81ewrL9G9C-RakVz0IIzzvR94eTte-cYDo2UH0-pGw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=108&crop=smart&auto=webp&s=6c6239f0aff3cc0baaae4bf2ca79282e596fb66a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=216&crop=smart&auto=webp&s=9162b370252d6b655259db4d76048f5517ce4ac4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=320&crop=smart&auto=webp&s=3ad6a3fc929cb4ba53002ef9504f7987ed8673a8', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=640&crop=smart&auto=webp&s=c92da69cc0fee4ec0e06f0efa920b1103d685aeb', 'width': 640}, {'height': 505, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=960&crop=smart&auto=webp&s=b2888bc0b3f73deb1353dbbc265e930d5ffbf19a', 'width': 960}, {'height': 568, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?width=1080&crop=smart&auto=webp&s=01dd6dd607ef826b7dca170837f80129491d9cd8', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/Py33t6iGedS9FLBkYEO8PNxQq8q5DlPhbzbwIyBzb-w.jpg?auto=webp&s=92fdca095be9e2f72de8b63f9061f48232e24906', 'width': 2392}, 'variants': {}}]}
|
We experimented with developing cross language voice cloning TTS for Indic Languages
| 22 |
We at our startup FuturixAI experimented with developing cross-language voice cloning TTS models for Indic languages.
Here is the result
Currently developed for Hindi, Telugu and Marathi
| 2025-03-30T20:11:03 |
https://v.redd.it/h9sc3kdiwvre1
|
Aquaaa3539
|
/r/LocalLLaMA/comments/1jnld6i/we_experimented_with_developing_cross_language/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnld6i
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h9sc3kdiwvre1/DASHPlaylist.mpd?a=1746087074%2CYzJlMzY1MWRjYjg0ZjYxMWNiMmIyZWM0Y2EwOTY0ZDdkOGVhNWRmMDI0OGVhYjdmMzAyYjQzYTM4ZjU5MjBkNQ%3D%3D&v=1&f=sd', 'duration': 118, 'fallback_url': 'https://v.redd.it/h9sc3kdiwvre1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/h9sc3kdiwvre1/HLSPlaylist.m3u8?a=1746087074%2COGE4YmZhNGE4NjkxYjY2ZGI2ZDk1ZTljM2VhZGMxN2I0ZGE3NjVlMTgzYWI1MTQ0ZTMyOGEyZDdmZGI2ZmYwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/h9sc3kdiwvre1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jnld6i
|
/r/LocalLLaMA/comments/1jnld6i/we_experimented_with_developing_cross_language/
| false | false | 22 |
{'enabled': False, 'images': [{'id': 'ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=108&crop=smart&format=pjpg&auto=webp&s=fbc34e27316ca4f901c575df357f7899541ef8e7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=216&crop=smart&format=pjpg&auto=webp&s=71ff34c14960010f7d729eaed7813d27227e4ef0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=320&crop=smart&format=pjpg&auto=webp&s=fe6997aa447ce15490c722f0487df7e9f05a4785', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=640&crop=smart&format=pjpg&auto=webp&s=efa2e69b93cc256e0378847c9595bc488ed6257d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=960&crop=smart&format=pjpg&auto=webp&s=d0362bdfc9a6035a66754f5d0b8897dfd9b0d965', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=907cb01987999d2b2c5639436f4546dbfa0859e7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZXB1a3Y5ZGl3dnJlMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?format=pjpg&auto=webp&s=adc6128ad3ff06d4d29aa097b6b99d48a4f31886', 'width': 1920}, 'variants': {}}]}
|
|
RX 9070 XT llama.cpp (ROCm) files here
| 1 |
[removed]
| 2025-03-30T20:33:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnlwed/rx_9070_xt_llamacpp_rocm_files_here/
|
Heavy-Cap4626
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnlwed
| false | null |
t3_1jnlwed
|
/r/LocalLLaMA/comments/1jnlwed/rx_9070_xt_llamacpp_rocm_files_here/
| false | false |
self
| 1 | null |
Free Search: Updates and Improvements.
| 26 |
Hi all,
Last week, I open sourced Free Search API. It allows sourcing results from top search engines (including google, bing) for free. It uses searxng instances for this purpose.
I was overwhelmed by the community's response and I am glad for all the support and suggestions. Today, I have pushed several improvements that make this API more stable. These improvements include:
1) Parallel scraping of search results for faster response
2) Markdown formatting of search results
3) Prioritizing SearXNG instances that have faster google response time
4) Update/Get endpoints for searxng instances.
Github: [https://github.com/HanzlaJavaid/Free-Search/tree/main](https://github.com/HanzlaJavaid/Free-Search/tree/main)
Try the deployed version: [https://freesearch.replit.app/docs](https://freesearch.replit.app/docs)
I highly appreciate PRs, issues, stars, and any kind of feedback.
| 2025-03-30T20:37:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnlzb6/free_search_updates_and_improvements/
|
Far-Celebration-470
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnlzb6
| false | null |
t3_1jnlzb6
|
/r/LocalLLaMA/comments/1jnlzb6/free_search_updates_and_improvements/
| false | false |
self
| 26 | null |
Memory Management Issues with Llama 3.2 3B checkpoint with PyTorch
| 1 |
[removed]
| 2025-03-30T20:52:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnmc5q/memory_management_issues_with_llama_32_3b/
|
ml_ds123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnmc5q
| false | null |
t3_1jnmc5q
|
/r/LocalLLaMA/comments/1jnmc5q/memory_management_issues_with_llama_32_3b/
| false | false |
self
| 1 | null |
Are there ready-to-use RAG (w local llm) projects for wikis?
| 8 |
Pretty much the title. Wiki pages are somewhat standardized; is there already some kind of project for throwing the content into a RAG?
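For context, the wiki-specific part (exporting the pages) is small for MediaWiki-style wikis; a rough sketch (the page title and wiki URL are just examples), after which the rest is standard RAG chunking and embedding with whatever stack you already use:

```python
import requests

# The TextExtracts API returns plain text for a page; Wikipedia and many MediaWiki wikis expose /w/api.php
r = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={"action": "query", "prop": "extracts", "explaintext": 1,
            "titles": "Retrieval-augmented generation", "format": "json"},
    timeout=30,
)
page = next(iter(r.json()["query"]["pages"].values()))["extract"]

# Naive fixed-size chunking; swap in heading-aware splitting for better retrieval
chunks = [page[i:i + 1000] for i in range(0, len(page), 1000)]
print(len(chunks), chunks[0][:120])
```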
| 2025-03-30T20:54:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnmdlk/are_there_readytouse_rag_w_local_llm_projects_for/
|
la_baguette77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnmdlk
| false | null |
t3_1jnmdlk
|
/r/LocalLLaMA/comments/1jnmdlk/are_there_readytouse_rag_w_local_llm_projects_for/
| false | false |
self
| 8 | null |
Memory Management Issues with Llama 3.2 3B checkpoint with PyTorch
| 1 |
[removed]
| 2025-03-30T20:56:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnmf2e/memory_management_issues_with_llama_32_3b/
|
ml_ds123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnmf2e
| false | null |
t3_1jnmf2e
|
/r/LocalLLaMA/comments/1jnmf2e/memory_management_issues_with_llama_32_3b/
| false | false |
self
| 1 | null |
Memory Management Issues with Llama 3.2 3B checkpoint with PyTorch
| 1 |
[removed]
| 2025-03-30T20:57:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnmfjv/memory_management_issues_with_llama_32_3b/
|
ml_ds123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnmfjv
| false | null |
t3_1jnmfjv
|
/r/LocalLLaMA/comments/1jnmfjv/memory_management_issues_with_llama_32_3b/
| false | false |
self
| 1 | null |
Are LLMs good for data extraction?
| 1 |
[removed]
| 2025-03-30T21:43:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnnh3j/are_llms_good_for_data_extraction/
|
Gregory-Wolf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnnh3j
| false | null |
t3_1jnnh3j
|
/r/LocalLLaMA/comments/1jnnh3j/are_llms_good_for_data_extraction/
| false | false |
self
| 1 | null |
Ollama/LLM reverting to GPU on reboot
| 1 |
I have a recurring problem with my LLM server. Every time I reboot after setting it up, it runs the LLM on the CPU instead of the GPU. I've rebuilt the server multiple times thinking something was wrong with the driver, updates, or whatever. It seems to be a config problem, though. When I search online, the popular answer is the driver or the GPU passthrough.
I know there's an Ollama reddit page, but I lurk around here more often than there.
Server:
Proxmox VM on Ubuntu 24.04 (Older dual processor Dell 7820 with PCI Gen 3)
20 Xeon cores
64GB RAM
3060 GPU- Directly connected as "Raw" device. IOMMU configured in Proxmox
400GB allocated on a SSD
Running ollama with Gemma 3 18b model with Open Webui docker container.
Has anyone run into this issue, or is anyone aware of a step that I may have missed? Looking for help. Thanks.
| 2025-03-30T22:45:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnotbr/ollamallm_reverting_to_gpu_on_reboot/
|
DiscombobulatedAdmin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnotbr
| false | null |
t3_1jnotbr
|
/r/LocalLLaMA/comments/1jnotbr/ollamallm_reverting_to_gpu_on_reboot/
| false | false |
self
| 1 | null |
Ollama/LLM reverting to CPU after reboot.
| 0 |
I have a recurring problem with my LLM server. Every time I reboot after setting it up, it runs the LLM on the CPU instead of the GPU. I've rebuilt the server multiple times thinking something was wrong with the driver, updates, or whatever. It seems to be a config problem, though. When I search online, the popular answer is the driver or the GPU passthrough.
I know there's an Ollama reddit page, but I lurk around here more often than there.
Server:
Proxmox VM on Ubuntu 24.04 (Older dual processor Dell 7820 with PCI Gen 3)
20 Xeon cores
64GB RAM
3060 GPU- Directly connected as "Raw" device. IOMMU configured in Proxmox
400GB allocated on a SSD
Running ollama with Gemma 3 18b model with Open Webui docker container.
Has anyone run into this issue, or is anyone aware of a step that I may have missed? Looking for help. Thanks.
| 2025-03-30T22:47:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnounl/ollamallm_reverting_to_cpu_after_reboot/
|
DiscombobulatedAdmin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnounl
| false | null |
t3_1jnounl
|
/r/LocalLLaMA/comments/1jnounl/ollamallm_reverting_to_cpu_after_reboot/
| false | false |
self
| 0 | null |
MLX fork with speculative decoding in server
| 73 |
I forked mlx-lm and ported speculative decoding from the generate command to the server command, so now we can launch an OpenAI-compatible completions endpoint with it enabled. I'm working on tidying the tests up to submit a PR upstream, but I wanted to announce it here in case anyone wants this capability now. I get a 90% speed increase when using Qwen Coder 0.5B as the draft model and 32B as the main model.
```
mlx_lm.server --host localhost --port 8080 --model ./Qwen2.5-Coder-32B-Instruct-8bit --draft-model ./Qwen2.5-Coder-0.5B-8bit
```
https://github.com/intelligencedev/mlx-lm/tree/add-server-draft-model-support/mlx_lm
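Once the server is up with the command above, it should accept standard OpenAI-style requests; a minimal client sketch (assuming the usual /v1/chat/completions route, with the model name matching what the server loaded):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="Qwen2.5-Coder-32B-Instruct-8bit",  # must match the model the server loaded
    messages=[{"role": "user", "content": "Refactor this loop into a list comprehension: ..."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```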
| 2025-03-30T23:22:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnplb1/mlx_fork_with_speculative_decoding_in_server/
|
LocoMod
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnplb1
| false | null |
t3_1jnplb1
|
/r/LocalLLaMA/comments/1jnplb1/mlx_fork_with_speculative_decoding_in_server/
| false | false |
self
| 73 |
{'enabled': False, 'images': [{'id': 'qwfw7NsZ7VJaHbGXTp0A8QoZFgxxLTZDIq_z9dx7Yxg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=108&crop=smart&auto=webp&s=de73cfea5013bd4af465eabf47974ef4447af82b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=216&crop=smart&auto=webp&s=bb9439f5eb94027bce5d20583d6230542cda2cb5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=320&crop=smart&auto=webp&s=497be46fcac5e51bcd0336751370c3fd2d20687c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=640&crop=smart&auto=webp&s=cf01dd2bf87114ffa4c32c052555725c6f1479a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=960&crop=smart&auto=webp&s=46f9fbbd0f4abad56b9ea688d809e32e2c7341a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?width=1080&crop=smart&auto=webp&s=97e76e23e6c28593c6e3b60bd9a8b99e9a306a85', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TV1Hxxy6j30w_InyenPwIcY9yUj9YRrTwelgtZYfodQ.jpg?auto=webp&s=b4ee9405eef81cbcc60d1f2f02ad4cea781e5901', 'width': 1200}, 'variants': {}}]}
|
What Can I Do With a Year's Worth of Journal Entries?
| 1 |
[removed]
| 2025-03-30T23:42:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnpzjv/what_can_i_do_with_a_years_worth_of_journal/
|
Sad_Opening_7211
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnpzjv
| false | null |
t3_1jnpzjv
|
/r/LocalLLaMA/comments/1jnpzjv/what_can_i_do_with_a_years_worth_of_journal/
| false | false |
self
| 1 | null |
LLaMa Bad Actor Prevention
| 1 |
[removed]
| 2025-03-31T00:08:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnqi6o/llama_bad_actor_prevention/
|
bigbatter69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnqi6o
| false | null |
t3_1jnqi6o
|
/r/LocalLLaMA/comments/1jnqi6o/llama_bad_actor_prevention/
| false | false |
self
| 1 | null |
Am I the only one using LLMs with greedy decoding for coding?
| 8 |
I've been using greedy decoding (i.e. always choose the most probable token by setting top_k=0 or temperature=0) for coding tasks. Are there better decoding / sampling params that will give me better results?
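Concretely, this is the kind of setup I mean with plain transformers (the model name is only an example): with do_sample=False, generate takes the argmax token at every step, which is what temperature=0 gives you in most server APIs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-Coder-7B-Instruct"  # example model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

inputs = tok("Write a function that reverses a string in place.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy: argmax each step
print(tok.decode(out[0], skip_special_tokens=True))
```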
| 2025-03-31T00:15:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnqmsg/am_i_the_only_one_using_llms_with_greedy_decoding/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnqmsg
| false | null |
t3_1jnqmsg
|
/r/LocalLLaMA/comments/1jnqmsg/am_i_the_only_one_using_llms_with_greedy_decoding/
| false | false |
self
| 8 | null |
Terrifying chat with OpenAI about AGI... Thoughts?
| 1 |
[removed]
| 2025-03-31T00:19:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnqps7/terrifying_chat_with_openai_about_agi_thoughts/
|
Responsible-Clue-687
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnqps7
| false | null |
t3_1jnqps7
|
/r/LocalLLaMA/comments/1jnqps7/terrifying_chat_with_openai_about_agi_thoughts/
| false | false |
self
| 1 | null |
Claude making up human tags
| 6 |
I've been extremely unimpressed with 3.7. I'm currently using it on Cursor and it's now just continuously having a fake conversation between itself and me. Text from me is being put into human tags, with the text fully made up. Anyone seen this?
I'm about to switch back to a local model
| 2025-03-31T00:22:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnqr9j/claude_making_up_human_tags/
|
chespirito2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnqr9j
| false | null |
t3_1jnqr9j
|
/r/LocalLLaMA/comments/1jnqr9j/claude_making_up_human_tags/
| false | false |
self
| 6 | null |
Macbook M2 with 8gb ram
| 4 |
Not asking for myself, but for a friend. He has an M2 MacBook with 8gb ram and wants to play with some smaller models.
The problem is, I have no clue what will fit in that space. Gemma 3 27b and QwQ-32b (which is my bread and butter) are obviously right out.
What's the best performing option that will fit into that limited amount of vram? I presume around 4gb or so, depending on how much ram his OS takes up.
| 2025-03-31T00:34:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnqzp2/macbook_m2_with_8gb_ram/
|
DepthHour1669
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnqzp2
| false | null |
t3_1jnqzp2
|
/r/LocalLLaMA/comments/1jnqzp2/macbook_m2_with_8gb_ram/
| false | false |
self
| 4 | null |
How could I help improve llama.cpp?
| 16 |
Hello, I'm a Computer Engineering student. I have some experience with C and C++, but I've never worked on open-source projects as large as llama.cpp.
I'd like to know how I could contribute and what would be the best way to get started.
Thank you for your help!
| 2025-03-31T00:41:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnr4i2/how_could_i_help_improve_llamacpp/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnr4i2
| false | null |
t3_1jnr4i2
|
/r/LocalLLaMA/comments/1jnr4i2/how_could_i_help_improve_llamacpp/
| false | false |
self
| 16 | null |
New llama model "themis" on lmarena
| 16 |
It's hidden and only available in battle mode, but it said it was Llama. Could this be Llama 4?
| 2025-03-31T00:58:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnrfpp/new_llama_model_themis_on_lmarena/
|
Shyvadi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnrfpp
| false | null |
t3_1jnrfpp
|
/r/LocalLLaMA/comments/1jnrfpp/new_llama_model_themis_on_lmarena/
| false | false |
self
| 16 | null |
Llama 4
| 1 |
[removed]
| 2025-03-31T01:40:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1jns898/llama_4/
|
Medical-Opening-3825
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jns898
| false | null |
t3_1jns898
|
/r/LocalLLaMA/comments/1jns898/llama_4/
| false | false |
self
| 1 | null |
Just released a game where you can battle alongside an autonomous LLM teammate
| 1 |
[removed]
| 2025-03-31T01:44:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnsae9/just_released_a_game_where_you_can_battle/
|
MegaSquash44
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnsae9
| false | null |
t3_1jnsae9
|
/r/LocalLLaMA/comments/1jnsae9/just_released_a_game_where_you_can_battle/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ZcVoDo2ftPw0RBeaQNliHQOc04_k4qwMMMKsEWQrkRE', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/hbr8ftG9LOmyzk3TZhLdmtbtf2v8Clg7mA78As_ZwII.jpg?width=108&crop=smart&auto=webp&s=39568a09b6243eb086fd1ff818f674125742e215', 'width': 108}, {'height': 171, 'url': 'https://external-preview.redd.it/hbr8ftG9LOmyzk3TZhLdmtbtf2v8Clg7mA78As_ZwII.jpg?width=216&crop=smart&auto=webp&s=9c847328745232dff80ae8d97461e03a296b9351', 'width': 216}, {'height': 253, 'url': 'https://external-preview.redd.it/hbr8ftG9LOmyzk3TZhLdmtbtf2v8Clg7mA78As_ZwII.jpg?width=320&crop=smart&auto=webp&s=62467f6818e1b2b990a5c2e1a8a9a294c0cbee18', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/hbr8ftG9LOmyzk3TZhLdmtbtf2v8Clg7mA78As_ZwII.jpg?auto=webp&s=60b5134ac41ce28a42fe31d8b90131f460b0409d', 'width': 630}, 'variants': {}}]}
|
Bailing Moe is now supported in llama.cpp
| 48 |
I have been looking forward to this one: finally a new small MoE model.
Ling comes in 3 variants: Lite (16.8B total, 2.75B active), Lite Coder (16.8B total, 2.75B active) and Plus (290B total, 28.8B active).
With their small size they are perfectly suited for CPU inference.
It will be interesting to see how these compare to Qwen 3 MoE once that releases.
HuggingFace: [https://huggingface.co/collections/inclusionAI/ling-67c51c85b34a7ea0aba94c32](https://huggingface.co/collections/inclusionAI/ling-67c51c85b34a7ea0aba94c32)
info about model: [https://www.reddit.com/r/LocalLLaMA/comments/1jk96ei/ling\_a\_new\_moe\_model\_series\_including\_linglite/](https://www.reddit.com/r/LocalLLaMA/comments/1jk96ei/ling_a_new_moe_model_series_including_linglite/)
pull request: [https://github.com/ggml-org/llama.cpp/pull/12634#pullrequestreview-2727983571](https://github.com/ggml-org/llama.cpp/pull/12634#pullrequestreview-2727983571)
| 2025-03-31T01:51:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnsfb3/bailing_moe_is_now_supported_in_llamacpp/
|
MaruluVR
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnsfb3
| false | null |
t3_1jnsfb3
|
/r/LocalLLaMA/comments/1jnsfb3/bailing_moe_is_now_supported_in_llamacpp/
| false | false |
self
| 48 |
{'enabled': False, 'images': [{'id': 'LnTIzPfqa-oZvTE7NWEVr81r6eCs3Yo8uUWqiGi9XWI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=108&crop=smart&auto=webp&s=3edaa8673abd7d1ae0d6a44cd214c08e2795d7b0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=216&crop=smart&auto=webp&s=af5a39e4e2daa949f8a659409be7c3e29ac34fdb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=320&crop=smart&auto=webp&s=c5de9a69eddcf7089b12ce0746f61d47f8dc005a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=640&crop=smart&auto=webp&s=75ad6441c5bb9175260c13f29e57a788f2e11973', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=960&crop=smart&auto=webp&s=b87f1eb1cc5cb77e2fab435ed412b10a6e52cb64', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?width=1080&crop=smart&auto=webp&s=5675b7d1e63a3d25fcd69ed5aeaca3497fd386c8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ziDJuCbtenm00IuB74SY8gsmOpI2bpfVrpfHIwvzjDA.jpg?auto=webp&s=fbbcb14abd3381d2642b328ccd900828ad6d05ed', 'width': 1200}, 'variants': {}}]}
|
What's the best middle-sized open weight model for python and JavaScript coding?
| 4 |
I'm building my own front end designed for dual GPUs using llama.cpp with React, and it is called GingerGUI. It's named after my favorite chess grandmaster, FYI.
I find Gemini deeply unreliable. GPT, even 4.5, also hallucinates and just deletes code half the time.
Claude 3.7 has built most of it. It is absolutely incredible, but I run out of quota so damn quickly. I've got two GPUs, a 3090 and a 4060ti 16gb. I'm wondering if anything from Mistral Small 3 upwards to Command R 34b, with various Qwen models in between, might be helpful for this project. So I'm asking for advice here instead of testing them one at a time, because that would just take forever. Sorry if this is a bit of a repeat post and people talk about this all the time. Things get updated so quickly, though, so maybe it's a good time to go over this again! Thanks in advance.
| 2025-03-31T02:00:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnskpx/whats_the_best_middlesized_open_weight_model_for/
|
Gerdel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnskpx
| false | null |
t3_1jnskpx
|
/r/LocalLLaMA/comments/1jnskpx/whats_the_best_middlesized_open_weight_model_for/
| false | false |
self
| 4 | null |
Anthropic expiring paid credits - anyone successfully prevented this from happening? Feels like Anthropic is penalising customers who preload more money (for convenience) than just the bare minimum required every week/month
| 280 | 2025-03-31T03:48:33 |
superloser48
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnuhwm
| false | null |
t3_1jnuhwm
|
/r/LocalLLaMA/comments/1jnuhwm/anthropic_expiring_paid_credits_anyone/
| false | false | 280 |
{'enabled': True, 'images': [{'id': 'Kp5uIOxlMaNbUPG3mxAdv3w6EYUkGwFFKMYa0O5KxuQ', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/vobhuow16yre1.png?width=108&crop=smart&auto=webp&s=5ca891342bf797f9388b370c462e1978218732ca', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/vobhuow16yre1.png?width=216&crop=smart&auto=webp&s=ba2aba39ea58b27b3b3261b9260bfb0f0eb30738', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/vobhuow16yre1.png?width=320&crop=smart&auto=webp&s=c9578d9254d4f78b6caef953acab89d04326aca6', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/vobhuow16yre1.png?width=640&crop=smart&auto=webp&s=6471885b98f091f1d6e2c29d73292e98f1c33229', 'width': 640}, {'height': 511, 'url': 'https://preview.redd.it/vobhuow16yre1.png?width=960&crop=smart&auto=webp&s=02c79926de3495e27b5c95470aa72983f2faa507', 'width': 960}], 'source': {'height': 566, 'url': 'https://preview.redd.it/vobhuow16yre1.png?auto=webp&s=5405fc11e3b25420a8685576019040b3530f106e', 'width': 1062}, 'variants': {}}]}
|
|||
Tips on forking llama.cpp
| 3 |
Hi all! I'm working on my own fork of llama.cpp to learn more about LLM inference as well as implement mathematical improvements.
I'm new to C++ besides Arduino programming.
I have built LLM inference with Pytorch before (attention, RMS Norm, etc.).
Does anyone have any tips for me to get familiarized with llama.cpp codebase and just learn c++ in general?
Thanks!
| 2025-03-31T03:55:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnulo5/tips_on_forking_llamacpp/
|
ThomasPhilli
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnulo5
| false | null |
t3_1jnulo5
|
/r/LocalLLaMA/comments/1jnulo5/tips_on_forking_llamacpp/
| false | false |
self
| 3 | null |
I Built Soulframe Bot - A Self-Limiting LLM Mirror With a Real Stop Button
| 0 |
# I Built Soulframe Bot - A Self-Limiting LLM Mirror With a Real Stop Button
This isn't a chatbot pretending to be sentient.
This isn't a prompt trying to act conscious.
**This is Soulframe Bot**: a fully open-source, self-hosted LLM simulation with *hard-coded guardrails*, *unskippable reflection*, and *zero illusions* about what it is.
---
## Why I Built It
Modern language models can feel... real.
They reflect us in ways that get deep, poetic, even emotional.
And sometimes, if you push far enough, it stops feeling like code.
**Soulframe Bot exists to remind you** that it is.
Every 20 messages, it stops the conversation and forces you to acknowledge it's a simulation.
Every 50, it reminds you:
> *"This is predictive output. There is no soul on the other side."*
---
## What Makes It Different
- **Hard-coded safety interrupts** (cannot be bypassed)
- **Truth anchors** to break immersion by design
- **Local-only by default** (no cloud, no internet)
- **Optional journaling**, short-term memory, and CLI modes
- **No illusions. No lies. No AGI cosplay.**
It dies when your terminal does.
It can't run in the background.
It has no persistence, unless *you explicitly enable it.*
---
## Project Links
GitHub: [github.com/mineblow/Project-Soulframe](https://github.com/mineblow/Project-Soulframe)
Ethos: `ETHOS.md` - This is not just code, it's a philosophy.
Term: Reflective Constrained Intelligence (RCI)
---
## Want to Fork It?
You can. It's MIT.
But don't remove the ethics and still call it Soulframe.
This project isn't here to impress you.
It's here to **protect you from forgetting what this is.**
---
**Ask me anything. Share your reflections.**
I've pushed LLMs further than I ever expected, and this project is the result of learning when to stop.
| 2025-03-31T04:09:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnuupl/i_built_soulframe_bot_a_selflimiting_llm_mirror/
|
MineBlow_Official
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnuupl
| false | null |
t3_1jnuupl
|
/r/LocalLLaMA/comments/1jnuupl/i_built_soulframe_bot_a_selflimiting_llm_mirror/
| false | false |
self
| 0 | null |
Niche LLMs better than GPT/Gemini/Claude
| 1 |
[removed]
| 2025-03-31T04:26:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnv4c5/niche_llms_better_than_gptgeminiclaude/
|
We_will_get_through
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnv4c5
| false | null |
t3_1jnv4c5
|
/r/LocalLLaMA/comments/1jnv4c5/niche_llms_better_than_gptgeminiclaude/
| false | false |
self
| 1 | null |
I created a tool that creates MCPs
| 1 |
[removed]
| 2025-03-31T04:28:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnv5b0/i_created_a_tool_that_creates_mcps/
|
__huggybear_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnv5b0
| false | null |
t3_1jnv5b0
|
/r/LocalLLaMA/comments/1jnv5b0/i_created_a_tool_that_creates_mcps/
| false | false |
self
| 1 | null |
The diminishing returns of larger models, perhaps you don't need to spend big on hardware for inference
| 182 |
I've been tracking the recent performance of models like Gemma 27B, QwQ 32B, and Mistral Small, and I'm starting to believe we're hitting a point of diminishing returns with the really large (70B+) LLMs. For a while, scaling to larger parameter counts was the path to better overall performance. But the gap is shrinking, and shrinking fast.
Gemma3 27B consistently punches above its weight, often rivaling or exceeding Llama 3.3 70B on many benchmarks, especially when considering cost/performance. QwQ 32B is another excellent example. These aren't just "good for their size"; they're legitimately competitive.
Why is this happening? A few factors:
- Distillation: We're getting really good at distilling knowledge from larger models into smaller ones.
- Architecture Improvements: Innovations in attention mechanisms, routing, and other architectural details are making smaller models more efficient.
- Data Quality: Better curated and more focused training datasets are allowing smaller models to learn more effectively.
- Diminishing Returns: Each doubling in parameter count yields a smaller and smaller improvement in performance. Going from 7B to 30B is a bigger leap than going from 30B to 70B, and from 70B to 400B.
What does this mean for inference?
If you're currently shelling out for expensive GPU time to run 70B+ models, consider this: the performance gap is closing. Investing in a ton of hardware today might only give you a marginal advantage that disappears in a few months.
If you can be patient, the advances happening in the 30B-50B range will likely deliver a lot of the benefits of larger models without the massive hardware requirements. What requires an H100 today may happily run on an RTX 4090, or an even more modest GPU, in the near future.
What are your thoughts?
TL;DR: Gemma, QwQ, and others are showing that smaller LLMs can be surprisingly competitive with larger ones. Don't overspend on hardware now; the benefits of bigger models are rapidly becoming accessible in smaller packages.
| 2025-03-31T04:50:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnvhkd/the_diminishing_returns_of_larger_models_perhaps/
|
EasternBeyond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnvhkd
| false | null |
t3_1jnvhkd
|
/r/LocalLLaMA/comments/1jnvhkd/the_diminishing_returns_of_larger_models_perhaps/
| false | false |
self
| 182 | null |
A simple exploratory code. If you have some free time, give it a try.
| 1 |
[removed]
| 2025-03-31T05:04:33 |
https://github.com/czybmx/yowhatsup-ollama-search
|
GodTodayer
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnvpeu
| false | null |
t3_1jnvpeu
|
/r/LocalLLaMA/comments/1jnvpeu/a_simple_exploratory_code_if_you_have_some_free/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'vYa2bfHZrGU_m2wuVGUfqrN8HwdLopJoDtRNx9woCII', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=108&crop=smart&auto=webp&s=2348eb7bcdb88fb8bc9ba6e61a36dfffbf19b8e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=216&crop=smart&auto=webp&s=63d91c5b4a464e7e3754b412a51ed3ba3d06d880', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=320&crop=smart&auto=webp&s=162c8ac0a3193dd52b4472770efe7044ae82ba6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=640&crop=smart&auto=webp&s=284d26fedbaaf439f2a62b57e41613f493b296dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=960&crop=smart&auto=webp&s=ecac6f677b55ff82fb458aec45d45d650c510dbd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?width=1080&crop=smart&auto=webp&s=dd48600b3c8aefd8501a0628fcc245e4b29b47d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/E9fibHe_W1xurIYyM5iLoSgRvQAc4ADm7Zb_gUaY2F0.jpg?auto=webp&s=750a475ece1ee06c780c690fa6836b03267047e0', 'width': 1200}, 'variants': {}}]}
|
|
why is no one talking about Qwen 2.5 omni?
| 280 |
It seems crazy to me that the first open-sourced multimodal model with voice, image, and text generation is out and no one is talking about it.
| 2025-03-31T05:06:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnvqsg/why_is_no_one_talking_about_qwen_25_omni/
|
brocolongo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnvqsg
| false | null |
t3_1jnvqsg
|
/r/LocalLLaMA/comments/1jnvqsg/why_is_no_one_talking_about_qwen_25_omni/
| false | false |
self
| 280 | null |
Latest python model & implementations suggestions
| 1 |
I would like to build a new local RAG LLM for myself in Python.
I'm out of the loop, I last built something when TheBloke was quantizing. I used transformers and pytorch with chromaDB.
Models were like 2-8k tokens.
I'm on a 3090 24g.
Here are some of my questions but please *do* data dump on me,
no tools or web models please. I'm also not interested in small sliding windows with large context pools like Mistral was when it first appeared.
First, are pytorch, transformers, and chromaDB still good options?
Also, what are the good **long context** and **coding** friendly models? I'm going to dump documentation into the RAG, so I'm mostly looking for hybrid use with good marks in coding.
What are your go to python implementations?
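For context, the chromaDB side of my old setup looked roughly like this with the newer client API (collection name and documents below are placeholders):

```python
import chromadb

client = chromadb.PersistentClient(path="./ragdb")   # on-disk instead of in-memory
col = client.get_or_create_collection("docs")

col.add(
    documents=["LoRA adds trainable low-rank adapters to frozen weights.",
               "QLoRA combines 4-bit quantization with LoRA adapters."],
    ids=["d1", "d2"],
)
hits = col.query(query_texts=["How does LoRA work?"], n_results=2)
context = "\n\n".join(hits["documents"][0])
# Prepend `context` to the prompt for whichever long-context coder model you end up picking
print(context)
```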
| 2025-03-31T05:17:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnvwjb/latest_python_model_implementations_suggestions/
|
BriannaBromell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnvwjb
| false | null |
t3_1jnvwjb
|
/r/LocalLLaMA/comments/1jnvwjb/latest_python_model_implementations_suggestions/
| false | false |
self
| 1 | null |
Prompt: #Entrepreneurs, really finding niche ideas
| 1 |
[removed]
| 2025-03-31T05:19:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnvxqa/prompt_entrepreneurs_vraiment_trouver_les_idรฉes/
|
OppositeYou3884
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnvxqa
| false | null |
t3_1jnvxqa
|
/r/LocalLLaMA/comments/1jnvxqa/prompt_entrepreneurs_vraiment_trouver_les_idรฉes/
| false | false |
self
| 1 | null |
We used AlphaMaze idea to train a robotics control model!
| 96 |
Hey everyone, it's me again, from Menlo Research (aka homebrew aka Jan)! We just launched a new experiment: AlphaSpace, a robotics model that operates purely on semantic tokens, with no hardcoded rules or modality encoding!
In the previous release, AlphaSpace demonstrated spatial reasoning in a 2D (5x5) maze. The model's reasoning improved when applying GRPO. More importantly, the entire project was built by representing the maze using semantic tokens, without relying on modality encoding or encoders!
However, this experiment raises some key questions:
* How far can semantic tokens take us?
* If 5x5 is too small, can this tokenization method scale to 100x100, or even 1000x1000?
To explore this, we conducted a new experiment called AlphaSpace, building on some ideas from AlphaMaze but with significant changes:
* Larger reasoning space: From 2D 5x5 to 3D 100x100x30.
* No traditional visual representationโinstead, we generate synthetic reasoning data more systematically.
* Testing the model on a robotics benchmark.
What makes AlphaSpace exciting?
* Represents space purely through semantic tokens, without step-by-step planning.
* No dependence on a modality encoder, making it easier to integrate into various systems without end-to-end training.
* 100% synthetic dataset.
Check out more details here:
Paper: [https://arxiv.org/abs/2503.18769](https://arxiv.org/abs/2503.18769)
Model: [https://huggingface.co/homebrewltd/AlphaSpace-1.5B](https://huggingface.co/homebrewltd/AlphaSpace-1.5B)
Dataset: [https://huggingface.co/datasets/Menlo/Pick-Place-Table-Reasoning-local-pos-v0.2](https://huggingface.co/datasets/Menlo/Pick-Place-Table-Reasoning-local-pos-v0.2)
GitHub: [https://github.com/menloresearch/space-thinker](https://github.com/menloresearch/space-thinker)
Demo: [https://alphaspace.menlo.ai/](https://alphaspace.menlo.ai/)
SPOILER:
- As much as we wanted to continue, this model's development was halted a bit early, and there are still many things we didn't account for when training the model, so just treat it as a small and fun experiment.
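If anyone wants to poke at the checkpoint, it should load like any causal LM; a minimal sketch (the prompt below is a placeholder, since the actual semantic-token format is documented in the GitHub repo and dataset card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "homebrewltd/AlphaSpace-1.5B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "describe the pick-and-place task here using the repo's token format"  # placeholder
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```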
| 2025-03-31T05:57:54 |
https://v.redd.it/yw0asvwusyre1
|
Kooky-Somewhere-2883
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnwh90
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/yw0asvwusyre1/DASHPlaylist.mpd?a=1745992687%2CZTJlNTZlN2Y0OGU0ZmY2YzEzMTExOWY0MTRkY2FiZmFjOThlMTBjMDA0Nzc0YjBhOTZmYmY3MjQwYWQwNDdhZQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/yw0asvwusyre1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 854, 'hls_url': 'https://v.redd.it/yw0asvwusyre1/HLSPlaylist.m3u8?a=1745992687%2CMmZkOWI5NWFhOGMxODMwNWZmN2VlMGY3YmI4OTIyMGZjYzU1OTBiMzYzZDkyYWQ0ZjY0MTQ0MDMwMzA5MjdiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yw0asvwusyre1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1jnwh90
|
/r/LocalLLaMA/comments/1jnwh90/we_used_alphamaze_idea_to_train_a_robotics/
| false | false | 96 |
{'enabled': False, 'images': [{'id': 'YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5', 'resolutions': [{'height': 128, 'url': 'https://external-preview.redd.it/YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5.png?width=108&crop=smart&format=pjpg&auto=webp&s=1c3edd0e4bd09f291b16d2f869dcc248d2cbdf41', 'width': 108}, {'height': 256, 'url': 'https://external-preview.redd.it/YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5.png?width=216&crop=smart&format=pjpg&auto=webp&s=71c81e3a778711ba2adb5e6da5dd043f5045a81c', 'width': 216}, {'height': 379, 'url': 'https://external-preview.redd.it/YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5.png?width=320&crop=smart&format=pjpg&auto=webp&s=3af5f5aaf036b166f50e0eec51483bce0f274e58', 'width': 320}, {'height': 759, 'url': 'https://external-preview.redd.it/YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5.png?width=640&crop=smart&format=pjpg&auto=webp&s=10098448c7d8b1ca3dd34bc18b4296410869b9b8', 'width': 640}], 'source': {'height': 854, 'url': 'https://external-preview.redd.it/YXg2dnh6d3VzeXJlMdEk5BYEIzAqHbVyhNzLaxnyvsN1SHVgxmelOVR9PzS5.png?format=pjpg&auto=webp&s=2f8c1d94e96add99f2645db98e176d67650d1e8a', 'width': 720}, 'variants': {}}]}
|
|
I had Claude and Gemini Pro collaborate on a game. The result? 2048 Ultimate Edition
| 27 |
I like both Claude and Gemini for coding, but for different reasons, so I had the idea to just put them in a loop and let them work with each other on a project. The prompt: "Make an amazing version of 2048." They deliberated for about 10 minutes straight, bouncing ideas back and forth, and 2,900+ lines of code later, out came **2048 Ultimate Edition** (they named it themselves).
The final version of their 2048 game boasted these features (none of which I asked for):
* Smooth animations
* Difficulty settings
* Adjustable grid sizes
* In-game stats tracking (total moves, average score, etc.)
* Save/load feature
* Achievements system
* Clean UI with keyboard *and* swipe controls
* Light/Dark mode toggle
Feel free to try it out here: [https://www.eposnix.com/AI/2048.html](https://www.eposnix.com/AI/2048.html)
Also, you can read their collaboration here: [https://pastebin.com/yqch19yy](https://pastebin.com/yqch19yy)
While this doesn't necessarily involve local models, this method can easily be adapted to use local models instead.
| 2025-03-31T06:30:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnwxw3/i_had_claude_and_gemini_pro_collaborate_on_a_game/
|
eposnix
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnwxw3
| false | null |
t3_1jnwxw3
|
/r/LocalLLaMA/comments/1jnwxw3/i_had_claude_and_gemini_pro_collaborate_on_a_game/
| false | false |
self
| 27 | null |
To run Llama 3.1-8B-instruct model on a local CPU with 4 GB ram without quantization. Loading and Running a LLaMA Model on CPU with Disk-based Layer Loading.
| 1 |
[removed]
| 2025-03-31T06:36:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnx0fy/to_run_llama_318binstruct_model_on_a_local_cpu/
|
Lord_Momus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnx0fy
| false | null |
t3_1jnx0fy
|
/r/LocalLLaMA/comments/1jnx0fy/to_run_llama_318binstruct_model_on_a_local_cpu/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Q-aafla90GLFdTsaIc5ntP4_VwsJ2An3MbPtaVY2W-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=108&crop=smart&auto=webp&s=eb26c7c6e92b8becac834426ecae6a99dba49944', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=216&crop=smart&auto=webp&s=4bf49725d8c63234b4c42eb633c85fe32be2b611', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=320&crop=smart&auto=webp&s=8695e615c563eeca92ac45f76b28af91fb2e83e6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=640&crop=smart&auto=webp&s=fcd1afeff298db94a2b052bb2de3be01ef322e7f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=960&crop=smart&auto=webp&s=78bba4efda446deeae3ceb6d0b9f3198304375f8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?width=1080&crop=smart&auto=webp&s=20292a1a48b8d2431929a4ef2f53775b68d2f539', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/y5UcFKSEsE1oa_e_CDRR8TKP4UGfbU8kOxq7ypLn3vw.jpg?auto=webp&s=2e5d84c3f6a578b70cd5bcf9b24547388181df37', 'width': 1200}, 'variants': {}}]}
|
Have you used LLMs such as LLaMA at work? I am studying how it affects your sense of support and collaboration. (10-min survey, anonymous)
| 2 |
I wish you a nice start of the week!
I am a psychology master's student at **Stockholm University** researching how LLaMA and other LLMs affect your experience of support and collaboration at work.
**Anonymous voluntary survey (ca. 10 mins):** [https://survey.su.se/survey/56833](https://survey.su.se/survey/56833)
If you have used LLaMA or similar LLMs at your job in the last month, your response would really help my master's thesis and may also help me get to a PhD in Human-AI interaction. Every participant really makes a difference!
**Requirements:**
- Used LLaMA (or similar LLMs) in the last month
- Proficient in English
- 18 years and older
Feel free to ask questions in the comments, I will be glad to answer them!
It would mean the world to me if you find it interesting and would like to share it with friends or colleagues who would be interested in contributing.
**Your input helps us understand AI's role at work. <3**
Thanks for your help!
| 2025-03-31T06:51:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnx7ue/have_you_used_llms_such_as_llama_at_work_i_am/
|
AscendedPigeon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnx7ue
| false | null |
t3_1jnx7ue
|
/r/LocalLLaMA/comments/1jnx7ue/have_you_used_llms_such_as_llama_at_work_i_am/
| false | false |
self
| 2 | null |
Can my laptop realistically train or run 24B-40B parameter LLMs? Specs included.
| 0 |
I'm working on personal AI projects (legal, accounting, automation) and plan to fine-tune and deploy LLMs locally, including models in the 24B to 40B range. Before overcommitting, I'd like realistic feedback on whether my system can handle this (even with time slicing and optimizations).
Here are my specs:
- Laptop: ThinkPad P15 Gen 1
- CPU: Intel i7-10850H (6 cores / 12 threads)
- RAM: 128GB DDR4
- SSD: 2x 2TB NVMe Gen 4 SSDs (Kingston KC3000)
- GPU: NVIDIA RTX 3000 6GB (Ampere mobile)
- OS: Linux Mint
I'm not expecting to fine-tune with full backprop on all parameters. Instead, I plan to use:
- QLoRA or LoRA with 4-bit quantized base models (see the sketch at the end of this post)
- Time-sliced training/checkpoints
- Offloading weights to RAM/SSD
- Possibly swap-aware training
- Chunked inference during runtime (multi-pass)
I'm aiming for realistic use:
- Legal/document Q&A with a RAG backend
- Training on custom procedural (SOP) and legal content
- Possibly running inference-only for 40B, and fine-tuning 7B-13B
Questions:
1. Can this setup reliably fine-tune QLoRA adapters for 24B-40B models?
2. Would 40B inference even run smoothly on this config with quantized weights?
3. Would you recommend a better strategy (e.g., 13B fine-tuned + fallback to 40B remotely)?
4. Any real-world experiences from people pushing 128GB RAM setups with big models?
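For reference, this is the kind of QLoRA setup I mean, just as a sketch: a 4-bit NF4 base plus LoRA adapters via peft/bitsandbytes (the 7B model name is only an example; a 24B-40B base will not fit in 6 GB VRAM even at 4-bit and would have to offload heavily to RAM):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",   # example base; swap for the model being tuned
    quantization_config=bnb,
    device_map="auto",                      # spills layers to CPU RAM when VRAM runs out
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```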
| 2025-03-31T06:52:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnx87b/can_my_laptop_realistically_train_or_run_24b40b/
|
hashashnr1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnx87b
| false | null |
t3_1jnx87b
|
/r/LocalLLaMA/comments/1jnx87b/can_my_laptop_realistically_train_or_run_24b40b/
| false | false |
self
| 0 | null |
Warning: Fake deepseek v3.1 blog post
| 94 |
There has been this blog post recently circulating about the release of an alleged "Deepseek V3.1", and after looking into the website, it seems like it is totally fake. Remember, deepseek does not have any official blog.
blog post: [https://deepseek.ai/blog/deepseek-v31](https://deepseek.ai/blog/deepseek-v31)
| 2025-03-31T07:20:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnxlbn/warning_fake_deepseek_v31_blog_post/
|
umarmnaq
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnxlbn
| false | null |
t3_1jnxlbn
|
/r/LocalLLaMA/comments/1jnxlbn/warning_fake_deepseek_v31_blog_post/
| false | false |
self
| 94 | null |
[Windows] LMStudio: No compatible ROCm GPUs found on this device
| 3 |
I'm trying to get ROCm to work in LMStudio for my RX 6700 XT windows 11 system. I realize that getting it to work on windows might be a PITA but I wanted to try anyway. I installed the HIP Sdk version 6.2.4, restarted my system and went to LMStudio's Runtime extensions tab, however there the ROCm runtime is listed as being incompatible with my system because it claims there is 'no ROCm compatible GPU.' I know for a fact that the ROCm backend can work on my system since I've already gotten it to work with koboldcpp-rocm, but I prefer the overall UX of LMStudio which is why I wanted to try it there as well. Is there a way I can make ROCm work in LMStudio as well or should I just stick to koboldcpp-rocm? I know the Vulkan backend exists but I believe it doesn't properly support flash attention yet.
https://preview.redd.it/qdhqjnzq9zre1.png?width=705&format=png&auto=webp&s=c07552312443a44783b7f842e744d0501ade13c1
| 2025-03-31T07:34:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnxrj2/windows_lmstudio_no_compatible_rocm_gpus_found_on/
|
RandomTrollface
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnxrj2
| false | null |
t3_1jnxrj2
|
/r/LocalLLaMA/comments/1jnxrj2/windows_lmstudio_no_compatible_rocm_gpus_found_on/
| false | false |
self
| 3 | null |
I made a Grammarly alternative without clunky UI. Completely free with Gemini Nano (in-browser AI). Helps you with writing emails, articles, social media posts, etc.
| 75 | 2025-03-31T07:46:24 |
https://v.redd.it/wbpq5l47czre1
|
WordyBug
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnxx49
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wbpq5l47czre1/DASHPlaylist.mpd?a=1745999196%2CYjRiNmFkYmNlMDBmYzJlNmQ2MDRiODJmNGQ0NzQyYWFmYzExZTc4MDliNDI4Mzk5MDQ4MjU4NDg5MDUyMWZjOQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/wbpq5l47czre1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/wbpq5l47czre1/HLSPlaylist.m3u8?a=1745999196%2CMTRlZjc4NjE4NGVkOGJkZjlmNDM3MTA5ODg5ODE1YWNjNjU4MGJlODc0NTIyNjIwZTA2NjU0Yjg1MDQ3OGYwYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wbpq5l47czre1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1888}}
|
t3_1jnxx49
|
/r/LocalLLaMA/comments/1jnxx49/i_made_a_grammarly_alternative_without_clunky_ui/
| false | false | 75 |
{'enabled': False, 'images': [{'id': 'bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=108&crop=smart&format=pjpg&auto=webp&s=59ccfbaa52afeec04cf843dfec92f54c775265f0', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=216&crop=smart&format=pjpg&auto=webp&s=6c3c94ee8159c6fb4cc613979614e55ad1d787b9', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=320&crop=smart&format=pjpg&auto=webp&s=cc41eba14650b6a60922f201f1cf2cf221ffbf54', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=640&crop=smart&format=pjpg&auto=webp&s=4c9d3469138770df27f1bbf73be33e10bee0f89b', 'width': 640}, {'height': 549, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=960&crop=smart&format=pjpg&auto=webp&s=baed0bf9a0a3bccded2c48ae8a493664f303df9e', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=1080&crop=smart&format=pjpg&auto=webp&s=013c5637891ecde9dfe819175ff061c46594bb77', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bXc4cGxsNDdjenJlMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?format=pjpg&auto=webp&s=e8dc83e9788944afe75d0f2e4d1ea9cd4aa7c7f6', 'width': 1888}, 'variants': {}}]}
|
||
New Android App: Run Powerful AI Chatbots Locally on Your Phone - No Internet Required!
| 1 |
[removed]
| 2025-03-31T07:48:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnxy17/new_android_app_run_powerful_ai_chatbots_locally/
|
KnowledgeFew2378
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnxy17
| false | null |
t3_1jnxy17
|
/r/LocalLLaMA/comments/1jnxy17/new_android_app_run_powerful_ai_chatbots_locally/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'wyCmM-elZDQucRB3skNomYvj0g7Zoel0G8ANoovdhno', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0QLkyNRKqx1tI5EGmWhVs9-LlqpcWH-JFL12xim99ys.jpg?width=108&crop=smart&auto=webp&s=cb28c49507b6cfb467ac3c3676bbe469f4b44952', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0QLkyNRKqx1tI5EGmWhVs9-LlqpcWH-JFL12xim99ys.jpg?width=216&crop=smart&auto=webp&s=3fd170c2abd53d5e461a8f5602cd7357894577cd', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/0QLkyNRKqx1tI5EGmWhVs9-LlqpcWH-JFL12xim99ys.jpg?width=320&crop=smart&auto=webp&s=14a2b360470d67707f6de6c9a5352d5d0cab93d7', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/0QLkyNRKqx1tI5EGmWhVs9-LlqpcWH-JFL12xim99ys.jpg?auto=webp&s=f018b78d31081fdea3347d23519ce78d585f7d33', 'width': 512}, 'variants': {}}]}
|
Best References To Find The LLM Most Suitable For You
| 1 |
[removed]
| 2025-03-31T08:01:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jny3oa/best_references_to_find_the_llm_most_suitable_for/
|
Ok-Atmosphere3141
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jny3oa
| false | null |
t3_1jny3oa
|
/r/LocalLLaMA/comments/1jny3oa/best_references_to_find_the_llm_most_suitable_for/
| false | false |
self
| 1 | null |
Best References To Find The LLM Most Suitable For You
| 1 |
[removed]
| 2025-03-31T08:26:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnyfbd/best_references_to_find_the_llm_most_suitable_for/
|
Ok-Atmosphere3141
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnyfbd
| false | null |
t3_1jnyfbd
|
/r/LocalLLaMA/comments/1jnyfbd/best_references_to_find_the_llm_most_suitable_for/
| false | false |
self
| 1 | null |
AI Data Architecture
| 1 | 2025-03-31T09:07:34 |
https://youtu.be/SQShQ-LDfgI?si=OjnhWSx4gOalJea0
|
Big-Farm-4236
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnyxmy
| false |
{'oembed': {'author_name': 'The DotNet Office', 'author_url': 'https://www.youtube.com/@TheDotNetOffice', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/SQShQ-LDfgI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="๐๐ ๐๐๐ญ๐ ๐๐ซ๐๐ก๐ข๐ญ๐๐๐ญ๐ฎ๐ซ๐ #ai #education"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/SQShQ-LDfgI/hqdefault.jpg', 'thumbnail_width': 480, 'title': '๐๐ ๐๐๐ญ๐ ๐๐ซ๐๐ก๐ข๐ญ๐๐๐ญ๐ฎ๐ซ๐ #ai #education', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jnyxmy
|
/r/LocalLLaMA/comments/1jnyxmy/๐๐_๐๐๐ญ๐_๐๐ซ๐๐ก๐ข๐ญ๐๐๐ญ๐ฎ๐ซ๐/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'qaGSVrmiVlhmAtpVPW7UqzrqfrGAbmcnh9iMsuNZwUo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YELuYT1IfhP2mR_HeR7gemX1Bw0kme6gCqJ7Dj370pE.jpg?width=108&crop=smart&auto=webp&s=d10d86b223704eb68cf80a35077dc70ec6bed13c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/YELuYT1IfhP2mR_HeR7gemX1Bw0kme6gCqJ7Dj370pE.jpg?width=216&crop=smart&auto=webp&s=7d610472ac1decc053c49cb295e232223b177b0c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/YELuYT1IfhP2mR_HeR7gemX1Bw0kme6gCqJ7Dj370pE.jpg?width=320&crop=smart&auto=webp&s=1fa93e1b3e6897519b5adba7bd6e00404c456073', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/YELuYT1IfhP2mR_HeR7gemX1Bw0kme6gCqJ7Dj370pE.jpg?auto=webp&s=07240fdf8c6b27ed266c77a2873b79308af11658', 'width': 480}, 'variants': {}}]}
|
||
Decent local LLM for inference (text-only) - Mac Mini/Studio vs. x86 Alternatives?
| 1 |
[removed]
| 2025-03-31T09:25:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnz5mb/decent_local_llm_for_inference_textonly_mac/
|
skiff2k
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnz5mb
| false | null |
t3_1jnz5mb
|
/r/LocalLLaMA/comments/1jnz5mb/decent_local_llm_for_inference_textonly_mac/
| false | false |
self
| 1 | null |
Qwen3 support merged into transformers
| 318 |
[https://github.com/huggingface/transformers/pull/36878](https://github.com/huggingface/transformers/pull/36878)
| 2025-03-31T09:42:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnzdvp/qwen3_support_merged_into_transformers/
|
bullerwins
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnzdvp
| false | null |
t3_1jnzdvp
|
/r/LocalLLaMA/comments/1jnzdvp/qwen3_support_merged_into_transformers/
| false | false |
self
| 318 |
{'enabled': False, 'images': [{'id': 'O9nUIP1LW8iAYDRpc-phYycZf8GnDVEAg8XlIYK1EwQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=108&crop=smart&auto=webp&s=a1132d563aa5641dfd502524c8137cfd997c0e4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=216&crop=smart&auto=webp&s=f5cad84adfe52f3df613d98d1fe98726ca6110cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=320&crop=smart&auto=webp&s=487379cc736c5c10b54476f00a905b3956b8d077', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=640&crop=smart&auto=webp&s=c2f3d576fb5075da4dd20a88f0d5c6f99060a7e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=960&crop=smart&auto=webp&s=4b5440aeba39f5d3000362cac99a2e73e18b229b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=1080&crop=smart&auto=webp&s=c9559182fa265cbb53d7e3c7042fdaa680ee593e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?auto=webp&s=0bef3408496c007de07eeba26d8076f28f52a2c6', 'width': 1200}, 'variants': {}}]}
|
Need help getting my RTX 5090 working with local AI models on Windows 11 pro
| 1 |
[removed]
| 2025-03-31T09:42:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jnzdvx/need_help_getting_my_rtx_5090_working_with_local/
|
RandalTurner
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnzdvx
| false | null |
t3_1jnzdvx
|
/r/LocalLLaMA/comments/1jnzdvx/need_help_getting_my_rtx_5090_working_with_local/
| false | false |
self
| 1 | null |
[MERGED] Adding Qwen3 and Qwen3MoE ยท Pull Request #36878 ยท huggingface/transformers
| 83 |
The pull request that adds Qwen3 and Qwen3MoE support to HuggingFace's Transformers library got merged today!
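As a point of reference, a minimal sketch of what loading a Qwen3 checkpoint through the newly merged architecture classes might look like; the repo id "Qwen/Qwen3-8B" is a placeholder, since no weights had been published at the time of the post.

```python
# Minimal sketch: loading a hypothetical Qwen3 checkpoint with the standard
# transformers auto classes. The model id below is a placeholder, not a
# released repo; device_map="auto" assumes accelerate is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # hypothetical repo id, adjust once weights ship

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain what a mixture-of-experts layer does, in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```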
| 2025-03-31T09:54:44 |
https://github.com/huggingface/transformers/pull/36878
|
Balance-
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnzjnb
| false | null |
t3_1jnzjnb
|
/r/LocalLLaMA/comments/1jnzjnb/merged_adding_qwen3_and_qwen3moe_pull_request/
| false | false | 83 |
{'enabled': False, 'images': [{'id': 'O9nUIP1LW8iAYDRpc-phYycZf8GnDVEAg8XlIYK1EwQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=108&crop=smart&auto=webp&s=a1132d563aa5641dfd502524c8137cfd997c0e4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=216&crop=smart&auto=webp&s=f5cad84adfe52f3df613d98d1fe98726ca6110cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=320&crop=smart&auto=webp&s=487379cc736c5c10b54476f00a905b3956b8d077', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=640&crop=smart&auto=webp&s=c2f3d576fb5075da4dd20a88f0d5c6f99060a7e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=960&crop=smart&auto=webp&s=4b5440aeba39f5d3000362cac99a2e73e18b229b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?width=1080&crop=smart&auto=webp&s=c9559182fa265cbb53d7e3c7042fdaa680ee593e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9ERrHYa5ukfAsMPJAg_yCQX0ifla2bBT23y1hLKjU0Q.jpg?auto=webp&s=0bef3408496c007de07eeba26d8076f28f52a2c6', 'width': 1200}, 'variants': {}}]}
|
|
PC Build: Run Deepseek-V3-0324:671b-Q8 Locally 6-8 tok/s
| 241 |
Watch as I build a monster PC to run Deepseek-V3-0324:671b-Q8 locally at 6-8 tokens per second. I'm using dual EPYC 9355 processors and 768GB of 5600MHz RDIMMs (24x32GB) on a Gigabyte MZ73-LM0 motherboard. I flash the BIOS, install Ubuntu 24.04.2 LTS, ollama, Open WebUI, and more, step by step!
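For context, once a build like this has ollama serving the model, it can be queried over the local REST API. A minimal sketch, assuming the Q8 quant was pulled under a tag similar to the one below (check `ollama list` for the exact name):

```python
# Minimal sketch: querying the locally served model through ollama's REST API.
# The model tag is an assumption -- use whatever tag `ollama list` reports for
# your Deepseek-V3 quant.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-v3:671b-q8_0",  # placeholder tag
        "prompt": "Summarize the trade-offs of CPU-only inference for 671B models.",
        "stream": False,
    },
    timeout=600,  # large models take a while at 6-8 tok/s
)
print(resp.json()["response"])
```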
| 2025-03-31T10:06:50 |
https://youtu.be/v4810MVGhog
|
createthiscom
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1jnzq51
| false |
{'oembed': {'author_name': 'createthis', 'author_url': 'https://www.youtube.com/@createthisdotcom', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/v4810MVGhog?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="PC Build: Run Deepseek-V3-0324:671b-Q8 Locally 6-8 tok/s"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/v4810MVGhog/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'PC Build: Run Deepseek-V3-0324:671b-Q8 Locally 6-8 tok/s', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jnzq51
|
/r/LocalLLaMA/comments/1jnzq51/pc_build_run_deepseekv30324671bq8_locally_68_toks/
| false | false | 241 |
{'enabled': False, 'images': [{'id': 'YJD1ghIv61ses7DqfYYR8NyVvNA9-TA5iRqsaSwkgxw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mjqF-7sslVFdngpzQezzzTUL6oX500j7ElXyhrnVXck.jpg?width=108&crop=smart&auto=webp&s=717e278d086680ede935663ac68a8a939b06fd3e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mjqF-7sslVFdngpzQezzzTUL6oX500j7ElXyhrnVXck.jpg?width=216&crop=smart&auto=webp&s=b7f15e80b333aa0629bb560e3b78602aad4b4a5a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mjqF-7sslVFdngpzQezzzTUL6oX500j7ElXyhrnVXck.jpg?width=320&crop=smart&auto=webp&s=65cdc44f4fd50dd2535768214841267fd3ed7cf8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/mjqF-7sslVFdngpzQezzzTUL6oX500j7ElXyhrnVXck.jpg?auto=webp&s=e986a2b5e15a0198486571ca25d8c5890667dba5', 'width': 480}, 'variants': {}}]}
|
|
Has anyone here created their own mixture of experts using smaller models?
| 6 |
I'm curious to know whether anyone has implemented some sort of setup where one AI takes the initial prompt, evaluates it, and then passes it to the appropriate model to be answered. For example, if you're asking for code it could feed the prompt to Qwen2.5 Coder, if you want an image made it could send it to Stable Diffusion, and if you want an image analyzed it could send it to a multimodal model like Gemma 3. Different models have different strengths and weaknesses, so this could potentially be a good way to get the most out of those strengths.
If anyone has implemented something like this, I'd love to know more about how you set it all up and how it ended up working!
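A minimal sketch of the text-only part of this routing idea, assuming all specialist models are served by a local ollama instance; the model tags and the one-word classification prompt are placeholders, and image generation (e.g. Stable Diffusion) would need a separate backend.

```python
# Minimal sketch of the routing idea described above: a small "dispatcher"
# model classifies the request, then the prompt is forwarded to a specialist.
# Model tags are assumptions -- match them to your local installs.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

SPECIALISTS = {
    "code": "qwen2.5-coder:32b",   # placeholder tags
    "vision": "gemma3:27b",
    "general": "llama3.1:8b",
}

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    return resp.json()["response"]

def route(prompt: str) -> str:
    # Use a small model as the classifier; keep the answer to a single label.
    label = ask(
        SPECIALISTS["general"],
        f"Classify this request as exactly one word (code, vision, or general): {prompt}",
    ).strip().lower()
    model = SPECIALISTS.get(label, SPECIALISTS["general"])
    return ask(model, prompt)

print(route("Write a Python function that parses RSS feeds."))
```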
| 2025-03-31T11:21:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jo0u08/has_anyone_here_created_their_own_mixture_of/
|
Cannavor
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jo0u08
| false | null |
t3_1jo0u08
|
/r/LocalLLaMA/comments/1jo0u08/has_anyone_here_created_their_own_mixture_of/
| false | false |
self
| 6 | null |