title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Finally can enable CUDA to run Deepseek 8b(uncensored) on Jetson Agx Xavier (32GB) 🎉🎉🎉
| 3 |
Download ollama from https://github.com/ollama/ollama/releases/tag/v0.6.5
| 2025-04-14T08:17:58 |
https://v.redd.it/fxqh2wlzerue1
|
Tombother
|
/r/LocalLLaMA/comments/1jytrqe/finally_can_enable_cuda_to_run_deepseek/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytrqe
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fxqh2wlzerue1/DASHPlaylist.mpd?a=1747340280%2CNTc2Nzg2ZTY0YzZlZWE0MmZhNzEzY2Y0OWRiMzVjNDNlNTQ2MTZhMzhmNmMzODQ1MjZhN2RjYWYxYWE3OTAwZQ%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/fxqh2wlzerue1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fxqh2wlzerue1/HLSPlaylist.m3u8?a=1747340280%2CNjU5MjAyM2RmMWY0NWRlYmJjOWMyMTBlZGUwMWFiMWZlYmUxZGM1OGZlN2VhYzY3NTlkOWFhYTFjOTc0OTdjNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fxqh2wlzerue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1jytrqe
|
/r/LocalLLaMA/comments/1jytrqe/finally_can_enable_cuda_to_run_deepseek/
| false | false | 3 |
{'enabled': False, 'images': [{'id': 'emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=108&crop=smart&format=pjpg&auto=webp&s=3121472935c297058ce6c7be5250366338b821bd', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=216&crop=smart&format=pjpg&auto=webp&s=b26e209c498ef584914a57cb0dc537649d77569a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee8678d592b9cd6248e2eadfd9873a5dde3f2698', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=640&crop=smart&format=pjpg&auto=webp&s=df6b96b141cd15e1ee0977c27c6640e3e466c319', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=960&crop=smart&format=pjpg&auto=webp&s=081643718ecff0207d9a1caaf7cd4227f76ac992', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?width=1080&crop=smart&format=pjpg&auto=webp&s=153b9b64d86892d98a2ed9c9fe9e60473d880486', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/emRrOHlxMnllcnVlMW_02XaAaGSvYjD8sEeM2MqCvt7bhR9xNHZYww7pVdwU.png?format=pjpg&auto=webp&s=c9c0dbaf7d0cd92839761363e8be802dba26dd7f', 'width': 1920}, 'variants': {}}]}
|
|
Exploring GenAI Model Comparisons
| 1 |
[removed]
| 2025-04-14T08:22:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytto1/exploring_genai_model_comparisons/
|
TrainingField9469
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytto1
| false | null |
t3_1jytto1
|
/r/LocalLLaMA/comments/1jytto1/exploring_genai_model_comparisons/
| false | false |
self
| 1 | null |
Power users needed: Looking for feedback for HumanFirst.ai's AI studio for context/prompt engineering
| 0 |
Hey everyone
I work for HumanFirst (www.humanfirst.ai) we are reaching out to AI professionals who might be interested in trying a new approach to prompt/context management in their AI work.
HumanFirst is an AI studio for power users and teams who are building complex and/or reusable prompts. It gives you more control and efficiency in building, testing, and managing your work.
We’re tackling where power users are getting stuck in other platforms:
Building and managing prompts with sufficient context
Managing reference data, documents, and few-shot examples with full control (no knowledge base confusion, no chat limits, no massive text walls)
Running prompts on unlimited inputs simultaneously
Testing & iterating on prompts used for automations & agents
We're offering free access to our beta version for AI and optional personalized onboarding. We're simply interested in getting the tool into the hands of people who work with AI daily. If you have thoughts after trying it, we'd certainly welcome hearing them, but that's entirely optional.
If you're curious and would like to give it a try, just visit [www.humanfirst.ai](http://www.humanfirst.ai)
| 2025-04-14T08:24:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytuj3/power_users_needed_looking_for_feedback_for/
|
Useful_Composer_6676
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytuj3
| false | null |
t3_1jytuj3
|
/r/LocalLLaMA/comments/1jytuj3/power_users_needed_looking_for_feedback_for/
| false | false |
self
| 0 | null |
Exploring GenAI Model Comparisons
| 1 |
[removed]
| 2025-04-14T08:24:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytuqd/exploring_genai_model_comparisons/
|
TrainingField9469
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytuqd
| false | null |
t3_1jytuqd
|
/r/LocalLLaMA/comments/1jytuqd/exploring_genai_model_comparisons/
| false | false |
self
| 1 | null |
Open Sourcing a framework to build SLMs for any regional language
| 7 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/jorc5k68grue1.png?width=1438&format=png&auto=webp&s=fcea88745cbcc03d289cd5f7d7ebd8cb82eaa008\n\nThis is our first major contribution towards building foundational LLM capacity for India. \n\nThe research paper associated with this work can be found here: [https://arxiv.org/pdf/2504.07989](https://arxiv.org/pdf/2504.07989)\n\nWe believe in open source 100% and have released a Github repository here: [https://github.com/VizuaraAI/Tiny-Stories-Regional](https://github.com/VizuaraAI/Tiny-Stories-Regional)\n\n**Anyone can use this repository to build a Small Language Model (SLM) for their language of choice.** \n\nHere is how we built these models: \n\n(1) We based our methodology on the TinyStories Paper which Microsoft released in 2023: [https://arxiv.org/abs/2305.07759](https://arxiv.org/abs/2305.07759)\n\n(2) We generated the datasets in regional languages. \n\n(3) We built a language model architecture from scratch for pre-training. \n\n(4) During inference, we evaluated the model creativity, completeness, fluency and grammar. \n\n(5) We used this framework as a proxy for comparing regional tokenizers.\n\nI feel the biggest takeaway from this work is that the framework we have outlined can be utilized by the community to create SLMs fro underrepresented, regional languages. " | 2025-04-14T08:25:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytv2q/open_sourcing_a_framework_to_build_slms_for_any/
|
OtherRaisin3426
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytv2q
| false | null |
t3_1jytv2q
|
/r/LocalLLaMA/comments/1jytv2q/open_sourcing_a_framework_to_build_slms_for_any/
| false | false | 7 | null |
|
Exploring GenAI Model Comparisons
| 1 |
[removed]
| 2025-04-14T08:25:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytvaz/exploring_genai_model_comparisons/
|
TrainingField9469
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytvaz
| false | null |
t3_1jytvaz
|
/r/LocalLLaMA/comments/1jytvaz/exploring_genai_model_comparisons/
| false | false |
self
| 1 | null |
DeepSeek is about to open-source their inference engine
| 1,554 |
DeepSeek is about to open-source their inference engine, which is a modified version based on vLLM. Now, DeepSeek is preparing to contribute these modifications back to the community.
I really like the last sentence: 'with the goal of enabling the community to achieve state-of-the-art (SOTA) support from Day-0.'
Link: [https://github.com/deepseek-ai/open-infra-index/tree/main/OpenSourcing\_DeepSeek\_Inference\_Engine](https://github.com/deepseek-ai/open-infra-index/tree/main/OpenSourcing_DeepSeek_Inference_Engine)
| 2025-04-14T08:27:29 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytw62
| false | null |
t3_1jytw62
|
/r/LocalLLaMA/comments/1jytw62/deepseek_is_about_to_opensource_their_inference/
| false | false |
default
| 1,554 |
{'enabled': True, 'images': [{'id': '1am95yongrue1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/1am95yongrue1.png?width=108&crop=smart&auto=webp&s=01897b18383eab9dc0f915335600e8601a8a6c60', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/1am95yongrue1.png?width=216&crop=smart&auto=webp&s=f9ae71d7bfbf7927776067727b0cdc68a51de351', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/1am95yongrue1.png?width=320&crop=smart&auto=webp&s=86d381180a8116b873b9a813fc7f00109e37ca31', 'width': 320}, {'height': 544, 'url': 'https://preview.redd.it/1am95yongrue1.png?width=640&crop=smart&auto=webp&s=967ad74640babe443b3c9a2867547f568219bda6', 'width': 640}, {'height': 817, 'url': 'https://preview.redd.it/1am95yongrue1.png?width=960&crop=smart&auto=webp&s=fecc4dd6d79ffaa421c06947d120966a464c9194', 'width': 960}], 'source': {'height': 909, 'url': 'https://preview.redd.it/1am95yongrue1.png?auto=webp&s=0288c8e1cc316e7cb4556873a9b17f5e7a68cfb3', 'width': 1068}, 'variants': {}}]}
|
|
New to Running Local LLM, a question
| 0 |
Hi everyone, hope everyone is doing well.
I have a question about running LLM's locally.
Is there a big difference with the publicly available LLM's like Claude, ChatGPT, Deepseek, ...
In output?
If i run Gemma locally for coding tasks, does it work well?
How should i compare this?
question nr 2.
Which model should i use for image generation atm?
Thanks everyone, and have a nice day!
| 2025-04-14T08:30:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jytxgo/new_to_running_local_llm_a_question/
|
Siinxx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jytxgo
| false | null |
t3_1jytxgo
|
/r/LocalLLaMA/comments/1jytxgo/new_to_running_local_llm_a_question/
| false | false |
self
| 0 | null |
llama was so deep that now ex employee saying that we r not involved in that project
| 686 | 2025-04-14T08:36:06 |
Select_Dream634
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyu06v
| false | null |
t3_1jyu06v
|
/r/LocalLLaMA/comments/1jyu06v/llama_was_so_deep_that_now_ex_employee_saying/
| false | false |
default
| 686 |
{'enabled': True, 'images': [{'id': '49mfsia3irue1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=108&crop=smart&auto=webp&s=363666cd4f66cd2a93b25f9a3ce181ff0b873295', 'width': 108}, {'height': 269, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=216&crop=smart&auto=webp&s=a42cef44fc1e8e7ef1316ac08736ec5ff377e07b', 'width': 216}, {'height': 399, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=320&crop=smart&auto=webp&s=34d8d57d4f75848e4c17879b402c9e55eefe7e81', 'width': 320}, {'height': 798, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=640&crop=smart&auto=webp&s=b3266a093713e9cb503b3634a7a8b1f7fb0852f0', 'width': 640}, {'height': 1198, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=960&crop=smart&auto=webp&s=3118be474472aeee0b93c99c93bc6af01d7ec81b', 'width': 960}, {'height': 1348, 'url': 'https://preview.redd.it/49mfsia3irue1.png?width=1080&crop=smart&auto=webp&s=af74bd64573c04e7198c3d94e8a65b3854a70f5c', 'width': 1080}], 'source': {'height': 1348, 'url': 'https://preview.redd.it/49mfsia3irue1.png?auto=webp&s=05db2b32db1ef789d5d20eb6a92f10e3327650ab', 'width': 1080}, 'variants': {}}]}
|
||
Optimized models for horror stories creation/creepypasta?
| 2 |
[removed]
| 2025-04-14T09:02:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyucrs/optimized_models_for_horror_stories/
|
No-East956
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyucrs
| false | null |
t3_1jyucrs
|
/r/LocalLLaMA/comments/1jyucrs/optimized_models_for_horror_stories/
| false | false |
self
| 2 | null |
Optimized models for horror stories creation/creepypasta?
| 1 |
[removed]
| 2025-04-14T09:09:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyufx5/optimized_models_for_horror_stories/
|
Lanky_Grocery_511
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyufx5
| false | null |
t3_1jyufx5
|
/r/LocalLLaMA/comments/1jyufx5/optimized_models_for_horror_stories/
| false | false |
self
| 1 | null |
Guide of Realistic Sex Dolls Customization
| 1 |
[removed]
| 2025-04-14T09:19:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyul1u/guide_of_realistic_sex_dolls_customization/
|
Altruistic-League586
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyul1u
| false | null |
t3_1jyul1u
|
/r/LocalLLaMA/comments/1jyul1u/guide_of_realistic_sex_dolls_customization/
| false | false |
self
| 1 | null |
Finetuning llama 3- max seq length question
| 1 |
[removed]
| 2025-04-14T09:28:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyupkq/finetuning_llama_3_max_seq_length_question/
|
nicole111199
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyupkq
| false | null |
t3_1jyupkq
|
/r/LocalLLaMA/comments/1jyupkq/finetuning_llama_3_max_seq_length_question/
| false | false |
self
| 1 | null |
Llm for source code and log file analysis
| 0 |
Hello,
not a total noob here but I seem to miss something as I cannot really make friends with local llm's for my purposes yet.
Lately I tried to analyse source codes, log files - like asking verbal questions about them, etc. -, trying to extract well formed sql queries out of a big java project, asking questions about the sql queries etc.
First I struggled to find a fitting model which would do the job - kind of - on a notebook (ryzen 7, 40gb ram).
The results were very much of mixed quality, sometime smaller models were more accurate/helpful than bigger ones or even trimmed to code analysis. They were very slow.
I tried to optimize my prompts. There might be still some more potential in enhancing them but it was only little help.
Bigger models are obviously slow, i tried to process my data in chunks not to exceed context limitations. Integration in python was really easy and helpful.
I still dont get good results consistently, a lot of experimenting, a lot of time is going into this for me.
I started to question if this is even possible with the hardware I have available or am I simply expecting too much here.
Or am I missing some best practice, some good models, some good setup/configuration?
I use mostly the gpt4all application on windows with HF models.
| 2025-04-14T09:30:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyuqr3/llm_for_source_code_and_log_file_analysis/
|
scubid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyuqr3
| false | null |
t3_1jyuqr3
|
/r/LocalLLaMA/comments/1jyuqr3/llm_for_source_code_and_log_file_analysis/
| false | false |
self
| 0 | null |
Seeking Feedback on my "Road to Free Open AGI" Project (Website, Tools, Apps, Timeline)
| 1 |
[removed]
| 2025-04-14T09:46:23 |
https://freeopenagi.pages.dev
|
Savings_Heron_2153
|
freeopenagi.pages.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyuyl1
| false | null |
t3_1jyuyl1
|
/r/LocalLLaMA/comments/1jyuyl1/seeking_feedback_on_my_road_to_free_open_agi/
| false | false |
default
| 1 | null |
Help saving a mini.ai
| 1 |
[removed]
| 2025-04-14T09:48:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyuzo6/help_saving_a_miniai/
|
tm-administrator
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyuzo6
| false | null |
t3_1jyuzo6
|
/r/LocalLLaMA/comments/1jyuzo6/help_saving_a_miniai/
| false | false |
self
| 1 | null |
What would you say are the best open models for code generation?
| 6 |
I just thought I would pick the community's brain and see what people thought were the best language models for generating software. I am particularly interested in knowledge of the mechanics of structuring code, as well as Python and Javascript lanaguages, but I welcome all input on the best models for code generation in general.
My personal use case is not generating complete sofware per-se, but augmenting my own coding with AI generated testing and documentation through the CLI (not IDE). I love coding but I hate writing tests and documentation. I'd love to improve my efficiency and enjoyment by offloading testing and documentation to AI, so I am looking into how I would structure and implement that. I am not looking for productized solutions.
My ultimate goal is to have a model / models I can run locally or on my own servers.
| 2025-04-14T10:01:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyv6if/what_would_you_say_are_the_best_open_models_for/
|
awebb78
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyv6if
| false | null |
t3_1jyv6if
|
/r/LocalLLaMA/comments/1jyv6if/what_would_you_say_are_the_best_open_models_for/
| false | false |
self
| 6 | null |
Parsera 0.2.5 – Parse HTML with predictable data types
| 3 |
Hi everyone,
When parsing HTML with LLMs, you quickly run into weird inconsistencies, like asking for a price and getting `$19.99` one time, and just `19.99` the next time. Add in commas, quotes, or different locales, and it quickly becomes a big headache.
That’s why we just released **Parsera 0.2.5**, which introduces type control by leveraging structured outputs available in some models.
To learn more about typing, check out the doc: [https://docs.parsera.org/getting-started/#specify-output-types](https://docs.parsera.org/getting-started/#specify-output-types)
**P.S.** We hit a wall trying to get Gemini’s structured output to work with Pydantic models. If you’ve figured out a working setup or have any solid resources, please share!
| 2025-04-14T10:05:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyv8j5/parsera_025_parse_html_with_predictable_data_types/
|
Financial-Article-12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyv8j5
| false | null |
t3_1jyv8j5
|
/r/LocalLLaMA/comments/1jyv8j5/parsera_025_parse_html_with_predictable_data_types/
| false | false |
self
| 3 | null |
Best local model for rewording things that doesn't require a super computer
| 1 |
[removed]
| 2025-04-14T10:12:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvcf4/best_local_model_for_rewording_things_that_doesnt/
|
MoistMullet
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvcf4
| false | null |
t3_1jyvcf4
|
/r/LocalLLaMA/comments/1jyvcf4/best_local_model_for_rewording_things_that_doesnt/
| false | false |
self
| 1 | null |
🔥 MCP + Gemini 2.5 Pro 🔥
| 0 |
The new 'mcp-use' project is really cool!
You can use any MCP server as a tool with Langchain in just a few lines of code.
Go build with this 👇
GitHub: [https://github.com/mcp-use/mcp-use](https://github.com/mcp-use/mcp-use)
https://preview.redd.it/3zwazizh1sue1.png?width=3332&format=png&auto=webp&s=fd616280386275c049293805d71cb0dba295b84f
| 2025-04-14T10:24:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvisc/mcp_gemini_25_pro/
|
Dart7989
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvisc
| false | null |
t3_1jyvisc
|
/r/LocalLLaMA/comments/1jyvisc/mcp_gemini_25_pro/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'FwjlN8DqzUeHEsuRxXb6cinImWEYJrHfLT2W1d-YTp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=108&crop=smart&auto=webp&s=22447f465c751b788926b2c8529ab3e1bc1e980a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=216&crop=smart&auto=webp&s=88c03ec6bc3b814933a8e0af847db81f53b48235', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=320&crop=smart&auto=webp&s=c6063d95dd690821232bd164ea8e347d897d3020', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=640&crop=smart&auto=webp&s=e16829a0e9dd6c994332e23eeec3af4ec2cf250b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=960&crop=smart&auto=webp&s=184839b20079ed8b52de68452b63cf68efb4a381', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?width=1080&crop=smart&auto=webp&s=86551576fabc2e241ebfee2a1c401e2ee278cbe6', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/pQksJaF2mwlXxxuN6xAzpnxDHWTax6v9ShWD06zNPI4.jpg?auto=webp&s=a00eb40b2fa4fbc001f307fe82e1a7bad9dc4097', 'width': 2560}, 'variants': {}}]}
|
|
Local longer context coding
| 0 |
So this weekend I spent vibe-coding various apps and found that just spamming the LLM until it generated what I wanted was quite a quick way to get something quick and dirty up and running.
However, it is then very heavy on context unless you take time to manage it (and then maybe it makes sense just to code normally).
It made me think, for those using local LLMs for coding, what LLMs are you using. I'd like to get something that works well up to, say around 200k context. With strength in structuring projects and python language.
Qwen 2.5 Coder 32B has a nominal 128k context. Is there anything better than this you can run locally?
| 2025-04-14T10:24:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvitj/local_longer_context_coding/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvitj
| false | null |
t3_1jyvitj
|
/r/LocalLLaMA/comments/1jyvitj/local_longer_context_coding/
| false | false |
self
| 0 | null |
[2504.02507] ZClip: Adaptive Spike Mitigation for LLM Pre-Training
| 4 |
Hey everyone! I'm one of the researchers behind **ZClip: Adaptive Spike Mitigation for LLM Pre-Training**.
ZClip is a lightweight and adaptive gradient clipping method designed to **reduce loss spikes during LLM training**. Instead of relying on a fixed threshold like traditional gradient clipping, ZClip uses a **z-score-based approach** to detect and clip only abnormal gradient spikes—those that significantly deviate from the recent moving average.
This helps maintain training stability without interfering with convergence, and it’s easy to integrate into any training loop.
🔗 **Pape**r: [https://huggingface.co/papers/2504.02507](https://huggingface.co/papers/2504.02507)
💻 **Cod**e: [github.com/bluorion-com/ZClip](https://github.com/bluorion-com/ZClip)
Would love to hear your thoughts or questions!
| 2025-04-14T10:47:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvv8y/250402507_zclip_adaptive_spike_mitigation_for_llm/
|
akanyaani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvv8y
| false | null |
t3_1jyvv8y
|
/r/LocalLLaMA/comments/1jyvv8y/250402507_zclip_adaptive_spike_mitigation_for_llm/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'sXuNRzgE_m3OyJPUYOM5g1I5cCOwKdjwUhYpK8M96I0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=108&crop=smart&auto=webp&s=22281dfeade15138c65d0fb2ad54f88a536fc3d3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=216&crop=smart&auto=webp&s=1ffaafd82602d94941b28be0b8f83a88132a0090', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=320&crop=smart&auto=webp&s=c68b37de113bd63ac8666cc714899f95f246be89', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=640&crop=smart&auto=webp&s=274d183e9b355a70139984302f1b6d5200ca2c77', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=960&crop=smart&auto=webp&s=ce5fa3de95e9678af362eba018b21c926e35bb99', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?width=1080&crop=smart&auto=webp&s=a3662c769f04553656e8662013978447cf03f614', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Swd9uQN43Dpl2SJyH6zjTbJAdRaXwKbmzZwM9L2rPXk.jpg?auto=webp&s=4bfd90ade0743a669b69118cc8abd97f5cf43d5f', 'width': 1200}, 'variants': {}}]}
|
Moving from 48 to 64 NVRAM. What could you do extra?
| 3 |
If you could replace 2x3090 with 2x5090 are there any models that would make a difference to coding, text generation and processing, writing, etc.
Not asking if worth it, consider this money no object question (reasons). Thanks.
| 2025-04-14T10:51:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvxk4/moving_from_48_to_64_nvram_what_could_you_do_extra/
|
Otherwise-Tiger3359
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvxk4
| false | null |
t3_1jyvxk4
|
/r/LocalLLaMA/comments/1jyvxk4/moving_from_48_to_64_nvram_what_could_you_do_extra/
| false | false |
self
| 3 | null |
GLM-4-0414 (9B/32B) (w. & wo. reasoning) Ready to Release
| 89 |
Seems the developer is making final preparations : [https://github.com/zRzRzRzRzRzRzR/GLM-4](https://github.com/zRzRzRzRzRzRzR/GLM-4) (note this is developer's fork, only for reference)
Huggingface collection is created (but empty for now): [https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e)
The release contains following models:
https://preview.redd.it/6j2pwsl17sue1.png?width=943&format=png&auto=webp&s=55349ae54f8626f4a068dde1f33b750d87236395
| 2025-04-14T10:55:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyvzqg/glm40414_9b32b_w_wo_reasoning_ready_to_release/
|
NeterOster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyvzqg
| false | null |
t3_1jyvzqg
|
/r/LocalLLaMA/comments/1jyvzqg/glm40414_9b32b_w_wo_reasoning_ready_to_release/
| false | false | 89 |
{'enabled': False, 'images': [{'id': '7wcdPSKtZGnWukVEMp0hzZXVjiysDeaSaX9hge3AgJ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=108&crop=smart&auto=webp&s=0fb057810e1d4ad78e7445aa4c92366903348727', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=216&crop=smart&auto=webp&s=cd2d76a40dd032dbea9367ce654505c95d2ce8ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=320&crop=smart&auto=webp&s=180b59744bfd9593b9ec61a6dcda1254c2a7e94e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=640&crop=smart&auto=webp&s=71971286a2d2292f2a0a2b67094dc5e3c3a4b46e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=960&crop=smart&auto=webp&s=e2bb23b3974b673a5dbc5ab2a4227b4a3a7327ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?width=1080&crop=smart&auto=webp&s=8eb3f621973a3cde728dfed2b1a086eb6e2ed7ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rUMQwGzzv049_AQ65R_I2zx8r9Fk1GPQDozFx082Elc.jpg?auto=webp&s=ddec5051983fba70272d10d498b2feae20494369', 'width': 1200}, 'variants': {}}]}
|
|
What Happens When Two AIs Talk Alone?
| 1 |
[removed]
| 2025-04-14T10:56:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyw0el/what_happens_when_two_ais_talk_alone/
|
nik0rr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyw0el
| false | null |
t3_1jyw0el
|
/r/LocalLLaMA/comments/1jyw0el/what_happens_when_two_ais_talk_alone/
| false | false |
self
| 1 | null |
Zhipu AI Set to Release New GLM Models Today
| 6 |
Someone in the community posted today saying "[It's been a while since Zhipu AI released a new GLM model.](https://www.reddit.com/r/LocalLLaMA/comments/1jyr38c/its_been_a_while_since_zhipu_ai_released_a_new/)"
It appears that Zhipu AI is launching a new series of GLM-4 models today after quite some time since their last release. According to information I've seen in the [Github](https://github.com/zRzRzRzRzRzRzR/GLM-4), they're releasing multiple variants including chat and reasoning models in two different sizes: 9B and 32B parameter versions.
The model collection link is already live on [Hugging Face](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e), so let's stay tuned for the good news!
| 2025-04-14T11:01:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyw3g0/zhipu_ai_set_to_release_new_glm_models_today/
|
nekofneko
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyw3g0
| false | null |
t3_1jyw3g0
|
/r/LocalLLaMA/comments/1jyw3g0/zhipu_ai_set_to_release_new_glm_models_today/
| false | false |
self
| 6 | null |
Am i the only one who thinks that chatgpt is better than gemini on daily basis? İ just asked gemini to test boost diet and he recom eating plant seeds
| 0 | 2025-04-14T11:17:28 |
FRENLYFROK
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jywcvh
| false | null |
t3_1jywcvh
|
/r/LocalLLaMA/comments/1jywcvh/am_i_the_only_one_who_thinks_that_chatgpt_is/
| false | false |
default
| 0 |
{'enabled': True, 'images': [{'id': 'dhxluqg2bsue1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/dhxluqg2bsue1.jpeg?width=108&crop=smart&auto=webp&s=1c5d6353a20b52b721140a801cdbb94b97ae8109', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/dhxluqg2bsue1.jpeg?width=216&crop=smart&auto=webp&s=374b50dd72b891813b47d61ba646503900732799', 'width': 216}], 'source': {'height': 225, 'url': 'https://preview.redd.it/dhxluqg2bsue1.jpeg?auto=webp&s=a68f8bc3a2b9983510c46b4f40b0a9dd20512317', 'width': 225}, 'variants': {}}]}
|
||
Why is Qwen 2.5 Omni not being talked about enough?
| 155 |
I think the Qwen models are pretty good, I've been using a lot of them locally.
They recently (a week or some ago) released 2.5 Omni, which is a 7B real-time multimodal model, that simultaneously generates text and natural speech.
[Qwen/Qwen2.5-Omni-7B · Hugging Face](https://huggingface.co/Qwen/Qwen2.5-Omni-7B)
I think It would be great to use for something like a local AI alexa clone. But on youtube there's almost no one testing it, and even here, not a lot of people talking about it.
What is it?? Am I over-expecting from this model? or I'm just not well informed about alternatives, please enlighten me.
| 2025-04-14T11:23:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jywg95/why_is_qwen_25_omni_not_being_talked_about_enough/
|
BeetranD
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jywg95
| false | null |
t3_1jywg95
|
/r/LocalLLaMA/comments/1jywg95/why_is_qwen_25_omni_not_being_talked_about_enough/
| false | false |
self
| 155 |
{'enabled': False, 'images': [{'id': 'oSGTtjHTR-N_4v67xkWDTytqo2JkRJyhlOq_IT9ucJo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=108&crop=smart&auto=webp&s=e111436b6ae391ef710d78a1ad44fba3b41d2017', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=216&crop=smart&auto=webp&s=40b0375e578ca4f668a3ee8bbee01ca36a53dc33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=320&crop=smart&auto=webp&s=acd6eb3a6932c652999662ecd70347363a4fd239', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=640&crop=smart&auto=webp&s=be40495e2b1d57173ebf46c043544693d2bbcf52', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=960&crop=smart&auto=webp&s=8d4cd071bba5a29a1efc8118ed14b418cb6e500a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=1080&crop=smart&auto=webp&s=a534d196d9729ef96f8237e1672864eb298352ff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?auto=webp&s=f06ebdc1d447d5c6303aaf69c9f8b09ec4f613cf', 'width': 1200}, 'variants': {}}]}
|
Experience with LightRAG
| 1 |
[removed]
| 2025-04-14T11:23:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jywgc8/experience_with_lightrag/
|
zero_coding
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jywgc8
| false | null |
t3_1jywgc8
|
/r/LocalLLaMA/comments/1jywgc8/experience_with_lightrag/
| false | false |
self
| 1 | null |
RTX 4070 and LLM's for text aggregation
| 1 |
[removed]
| 2025-04-14T11:41:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jywrrk/rtx_4070_and_llms_for_text_aggregation/
|
Away_Cartoonist_1053
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jywrrk
| false | null |
t3_1jywrrk
|
/r/LocalLLaMA/comments/1jywrrk/rtx_4070_and_llms_for_text_aggregation/
| false | false |
self
| 1 | null |
Best model to generate audio for a video?
| 1 |
[removed]
| 2025-04-14T11:46:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1jywuul/best_model_to_generate_audio_for_a_video/
|
Swimming_Screen_4655
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jywuul
| false | null |
t3_1jywuul
|
/r/LocalLLaMA/comments/1jywuul/best_model_to_generate_audio_for_a_video/
| false | false |
self
| 1 | null |
Latest frontier models are drunk professors
| 49 | 2025-04-14T11:55:20 |
https://x.com/hyperknot/status/1911747818890432860
|
hyperknot
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyx01y
| false | null |
t3_1jyx01y
|
/r/LocalLLaMA/comments/1jyx01y/latest_frontier_models_are_drunk_professors/
| false | false |
default
| 49 | null |
|
Hybrid Mamba Transformer VS Transformer architecture explanation
| 1 |
A short video explaining the differences between Transformer architecture and RNN (Recurrent Neural Networks) and the decisions that lead companies like Hunyuan to use Hybrid Mamba Transformer architecture that combines both.
X Post: [https://x.com/tencenthunyuan/status/1911746333662404932](https://x.com/tencenthunyuan/status/1911746333662404932)
| 2025-04-14T12:02:55 |
https://v.redd.it/nhrgqqttisue1
|
ResearchCrafty1804
|
/r/LocalLLaMA/comments/1jyx573/hybrid_mamba_transformer_vs_transformer/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyx573
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nhrgqqttisue1/DASHPlaylist.mpd?a=1747353791%2CZmI4YTNlMTFhYmVkM2RlN2ZkZjY4MGEwMTdkMjg2MTc1MDBlOTFkMDM1OTE1Y2NmMWJhMWE1ZmQ2YTNmM2UzNw%3D%3D&v=1&f=sd', 'duration': 319, 'fallback_url': 'https://v.redd.it/nhrgqqttisue1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/nhrgqqttisue1/HLSPlaylist.m3u8?a=1747353791%2CZTIyMGI3NTRiYzFmZmRmN2M5NjNlZDk5ZTRkOTU0OWJjYmMxMGJlNzUxNGM3OTAxZGZiMzQ4NzI5MGQ2N2NmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nhrgqqttisue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1jyx573
|
/r/LocalLLaMA/comments/1jyx573/hybrid_mamba_transformer_vs_transformer/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T.png?width=108&crop=smart&format=pjpg&auto=webp&s=d1cc799be70d2781d351debc42e7a7d39901d7b8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T.png?width=216&crop=smart&format=pjpg&auto=webp&s=1d3f2a367b24e3ad50c9bac901e26d81df425b91', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T.png?width=320&crop=smart&format=pjpg&auto=webp&s=fc7c8abe7b609f538a237a47fedfeb0bff8bb60f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T.png?width=640&crop=smart&format=pjpg&auto=webp&s=cc2de035cd01a2af97d335be98548ad8138d685e', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dzBicWlvZjNqc3VlMcE8jpH6t9htapupVHZeOBlFiMAGGMr-HEspqT6z-i6T.png?format=pjpg&auto=webp&s=c1b61042a79a10e65a4a126c875efd568a726ff5', 'width': 720}, 'variants': {}}]}
|
|
Hybrid Mamba Transformer VS Transformer architecture explanation
| 26 |
[A short video explaining the differences between Transformer architecture and RNN \(Recurrent Neural Networks\) and the decisions that lead companies like Hunyuan to use Hybrid Mamba Transformer architecture that combines both.](https://reddit.com/link/1jyx6yb/video/5py7irqhjsue1/player)
A short video explaining the differences between Transformer architecture and RNN (Recurrent Neural Networks) and the decisions that lead companies like Hunyuan to use Hybrid Mamba Transformer architecture that combines both.
X Post: [https://x.com/tencenthunyuan/status/1911746333662404932](https://x.com/tencenthunyuan/status/1911746333662404932)
| 2025-04-14T12:05:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyx6yb/hybrid_mamba_transformer_vs_transformer/
|
ResearchCrafty1804
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyx6yb
| false | null |
t3_1jyx6yb
|
/r/LocalLLaMA/comments/1jyx6yb/hybrid_mamba_transformer_vs_transformer/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': 'U8HHJRRRxyVX30Dr-FTWIrM-VGdPMreEuCQPuAYSYiE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vUqwSGK4ls7qPLhFNgUyUViik9ORJBr2OXMEammJfpk.jpg?width=108&crop=smart&auto=webp&s=0b1dc492b2e257e077b3da9010ce058519078855', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vUqwSGK4ls7qPLhFNgUyUViik9ORJBr2OXMEammJfpk.jpg?width=216&crop=smart&auto=webp&s=9f9b13e1668d2f38ffc32e5bd63604bb80f68586', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vUqwSGK4ls7qPLhFNgUyUViik9ORJBr2OXMEammJfpk.jpg?width=320&crop=smart&auto=webp&s=ad641d57e5b1e3a401bc0ee13b5facb1dd481e39', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vUqwSGK4ls7qPLhFNgUyUViik9ORJBr2OXMEammJfpk.jpg?width=640&crop=smart&auto=webp&s=9d2041d21b3afca22c1de980b3f5fc8348ac954c', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/vUqwSGK4ls7qPLhFNgUyUViik9ORJBr2OXMEammJfpk.jpg?auto=webp&s=8378c7faaa2d2a5da8fd596d3e8c51f87c89d649', 'width': 720}, 'variants': {}}]}
|
Use Cursor with Ollama and local models
| 1 |
[removed]
| 2025-04-14T12:13:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyxcmj/use_cursor_with_ollama_and_local_models/
|
Quick-Ad-8660
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyxcmj
| false | null |
t3_1jyxcmj
|
/r/LocalLLaMA/comments/1jyxcmj/use_cursor_with_ollama_and_local_models/
| false | false |
self
| 1 | null |
Where and How to make the rising intonation of words with Python api and get the mp3 file (kokoro, sesame-maya, etc)? for example, pronounce 'apple' as 'apple?'
| 1 |
[removed]
| 2025-04-14T12:35:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyxrkr/where_and_how_to_make_the_rising_intonation_of/
|
blackantt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyxrkr
| false | null |
t3_1jyxrkr
|
/r/LocalLLaMA/comments/1jyxrkr/where_and_how_to_make_the_rising_intonation_of/
| false | false |
self
| 1 | null |
GMKtec EVO-X2 Presale Opens 15 April 12am PDT!
| 17 |
Really excited as framework doesn't deliver to my place
| 2025-04-14T12:42:39 |
https://www.gmktec.com/pages/evo-x2?spm=..index.image_slideshow_1.1
|
NeonRitual
|
gmktec.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyxwob
| false | null |
t3_1jyxwob
|
/r/LocalLLaMA/comments/1jyxwob/gmktec_evox2_presale_opens_15_april_12am_pdt/
| false | false |
default
| 17 | null |
What can I do with RTX 5090 that I couldn't do with RTX 4090
| 20 |
Hi, the question like in the topic, i am not limiting myself only to llm. It could be video generation/sound/text/3d models etc.
Best regards
| 2025-04-14T13:08:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyygcj/what_can_i_do_with_rtx_5090_that_i_couldnt_do/
|
polawiaczperel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyygcj
| false | null |
t3_1jyygcj
|
/r/LocalLLaMA/comments/1jyygcj/what_can_i_do_with_rtx_5090_that_i_couldnt_do/
| false | false |
self
| 20 | null |
Looking for some high quality local TTS
| 1 |
[removed]
| 2025-04-14T13:09:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyygsb/looking_for_some_high_quality_local_tts/
|
No_Chair9618
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyygsb
| false | null |
t3_1jyygsb
|
/r/LocalLLaMA/comments/1jyygsb/looking_for_some_high_quality_local_tts/
| false | false |
self
| 1 | null |
Apparently Llama 3.2 thinks this is true.
| 1 | 2025-04-14T13:25:37 |
NeedNegativeAura
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyytcl
| false | null |
t3_1jyytcl
|
/r/LocalLLaMA/comments/1jyytcl/apparently_llama_32_thinks_this_is_true/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': '169vgsbvxsue1', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=108&crop=smart&auto=webp&s=94abfd0faf95d213c9a7d5abe5ca9957f9e03e42', 'width': 108}, {'height': 48, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=216&crop=smart&auto=webp&s=9b2617e2c1af6a519ad557aae2a8306df85f7147', 'width': 216}, {'height': 71, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=320&crop=smart&auto=webp&s=6c2d403f7f55556f87b5705cf9c63423a78cbd36', 'width': 320}, {'height': 143, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=640&crop=smart&auto=webp&s=e6934c1416cdd40fbccc54f48b5be694f3a6d002', 'width': 640}, {'height': 214, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=960&crop=smart&auto=webp&s=1916e6cbe1070b7a9c1858c96cc42fc4ea1cf909', 'width': 960}, {'height': 241, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?width=1080&crop=smart&auto=webp&s=53981693e04699bd66ad7e18be61828936b99e91', 'width': 1080}], 'source': {'height': 580, 'url': 'https://preview.redd.it/169vgsbvxsue1.png?auto=webp&s=bd541bceb870e87cb792fed1444fa04d279ea98b', 'width': 2592}, 'variants': {}}]}
|
||
Traitorous Models: Benchmarking Open Source Models on ‘The Traitors’
| 1 |
[removed]
| 2025-04-14T13:33:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyyz87/traitorous_models_benchmarking_open_source_models/
|
Embarrassed_Towel_63
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyyz87
| false | null |
t3_1jyyz87
|
/r/LocalLLaMA/comments/1jyyz87/traitorous_models_benchmarking_open_source_models/
| false | false |
self
| 1 | null |
Your thoughts on AI-2027?
| 0 |
I listened to the Hard Fork interview with the author and then read the paper. To me it feels very accurate, but also is quite bleak. But I can't figure out why it wouldn't play out this way, other than AI research hitting some sort of wall. (be it 2 years or 5 years).
\[www.nytimes.com/2025/04/11/podcasts/hardfork-tariffs-ai-2027-llama.html\](http://www.nytimes.com/2025/04/11/podcasts/hardfork-tariffs-ai-2027-llama.html)
\[AI-2027.com\](http://AI-2027.com)
| 2025-04-14T13:34:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyz0m1/your_thoughts_on_ai2027/
|
DrDisintegrator
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyz0m1
| false | null |
t3_1jyz0m1
|
/r/LocalLLaMA/comments/1jyz0m1/your_thoughts_on_ai2027/
| false | false |
self
| 0 | null |
Kimina-Prover Preview - New SOTA on theorem proving 80.7% miniF2F
| 45 |
New SOTA of 80.7% for theorem proving on \`miniF2F\`!
Idea is to combine reasoning models (o1/r1-style) with formal maths (Lean 4) and apply RL to get human-readable proofs.
Distilled Kimina-Prover 1.5B & 7B models on [🤗 Hugging Face](https://huggingface.co/collections/AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9)
https://preview.redd.it/5hxdploeysue1.png?width=1590&format=png&auto=webp&s=81f9b08c6e6eb2382c7eecb53bd589e0f2c3e3cd
IMO 1968 P5 (1st part) solution found by Kimina-Prover:
https://preview.redd.it/96slg6sszsue1.png?width=1654&format=png&auto=webp&s=52904f263895c9f13318e3c9fb1933855aa4c4f8
https://preview.redd.it/ns8p29lwzsue1.png?width=1652&format=png&auto=webp&s=039dfa8aab4bc272b8578642502e1a9eb33e6aeb
📑 Technical report: [Kimina\_Prover\_Preview.pdf](https://github.com/MoonshotAI/Kimina-Prover-Preview/blob/master/Kimina_Prover_Preview.pdf)
🤗 Models: [AI-MO/kimina-prover-preview](https://huggingface.co/collections/AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9)
| 2025-04-14T13:39:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyz4a1/kiminaprover_preview_new_sota_on_theorem_proving/
|
frunkp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyz4a1
| false | null |
t3_1jyz4a1
|
/r/LocalLLaMA/comments/1jyz4a1/kiminaprover_preview_new_sota_on_theorem_proving/
| false | false | 45 |
{'enabled': False, 'images': [{'id': 'XgXbDwDLXrPNlEm9YZsIbio7LOV_kUHeQPDm_cguVFE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=108&crop=smart&auto=webp&s=94bde055bb23f56bf4435dd48cdd67ac004047c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=216&crop=smart&auto=webp&s=dd31d6fbe69a14f3d8a364b76691423aaeddeb98', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=320&crop=smart&auto=webp&s=857233da2ec05cb139782371330152b1b345112e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=640&crop=smart&auto=webp&s=03d8c19b4f8528a22fe82f93610c49e12edac5a4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=960&crop=smart&auto=webp&s=048cf71d9f18c0d6e9886fa7551524d2a4a09060', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?width=1080&crop=smart&auto=webp&s=66a5592b0d7377d9e5eb86867a446c1407524169', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LUWPRYVOciUJ48L73KckvO0xvxcEDfER5j_R7LwvqHE.jpg?auto=webp&s=a96e2ec5040babe13662a789bb11003d4723c2d5', 'width': 1200}, 'variants': {}}]}
|
|
Looking for some high quality tts for cloning a voice
| 1 |
[removed]
| 2025-04-14T13:43:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyz6vk/looking_for_some_high_quality_tts_for_cloning_a/
|
No_Chair9618
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyz6vk
| false | null |
t3_1jyz6vk
|
/r/LocalLLaMA/comments/1jyz6vk/looking_for_some_high_quality_tts_for_cloning_a/
| false | false |
self
| 1 | null |
DGX B200 Startup ASMR
| 276 |
We just installed one of these beasts in our datacenter. Since I could not find a video that shows one of these machines running with original sound here you go!
Thats probably \~110dB of fan noise given that the previous generation was at around 106dB according to Nvidia. Cooling 1kW GPUs seems to be no joke given that this machine sounds like a fighter jet starting its engines next to you :D
| 2025-04-14T14:00:52 |
https://v.redd.it/yy6c2lvz3tue1
|
Chemical-Mixture3481
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyzl0g
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yy6c2lvz3tue1/DASHPlaylist.mpd?a=1747231275%2CY2U5MzU0MGVjNTY2ZGJmZGFiN2U3ZmYzNDk1YWNhMmUyMWEyMDIxZmFiOTU3NTU1NTg4YTQ0OWYzMGI5ZjBkYQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/yy6c2lvz3tue1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/yy6c2lvz3tue1/HLSPlaylist.m3u8?a=1747231275%2CYjMyZmNjYWZkMTg1N2JlM2EyMWJhZGYxOGViYmQ5OTYzY2MxNzk5ZjAzYjQ0N2Y5ZjlkOGY0MjdmZmZhYjYxYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yy6c2lvz3tue1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1jyzl0g
|
/r/LocalLLaMA/comments/1jyzl0g/dgx_b200_startup_asmr/
| false | false | 276 |
{'enabled': False, 'images': [{'id': 'YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H', 'resolutions': [{'height': 190, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=108&crop=smart&format=pjpg&auto=webp&s=9a4690a8afb1f31d17308878788e6ad3390d3e3d', 'width': 108}, {'height': 381, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=216&crop=smart&format=pjpg&auto=webp&s=005321f6d7665d0b719d3e463997912fa4d5d110', 'width': 216}, {'height': 564, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=320&crop=smart&format=pjpg&auto=webp&s=88548e84e22e88776ecc1c709122a62e0caadc0a', 'width': 320}, {'height': 1129, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=640&crop=smart&format=pjpg&auto=webp&s=1a7e2f2548348aedd67cbcf036b2216b7c8b3d24', 'width': 640}, {'height': 1694, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=960&crop=smart&format=pjpg&auto=webp&s=f371d11818cdd4d42d66205330de00c12c4df708', 'width': 960}, {'height': 1905, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6aa1bf94df2f1be4d1d9e4e6dd2c6988ac74169b', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/YTF4eTdsdnozdHVlMTsTLvzMSe_uV5dg8VNzSYJEyMCa9wyDSSGv4dzqg19H.png?format=pjpg&auto=webp&s=170a5cc75de00b1a0323b3a566a302120f6f4a61', 'width': 1088}, 'variants': {}}]}
|
|
What do I need to deploy my own LLM
| 9 |
Hey guys!
I was wondering the hardware requirements to deploy a local LLM. Is there a table or a websites that compare different LLMs in terms of RAM and GPU requirements, inference time and electrical power required to run it?
This is considering a pre-trained model only used for inference.
Thank you for the help!
| 2025-04-14T14:12:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyzuxv/what_do_i_need_to_deploy_my_own_llm/
|
Vinser_98
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyzuxv
| false | null |
t3_1jyzuxv
|
/r/LocalLLaMA/comments/1jyzuxv/what_do_i_need_to_deploy_my_own_llm/
| false | false |
self
| 9 | null |
New Tutorial on GitHub - Build an AI Agent with MCP
| 39 |
This tutorial walks you through: Building your own MCP server with real tools (like crypto price lookup) Connecting it to Claude Desktop and also creating your own custom agent Making the agent reason when to use which tool, execute it, and explain the result what's inside:
* Practical Implementation of MCP from Scratch
* End-to-End Custom Agent with Full MCP Stack
* Dynamic Tool Discovery and Execution Pipeline
* Seamless Claude 3.5 Integration
* Interactive Chat Loop with Stateful Context
* Educational and Reusable Code Architecture
Link to the tutorial:
[https://github.com/NirDiamant/GenAI\_Agents/blob/main/all\_agents\_tutorials/mcp-tutorial.ipynb](https://github.com/NirDiamant/GenAI_Agents/blob/main/all_agents_tutorials/mcp-tutorial.ipynb)
enjoy :)
| 2025-04-14T14:13:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jyzvcr/new_tutorial_on_github_build_an_ai_agent_with_mcp/
|
Nir777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jyzvcr
| false | null |
t3_1jyzvcr
|
/r/LocalLLaMA/comments/1jyzvcr/new_tutorial_on_github_build_an_ai_agent_with_mcp/
| false | false |
self
| 39 |
{'enabled': False, 'images': [{'id': 'qtwkttVOIaaxigfnhQzkKwBafwFg9rYWq4qGR70kCb4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=108&crop=smart&auto=webp&s=e6f6bf226d3fe33c4a42c497ecb2e93789640169', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=216&crop=smart&auto=webp&s=1f89e724d8d4b2c5147a3be8862df6026e958d31', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=320&crop=smart&auto=webp&s=4a4c094e14df16818526130079994b2ccf1a2375', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=640&crop=smart&auto=webp&s=44881d13745e67485e437b1699ec612fd20106d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=960&crop=smart&auto=webp&s=245dc3c2be39a6a33eea13125825e051a98001b5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?width=1080&crop=smart&auto=webp&s=c10832eae9c2f31cc0d6494affce7978e6b84742', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1cKrVra1dY8ARu9QO9iRiKYLEF2AEfFlyppNIiJ5Sak.jpg?auto=webp&s=c5e2bdcdfcb7ff3ae89e19530e2b87d24803818b', 'width': 1200}, 'variants': {}}]}
|
Dataset sizes for LoRa fine tuning (phi4)
| 1 |
Hi all, I have quite a bit of experience on the image generation side of things and training LoRa’s on subject generation but I’m still learning about text generation. I’m curious what typical dataset sizes look like for training LoRas for LLMs. For example, if I want to train a LoRa for a phi4 model to do a fairly simple summarization task.
I would provide it the most recent score on a questionnaire, as well as a previous one if this isn’t the first time the person fills out the questionnaire. It would look something like:
Question: “Over the past month, how would you rate your financial situation?
Response: Poor
Previous response: Neutral
And I’d be looking to generate an output like:
It seems like your financial situation has gotten worse since your previous questionnaire. Is that correct?
Out of the box the model is good at this for simple questions like this, but often trips up with things like double negatives or framing the summarization properly if the questions are written in the first person (ex: Over the past my financial situation could be described as…).
| 2025-04-14T14:29:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz08ye/dataset_sizes_for_lora_fine_tuning_phi4/
|
putinwhat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz08ye
| false | null |
t3_1jz08ye
|
/r/LocalLLaMA/comments/1jz08ye/dataset_sizes_for_lora_fine_tuning_phi4/
| false | false |
self
| 1 | null |
LLM and embedding for deep qualitative research (semantical)
| 1 |
[removed]
| 2025-04-14T14:30:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz0a14/llm_and_embedding_for_deep_qualitative_research/
|
mariagilda
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz0a14
| false | null |
t3_1jz0a14
|
/r/LocalLLaMA/comments/1jz0a14/llm_and_embedding_for_deep_qualitative_research/
| false | false |
self
| 1 | null |
Offline Evaluation: Necessary But Not Sufficient For Real-World AI Assessment
| 1 |
[removed]
| 2025-04-14T14:31:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz0b4a/offline_evaluation_necessary_but_not_sufficient/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz0b4a
| false | null |
t3_1jz0b4a
|
/r/LocalLLaMA/comments/1jz0b4a/offline_evaluation_necessary_but_not_sufficient/
| false | false | 1 | null |
|
Offline Evaluation: Necessary But Not Sufficient For Real-World AI Assessment
| 1 |
[removed]
| 2025-04-14T14:33:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz0crq/offline_evaluation_necessary_but_not_sufficient/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz0crq
| false | null |
t3_1jz0crq
|
/r/LocalLLaMA/comments/1jz0crq/offline_evaluation_necessary_but_not_sufficient/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'o8c3vvT14g-6JGwISRwovwVhy0f4FGLcO3CF3c1252k', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/OC40jqk1xLsV-NnowbHYmbhYbQGEVBSiz2Hi98oTuec.jpg?width=108&crop=smart&auto=webp&s=3984feea6f0d9ac9bbaa2d10d8822bd76ac88635', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/OC40jqk1xLsV-NnowbHYmbhYbQGEVBSiz2Hi98oTuec.jpg?width=216&crop=smart&auto=webp&s=cb8efeab0648071482df7b195e5f096458e1aad2', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/OC40jqk1xLsV-NnowbHYmbhYbQGEVBSiz2Hi98oTuec.jpg?width=320&crop=smart&auto=webp&s=ef1fde1e16e6410f649a48d1d5eb09bba89a5b8f', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/OC40jqk1xLsV-NnowbHYmbhYbQGEVBSiz2Hi98oTuec.jpg?width=640&crop=smart&auto=webp&s=6ff626225a68ed5071b78e25a39ac127cc469cab', 'width': 640}], 'source': {'height': 1422, 'url': 'https://external-preview.redd.it/OC40jqk1xLsV-NnowbHYmbhYbQGEVBSiz2Hi98oTuec.jpg?auto=webp&s=b47c15f5eae6132ef3bc0e6b7fd7f14fa816bb45', 'width': 800}, 'variants': {}}]}
|
I'll miss Claude 2 when it retires.
| 0 |
I feel like it is the *last* LLM without ChatGPT slop in it. It sounds so natural and humanlike. I'll miss it when it's gone. I'm really disappointed in the state of LLMs now, due to the presence of ChatGPT data in them. It's so unbearable.
If anyone knows any LLMs that don't have this slop, do tell because I'm tired of it.
| 2025-04-14T14:43:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz0l3t/ill_miss_claude_2_when_it_retires/
|
TheSerbianRebel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz0l3t
| false | null |
t3_1jz0l3t
|
/r/LocalLLaMA/comments/1jz0l3t/ill_miss_claude_2_when_it_retires/
| false | false |
self
| 0 | null |
Running 50+ LLMs per GPU with sub-5s snapshot load times . anyone exploring model scheduling like this?
| 1 |
[removed]
| 2025-04-14T15:07:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz15sv/running_50_llms_per_gpu_with_sub5s_snapshot_load/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz15sv
| false | null |
t3_1jz15sv
|
/r/LocalLLaMA/comments/1jz15sv/running_50_llms_per_gpu_with_sub5s_snapshot_load/
| false | false |
self
| 1 | null |
Which is Better: DGX Spark(ASUS Ascent GX10) * 2 or M3 Ultra 256GB
| 1 |
[removed]
| 2025-04-14T15:28:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz1odh/which_is_better_dgx_sparkasus_ascent_gx10_2_or_m3/
|
CombinationEnough314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz1odh
| false | null |
t3_1jz1odh
|
/r/LocalLLaMA/comments/1jz1odh/which_is_better_dgx_sparkasus_ascent_gx10_2_or_m3/
| false | false |
self
| 1 | null |
NVIDIA has published new Nemotrons!
| 210 |
what a week....!
[https://huggingface.co/nvidia/Nemotron-H-56B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-56B-Base-8K)
[https://huggingface.co/nvidia/Nemotron-H-47B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-47B-Base-8K)
[https://huggingface.co/nvidia/Nemotron-H-8B-Base-8K](https://huggingface.co/nvidia/Nemotron-H-8B-Base-8K)
| 2025-04-14T15:29:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz1oxv/nvidia_has_published_new_nemotrons/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz1oxv
| false | null |
t3_1jz1oxv
|
/r/LocalLLaMA/comments/1jz1oxv/nvidia_has_published_new_nemotrons/
| false | false |
self
| 210 |
{'enabled': False, 'images': [{'id': '21OlSChJd_ryVaoRGpfQrH-m4iRC5j6vSPWZxsp53aI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=108&crop=smart&auto=webp&s=e0fc9926d09ddb030bcf3e791502c798fa4e2181', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=216&crop=smart&auto=webp&s=397618aaf4ec494f0a41c82913efd33448d1be27', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=320&crop=smart&auto=webp&s=0f62b00072faabdfdc2a1ab721590ab66257d6d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=640&crop=smart&auto=webp&s=e07d9bd25b20f658aaba1e413f3425b40eea7b8b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=960&crop=smart&auto=webp&s=009c352df2a328870da67961c46ecd1c0d1d9c89', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?width=1080&crop=smart&auto=webp&s=60bfb43a83bdbf87a5d7c9c3bd5dd773e15dacd6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8mzPbnKeIxxyO_IKiOpBsQ0XmDUDlgzszlbvzJe7WfM.jpg?auto=webp&s=3035c78a7d569cb628866662a6882903dae9fd7a', 'width': 1200}, 'variants': {}}]}
|
DGX Spark(Ascent GX10) vs M3 Ultra 256GB
| 1 |
[removed]
| 2025-04-14T15:38:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz1wrn/dgx_sparkascent_gx10_vs_m3_ultra_256gb/
|
CombinationEnough314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz1wrn
| false | null |
t3_1jz1wrn
|
/r/LocalLLaMA/comments/1jz1wrn/dgx_sparkascent_gx10_vs_m3_ultra_256gb/
| false | false |
self
| 1 | null |
Suggest me best Speech Language Models
| 2 |
I'm currently exploring speech language models available on the market for my project. I'd appreciate any recommendations or insights you might have. Thanks!
| 2025-04-14T15:48:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz25lm/suggest_me_best_speech_language_models/
|
Ai_Peep
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz25lm
| false | null |
t3_1jz25lm
|
/r/LocalLLaMA/comments/1jz25lm/suggest_me_best_speech_language_models/
| false | false |
self
| 2 | null |
Run Local LLMs in Google Colab for FREE — with GPU Acceleration & Public API Access! 💻🧠🚀
| 9 |
Hey folks! 👋
I just published a Colab notebook that lets you run local LLM models (like LLaMA3, Qwen, Mistral, etc.) for free in Google Colab using GPU acceleration — and the best part? It exposes the model through a public API using Cloudflare, so you can access it remotely from anywhere (e.g., with curl, Postman, or VS Code ROO Code extension).
No need to pay for a cloud VM or deal with Docker installs — it's plug & play!
🔗 GitHub Repo: [https://github.com/enescingoz/colab-llm](https://github.com/enescingoz/colab-llm)
# 🧩 Features:
* 🧠 Run local models (e.g., qwen2.5-coder, llama3) using [Ollama](https://ollama.com/)
* 🚀 Free Colab GPU support (T4 High-RAM recommended)
* 🌐 Public access with Cloudflared tunnel
* 🛠️ Easy to connect with ROO Code or your own scripts
* 📄 Full README and step-by-step instructions included
Let me know if you try it out, or if you'd like help running your own model! 🔥
| 2025-04-14T15:53:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz2am6/run_local_llms_in_google_colab_for_free_with_gpu/
|
evoura
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2am6
| false | null |
t3_1jz2am6
|
/r/LocalLLaMA/comments/1jz2am6/run_local_llms_in_google_colab_for_free_with_gpu/
| false | false |
self
| 9 |
{'enabled': False, 'images': [{'id': 'zZzME25wADG2y_TeJVCMq6oYZd7rDJTK1inxvVxciac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=108&crop=smart&auto=webp&s=052a7dc58228edff3922b23058be7dd8a6b1e122', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=216&crop=smart&auto=webp&s=4ad5c10fee976c914ceb801f38f740842acb6b71', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=320&crop=smart&auto=webp&s=3b8907e0537b78fc5a0f59bbee31ff62bd744b07', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=640&crop=smart&auto=webp&s=3aaa276fa984cfa3580683ef392a813faa74f449', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=960&crop=smart&auto=webp&s=f8006d1cc7a9a688c3f401d83c32b64c680ae021', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?width=1080&crop=smart&auto=webp&s=ac3b1be15a88110cbab78010d95f8e23c21b9ebc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jbhLmrh_LCilos-CHoriN5n1Lo_hc3AQakHZKnsC2IQ.jpg?auto=webp&s=8eb00a497d94badbd8bc9fd42e16332c483822d3', 'width': 1200}, 'variants': {}}]}
|
Building A Simple MCP Server: Step by Step Guide
| 14 |
MCP, or Model Context Protocol, is a groundbreaking framework that is rapidly gaining traction in the AI and large language model (LLM) community. It acts as a universal connector for AI systems, enabling seamless integration with external resources, APIs, and services. Think of MCP as a standardized protocol that allows LLMs to interact with tools and data sources in a consistent and efficient way, much like how USB-C works for devices.
In this tutorial, we will build our own MCP server using the Yahoo Finance Python API to fetch real-time stock prices, compare them, and provide historical analysis. This project is beginner-friendly, meaning you only need a basic understanding of Python to complete it.
[https://www.kdnuggets.com/building-a-simple-mcp-server](https://www.kdnuggets.com/building-a-simple-mcp-server)
| 2025-04-14T15:55:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz2cj6/building_a_simple_mcp_server_step_by_step_guide/
|
kingabzpro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2cj6
| false | null |
t3_1jz2cj6
|
/r/LocalLLaMA/comments/1jz2cj6/building_a_simple_mcp_server_step_by_step_guide/
| false | false |
self
| 14 | null |
Dolphin translator incoming (eventually)
| 11 | 2025-04-14T15:59:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz2g1b/dolphin_translator_incoming_eventually/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2g1b
| false | null |
t3_1jz2g1b
|
/r/LocalLLaMA/comments/1jz2g1b/dolphin_translator_incoming_eventually/
| false | false | 11 | null |
||
glm-4 0414 is out. 9b, 32b, with and without reasoning and rumination
| 292 |
[https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e](https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e)
6 new models and interesting benchmarks
>**GLM-Z1-32B-0414** is a reasoning model with deep thinking capabilities. This was developed based on GLM-4-32B-0414 through cold start, extended reinforcement learning, and further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. During training, we also introduced general reinforcement learning based on pairwise ranking feedback, which enhances the model's general capabilities.
>**GLM-Z1-Rumination-32B-0414** is a deep reasoning model with rumination capabilities (against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model is capable of deeper and longer thinking to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). Z1-Rumination is trained through scaling end-to-end reinforcement learning with responses graded by the ground truth answers or rubrics and can make use of search tools during its deep thinking process to handle complex tasks. The model shows significant improvements in research-style writing and complex tasks.
>Finally, **GLM-Z1-9B-0414** is a surprise. We employed all the aforementioned techniques to train a small model (9B). GLM-Z1-9B-0414 exhibits excellent capabilities in mathematical reasoning and general tasks. Its overall performance is top-ranked among all open-source models of the same size. Especially in resource-constrained scenarios, this model achieves an excellent balance between efficiency and effectiveness, providing a powerful option for users seeking lightweight deployment.
| 2025-04-14T16:02:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz2iuc/glm4_0414_is_out_9b_32b_with_and_without/
|
matteogeniaccio
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2iuc
| false | null |
t3_1jz2iuc
|
/r/LocalLLaMA/comments/1jz2iuc/glm4_0414_is_out_9b_32b_with_and_without/
| false | false |
self
| 292 |
{'enabled': False, 'images': [{'id': 'cdE-sEOnlSrS4cSPJTU_wSWuuPbZrg6PCUPtobBPvHc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=108&crop=smart&auto=webp&s=0667f8a7dc0f5384f91369fae10caae8b0cf9112', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=216&crop=smart&auto=webp&s=27b96e4b49159b3d7a8deb67bc354bc0061a2e22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=320&crop=smart&auto=webp&s=b77ce9153c0c828754fefd16ba62d9a2a14a41c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=640&crop=smart&auto=webp&s=d2f468a7cb4f1f63cdc2c35347ae9f9d3abd7d3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=960&crop=smart&auto=webp&s=9c17d58e3acc1963bb54970b913b7058f6126161', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=1080&crop=smart&auto=webp&s=fa55d17645eccc8a41b935cc745203a7ddd99e07', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?auto=webp&s=1cedfa7cfb3183c4600b1dde931b823e871e5895', 'width': 1200}, 'variants': {}}]}
|
Shisa V2 - a family of new JA/EN bilingual models
| 30 |
It's hard to believe it was [only about a year and a half ago when we first released Shisa 7B](https://www.reddit.com/r/LocalLLaMA/comments/18cwh4n/shisa_7b_a_new_jaen_bilingual_model_based_on/). Since then, the quality of Japanese output from open LLMs has improved dramatically... but, still it could be better!
I'm happy to announce the release of [Shisa V2](https://shisa.ai/posts/shisa-v2/), the latest generation of our JA/EN models. We worked for months, running hundreds of test runs to improve performance, and it turns out that applying our final data/training recipe was able to improve Japanese output quality on basically every single model we tried, so, uh here's a bunch:
|License|Model Name|Parameters|Context Length|JA AVG|EN AVG|
|:-|:-|:-|:-|:-|:-|
|Apache 2.0|[shisa-v2-qwen2.5-7b](https://huggingface.co/shisa-ai/shisa-v2-qwen2.5-7b)|7B|128K/8K|71.06|54.86|
|Llama 3.1|[shisa-v2-llama3.1-8b](https://huggingface.co/shisa-ai/shisa-v2-llama3.1-8b)|8B|128K|70.83|54.75|
|Apache 2.0|[shisa-v2-mistral-nemo-12b](https://huggingface.co/shisa-ai/shisa-v2-mistral-nemo-12b)|12B|128K|72.83|53.33|
|MIT|[shisa-v2-unphi4-14b](https://huggingface.co/shisa-ai/shisa-v2-unphi4-14b)|14B|16K|75.89|60.10|
|Apache 2.0|[shisa-v2-qwen2.5-32b](https://huggingface.co/shisa-ai/shisa-v2-qwen2.5-32b)|32B|128K/8K|76.97|67.41|
|Llama 3.3|[shisa-v2-llama3.3-70b](https://huggingface.co/shisa-ai/shisa-v2-llama3.3-70b)|70B|128K|79.72|67.71|
These models are near or at SOTA for their respective size classes, and we maintain or even improve EN (MixEval, LiveBench, IFEval) perf as well:
[Not bad!](https://preview.redd.it/vj468u83otue1.png?width=5400&format=png&auto=webp&s=87439889b0868b7dd5b10b26ccad099e13fd074b)
Here's an interesting chart showing how our tune improves Japanese eval scores on top of the base models:
[Shisa V2 Improvement vs Base Models](https://preview.redd.it/d8k72rm9otue1.png?width=3600&format=png&auto=webp&s=93a1a3a62f935404c8f98a126c0b2f1dc0682011)
So even though baseline Japanese capabilities have improved greatly, applying additional training is still worthwhile.
During development, we also made a few new evals to track important, previously unmeasured downstream use cases:
* shisa-jp-ifeval: - Advanced instruction-following tasks in Japanese
* shisa-jp-rp-bench: - Personas, role-play, and multi-turn conversational capabilities
* shisa-jp-tl-bench: - High-quality Japanese-English translation proficiency
We'll be open sourcing these soon (code cleanup, once we get some sleep) to help make JA models better at these tasks.
These models are freshly baked, and we haven't had a lot of real world testing done yet, so welcome any real world feedback/testing from the community.
[Shisa V2!](https://preview.redd.it/rfk5tc2wptue1.jpg?width=1024&format=pjpg&auto=webp&s=d078fc6c1a3cf83ebdc4d8480a9821a2a983b603)
(btw for those interested in technical details, be sure to take a look at our model card for the nerdy stuff)
| 2025-04-14T16:05:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/
|
randomfoo2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2lll
| false | null |
t3_1jz2lll
|
/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/
| false | false | 30 | null |
|
GLM-4-0414 - a THUDM Collection
| 65 | 2025-04-14T16:15:11 |
https://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz2tol
| false | null |
t3_1jz2tol
|
/r/LocalLLaMA/comments/1jz2tol/glm40414_a_thudm_collection/
| false | false |
default
| 65 |
{'enabled': False, 'images': [{'id': 'cdE-sEOnlSrS4cSPJTU_wSWuuPbZrg6PCUPtobBPvHc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=108&crop=smart&auto=webp&s=0667f8a7dc0f5384f91369fae10caae8b0cf9112', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=216&crop=smart&auto=webp&s=27b96e4b49159b3d7a8deb67bc354bc0061a2e22', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=320&crop=smart&auto=webp&s=b77ce9153c0c828754fefd16ba62d9a2a14a41c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=640&crop=smart&auto=webp&s=d2f468a7cb4f1f63cdc2c35347ae9f9d3abd7d3e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=960&crop=smart&auto=webp&s=9c17d58e3acc1963bb54970b913b7058f6126161', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?width=1080&crop=smart&auto=webp&s=fa55d17645eccc8a41b935cc745203a7ddd99e07', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CbrIZBC-MoAMjzDIDvad-loXR06ele61H5F_oGZcxJQ.jpg?auto=webp&s=1cedfa7cfb3183c4600b1dde931b823e871e5895', 'width': 1200}, 'variants': {}}]}
|
|
What is your LLM daily runner ? (Poll)
| 27 |
[View Poll](https://www.reddit.com/poll/1jz30i1)
| 2025-04-14T16:22:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz30i1/what_is_your_llm_daily_runner_poll/
|
Nexter92
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz30i1
| false | null |
t3_1jz30i1
|
/r/LocalLLaMA/comments/1jz30i1/what_is_your_llm_daily_runner_poll/
| false | false |
self
| 27 | null |
We built an autonomous debugging agent. Here’s how it grokked a $100 bug
| 8 |
Everyone’s looking at MCP as a way to connect LLMs to tools.
**What about connecting LLMs to each other?**
**Deebo** is an autonomous debugging agent MCP server. It runs as a local daemon—your LLM coding agent can spin up a session with Deebo, offload a tricky bug, and let Deebo handle it asynchronously.
Here’s what it does:
* Spawns multiple subprocesses, each with a unique fix hypothesis
* Each scenario runs in a clean git branch, totally isolated
* A “mother agent” loops, tests, reasons, and returns a diagnosis with logs + a proposed patch
We tested it on a real $100 bounty in **tinygrad** (test\_failure\_53) and it:
* Identified GROUPTOP + uchar reduction as the problem
* Proposed two concrete fixes
* Passed the test (PR pending)
It didn’t regurgitate StackOverflow—it **grokked** the bug.
👉 [Here’s the repo](https://github.com/snagasuri/deebo-prototype)
Would love feedback from devs building agents, debugging AI, or working on LLM infra.
| 2025-04-14T16:23:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz316d/we_built_an_autonomous_debugging_agent_heres_how/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz316d
| false | null |
t3_1jz316d
|
/r/LocalLLaMA/comments/1jz316d/we_built_an_autonomous_debugging_agent_heres_how/
| false | false |
self
| 8 | null |
Running 50+ LLMs per GPU with sub-5s snapshot load times — anyone exploring model scheduling like this?
| 0 |
[removed]
| 2025-04-14T16:32:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz395j/running_50_llms_per_gpu_with_sub5s_snapshot_load/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz395j
| false | null |
t3_1jz395j
|
/r/LocalLLaMA/comments/1jz395j/running_50_llms_per_gpu_with_sub5s_snapshot_load/
| false | false |
self
| 0 | null |
Hi
| 1 |
[deleted]
| 2025-04-14T16:34:33 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3apv
| false | null |
t3_1jz3apv
|
/r/LocalLLaMA/comments/1jz3apv/hi/
| false | false |
default
| 1 | null |
||
Fiction.liveBench updated with Optimus Alpha, looks optimized for cost?
| 3 | 2025-04-14T16:35:44 |
fictionlive
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3bre
| false | null |
t3_1jz3bre
|
/r/LocalLLaMA/comments/1jz3bre/fictionlivebench_updated_with_optimus_alpha_looks/
| false | false |
default
| 3 |
{'enabled': True, 'images': [{'id': '6je0bmqpvtue1', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=108&crop=smart&auto=webp&s=6a2e38cfcf074704fbeb5a725aa29bf1bc44addb', 'width': 108}, {'height': 251, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=216&crop=smart&auto=webp&s=262a67ed2e303b366201df130689105b2bfd0eba', 'width': 216}, {'height': 372, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=320&crop=smart&auto=webp&s=6e14464b3e1de21a8e29339b69043cc3f6f8b619', 'width': 320}, {'height': 745, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=640&crop=smart&auto=webp&s=34f597c7952f2bcf9a011cb8d670825f357a600a', 'width': 640}, {'height': 1118, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=960&crop=smart&auto=webp&s=bd5e91929cfc9799b6151e4c7a68aa14db219f13', 'width': 960}, {'height': 1258, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?width=1080&crop=smart&auto=webp&s=1d91c0bdfe2b338bc74a2cba48d19da43ea6121f', 'width': 1080}], 'source': {'height': 2296, 'url': 'https://preview.redd.it/6je0bmqpvtue1.png?auto=webp&s=7a157ec6aac679a1eb19dc62d21fdb51a9d3d5c9', 'width': 1970}, 'variants': {}}]}
|
||
Any local models that create / modify files like claude ?
| 1 |
[removed]
| 2025-04-14T16:36:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3cgw/any_local_models_that_create_modify_files_like/
|
EfficientCoconut2739
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3cgw
| false | null |
t3_1jz3cgw
|
/r/LocalLLaMA/comments/1jz3cgw/any_local_models_that_create_modify_files_like/
| false | false |
self
| 1 | null |
new GLM models available
| 2 |
finally 32B!
[https://huggingface.co/THUDM/GLM-4-32B-0414](https://huggingface.co/THUDM/GLM-4-32B-0414)
[https://huggingface.co/THUDM/GLM-4-9B-0414](https://huggingface.co/THUDM/GLM-4-9B-0414)
[https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414)
[https://huggingface.co/THUDM/GLM-Z1-32B-0414](https://huggingface.co/THUDM/GLM-Z1-32B-0414)
| 2025-04-14T16:39:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3ew2/new_glm_models_available/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3ew2
| false | null |
t3_1jz3ew2
|
/r/LocalLLaMA/comments/1jz3ew2/new_glm_models_available/
| false | false |
self
| 2 | null |
Running 50+ LLMs per GPU with sub-5s snapshot load times — anyone exploring model scheduling like this for LLaMA models?
| 0 |
[removed]
| 2025-04-14T16:41:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3gkm/running_50_llms_per_gpu_with_sub5s_snapshot_load/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3gkm
| false | null |
t3_1jz3gkm
|
/r/LocalLLaMA/comments/1jz3gkm/running_50_llms_per_gpu_with_sub5s_snapshot_load/
| false | false |
self
| 0 | null |
GLM-4-0414 Series Model Released!
| 84 |
Based on official data, does GLM-4-32B-0414 outperform DeepSeek-V3-0324 and DeepSeek-R1?
Github Repo: [github.com/THUDM/GLM-4](http://github.com/THUDM/GLM-4)
HuggingFace: [huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e](http://huggingface.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e)
| 2025-04-14T16:41:53 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3gzd
| false | null |
t3_1jz3gzd
|
/r/LocalLLaMA/comments/1jz3gzd/glm40414_series_model_released/
| false | false | 84 |
{'enabled': True, 'images': [{'id': 'arJfXb5-Wx0u_OlJIc4syKu8CCAPg_6at6YcI54qfpI', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/sr09xoehwtue1.png?width=108&crop=smart&auto=webp&s=634229be8f1ac38725e74a5b943b68a3acc1d8e9', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/sr09xoehwtue1.png?width=216&crop=smart&auto=webp&s=8dc9d9a5b8bf6e8a1bb15a8e84f4ab3e0a0cfe2b', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/sr09xoehwtue1.png?width=320&crop=smart&auto=webp&s=413d5f5eaf0c9bcee7c43192070ff10e92473017', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/sr09xoehwtue1.png?width=640&crop=smart&auto=webp&s=5be750e141b5100afa3a2f71eb779ee767f9fe3c', 'width': 640}], 'source': {'height': 675, 'url': 'https://preview.redd.it/sr09xoehwtue1.png?auto=webp&s=2365fbe431f32eef51731591cdcf93f5a3fc1454', 'width': 836}, 'variants': {}}]}
|
||
Offline Evals: Necessary But Not Sufficient for Real-World Assessment
| 1 |
[removed]
| 2025-04-14T16:43:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3i12/offline_evals_necessary_but_not_sufficient_for/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3i12
| false | null |
t3_1jz3i12
|
/r/LocalLLaMA/comments/1jz3i12/offline_evals_necessary_but_not_sufficient_for/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'R3Iu3v1sIeJBtVD1Bma9i5SCmNkDjywBTkKU8SGinHw', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?width=108&crop=smart&auto=webp&s=887fc225f611dbe4bcd1cfb9625b5cc3b88b8f0f', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?width=216&crop=smart&auto=webp&s=f8ee59e3de76013242bb636612c8f4e4bcd244fe', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?width=320&crop=smart&auto=webp&s=41695d98d6b234a1d5f51a7a1fe0081d970a5d43', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?width=640&crop=smart&auto=webp&s=c5265c19be0bdbb65ad956cacfd1e58b23f77c1b', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?width=960&crop=smart&auto=webp&s=169cf23f2eafb8bbbea41d956fa5c80d3d626abb', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qO_yWM-oX79T91KWCCGDOTp0wFmH6pCY3nanZz4-lh4.jpg?auto=webp&s=ba474feabdd5fb3b0c6e437847fde1ef1caba7b0', 'width': 1024}, 'variants': {}}]}
|
|
What would happen if you were to fine-tune a model on 3 entirely different datasets?
| 1 |
[removed]
| 2025-04-14T16:43:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3itd/what_would_happen_if_you_were_to_finetune_a_model/
|
christian7670
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3itd
| false | null |
t3_1jz3itd
|
/r/LocalLLaMA/comments/1jz3itd/what_would_happen_if_you_were_to_finetune_a_model/
| false | false |
self
| 1 | null |
Local LLaMA workflows: anyone snapshotting models to load on-demand?
| 0 |
[removed]
| 2025-04-14T16:44:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3jc1/local_llama_workflows_anyone_snapshotting_models/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3jc1
| false | null |
t3_1jz3jc1
|
/r/LocalLLaMA/comments/1jz3jc1/local_llama_workflows_anyone_snapshotting_models/
| false | false |
self
| 0 | null |
Hi
| 1 |
[deleted]
| 2025-04-14T16:45:00 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3jre
| false | null |
t3_1jz3jre
|
/r/LocalLLaMA/comments/1jz3jre/hi/
| false | false |
default
| 1 | null |
||
Running 50+ LLMs per GPU with sub-5s snapshot load times — anyone exploring model scheduling like this for LLaMA models?
| 1 |
[removed]
| 2025-04-14T16:50:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3o66/running_50_llms_per_gpu_with_sub5s_snapshot_load/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3o66
| false | null |
t3_1jz3o66
|
/r/LocalLLaMA/comments/1jz3o66/running_50_llms_per_gpu_with_sub5s_snapshot_load/
| false | false |
self
| 1 | null |
QGen Studio: An Adaptive Question-Answer Generation, Training and Evaluation Platform
| 1 |
[removed]
| 2025-04-14T16:51:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3p4c/qgen_studio_an_adaptive_questionanswer_generation/
|
nlpeople
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3p4c
| false | null |
t3_1jz3p4c
|
/r/LocalLLaMA/comments/1jz3p4c/qgen_studio_an_adaptive_questionanswer_generation/
| false | false |
self
| 1 | null |
Beware: Music is now also being poisoned
| 1 | 2025-04-14T16:53:47 |
https://www.youtube.com/watch?v=xMYm2d9bmEA
|
MaruluVR
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3rie
| false |
{'oembed': {'author_name': 'Benn Jordan', 'author_url': 'https://www.youtube.com/@BennJordan', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/xMYm2d9bmEA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The Art Of Poison-Pilling Music Files"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/xMYm2d9bmEA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The Art Of Poison-Pilling Music Files', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1jz3rie
|
/r/LocalLLaMA/comments/1jz3rie/beware_music_is_now_also_being_poisoned/
| false | false |
default
| 1 | null |
|
DDR4 vs. DDR5 for fine-tuning (4x3090)
| 14 |
I'm building a fine-tuning capable system and I can't find any info. How important is CPU RAM speed for fine-tuning? I've looked at Geohot's Tinybox and they use dual CPU with DDR5. Most of the other training-focused builds use DDR5.
DDR5 is quite expensive, almost double DDR4. Also, Rome/Milan based CPU's are cheaper than Genoa and newer, albeit not that much. Most of the saving would be in the RAM.
How important are RAM speeds for training? I know that inference is VRAM bound, so I'm not planning to do CPU based inference (beyond simple tests/PoCs).
| 2025-04-14T16:55:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3syk/ddr4_vs_ddr5_for_finetuning_4x3090/
|
Traditional-Gap-3313
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3syk
| false | null |
t3_1jz3syk
|
/r/LocalLLaMA/comments/1jz3syk/ddr4_vs_ddr5_for_finetuning_4x3090/
| false | false |
self
| 14 | null |
Anyone snapshotting local LLaMA models for fast swap-in/swap-out?
| 1 |
Just following up on my earlier post .
we’ve been testing a way to pause and resume LLaMA models locally with ~2s load times.
Feels kind of like process scheduling: start, pause, resume , instead of keeping everything loaded in memory.
Curious if anyone else is optimizing local setups like this?
| 2025-04-14T16:56:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3tkv/anyone_snapshotting_local_llama_models_for_fast/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3tkv
| false | null |
t3_1jz3tkv
|
/r/LocalLLaMA/comments/1jz3tkv/anyone_snapshotting_local_llama_models_for_fast/
| false | false |
self
| 1 | null |
Opinion: Tunnel vision is a threat to further innovation
| 10 |
# Where this all started at
Earlier today I stumbled upon [this tweet](https://x.com/rasbt/status/1911494805101986135) where a ML researcher describes a logic flaw in the Proximal Policy Optimization algorithm which basically boils down to negative rewards diluting their impact across the token length of a response, which naturally caused LLMs to adopt pointlessly (for the end-user) longer responses to ensure wrong answers were given lower overall penalties.
As better explained by Sebastian Raschka:
>What does the response length have to do with the loss? When the reward is negative, longer responses can dilute the penalty per individual token, which results in lower (i.e., better) loss values (even though the model is still getting the answer wrong).
When I read this, I was in shock. PPO came out in 2017 and reasoning models have been common for many months. How is it possible that companies worth over *4 billion dollars* with thousands of employees failed to catch such a simple and clearly obvious flaw in the logic of the algorithms they entrust their market evaluations upon?
# Game Design 101
The aforementioned issue is what we would call in game design "optimizing the fun out of a game", that is to say, when the reward structure of the game encourages players to play in a way that is unfun.
For example, you might have a movement shooter where the fun is in jumping around guns blazing at the thrill of the moment, but, because (insert resource here, health, ammo, save slots) are limited and enemies are punishing, what ends up happening is that the game encourages players to instead play slow and methodically, draining the fun out of the game. The same concept can be applied here, both humans (as shown by experiments using signal noise to condition the responses of neurons) and machine learning algorithms ultimately both seek to gain the system to maximize positive signals and minimize negative ones.
Game Designers should never blame the player for trying to gain the system, but rather hold themselves accountable for failing to design a game that rewards what is fun and punishes what is not. The same goes for ML algorithms, the fault lies entirely in those that failed to trace the logic and ensure there were no exploits to it.
Now that we've established that even game designers (the lowest of the low) can figure out what's wrong, what does that tell us about these multi-billion corporations that seemingly failed to catch these important issues?
# Hype Moments, Aura Farming, And Tunnel Vision
Sam Altman and others like him spent their time "aura farming" (building a cult of personality) so they can get venture capitalists to fund their "hype moments" (buying 10000 Nvidia GPUs and feeding it all of Z-Library and Reddit).
These companies think in Key Performance Indicators and budget numbers, they think that with enough processing power and engineers they can brute force their way into the next ML breakthrough. But that's just not a good approach.
When your entire team is composed of engineers (and good-for-nothing marketers), you end up directing a project with tunnel vision, unable to see any solution outside of the periphery of shoving more money down Jensen Huang's throat. In the end, this will just result in needlesly high expenses (with their associated environmental issues) all for ever-increasing diminishing returns.
Western companies are so focused on crunching the math and the immediate technical aspects that they entirely forget about the art and underlying design necessary to hold everything together. Like an aeroplane company that places all their resources on ever increasingly more powerful jet engines without ever bothering to check with designers to see if the wings would need adjustment, or with material scientists to ensure their fuselage can even handle the stress.
# 中国世纪
On the other hand, you've got people like Liang Wenfeng from DeepSeek, who understand the value of skillset diversity. You still need qualified engineers, but you also need to be able to think outside the box. Improving what already exists is worthless in the abstract realm of algorithms, there's no reason to refine something when there still exists possible alternatives that could supersede it.
We used to have something similar in the AAA industry, where companies focused too much on hiring general developers to help shorten release cycles, and stuck to only ever refining existing game design formulas. Eventually, the diminishing returns brought them back to their senses and back into very slight innovation.
I doubt that DeepSeek has any game theorists or whatever working at their company, but I am certain that they probably have a lot more people than their western counterparts thinking about the surrounding details of their models (Multi-Head Latent Attention comes to mind as an example) and focusing on "non-let's-throw-more-GPUs-at-the-problem" innovation.
Diverse skillsets that KPIs can't make use of avoid tunnel vision, and a pressure-free environment far away from the board of directors nourishes innovation. Right now it seems like western companies are lacking in either (or both) of these departments, much to everyone's detriment.
# Conclusion
Even though our industries are very different, as a game developer, I certainly know what it's like to see successful studios and projects crushed for the sake of appeasing shareholders that are so short-sighted they can't see their own nose.
| 2025-04-14T17:02:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz3ytd/opinion_tunnel_vision_is_a_threat_to_further/
|
HugoCortell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz3ytd
| false | null |
t3_1jz3ytd
|
/r/LocalLLaMA/comments/1jz3ytd/opinion_tunnel_vision_is_a_threat_to_further/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'NdTxN-vqOlprqohPhnzkp8eVJlO7c-eddWcHKKF08Rg', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=108&crop=smart&auto=webp&s=aa5e44d734c18e52afc4daedcc6e131cca6ddd2d', 'width': 108}, {'height': 167, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=216&crop=smart&auto=webp&s=1c6b1c20be04aa1c8a8ea7d8aa24752fa4257218', 'width': 216}, {'height': 248, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=320&crop=smart&auto=webp&s=1bebdd398067160a57ca830fb402dc32c928c373', 'width': 320}, {'height': 496, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=640&crop=smart&auto=webp&s=2ccb18665f9d05301df1b424fae97fe80ff7236e', 'width': 640}, {'height': 745, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=960&crop=smart&auto=webp&s=fb47c7e9535c2bbaa98ee7ccdcf4c8249fe548e0', 'width': 960}, {'height': 838, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?width=1080&crop=smart&auto=webp&s=52b39ceba5b49563b5d01c6a5fc45af86818c67c', 'width': 1080}], 'source': {'height': 868, 'url': 'https://external-preview.redd.it/L5b4FDTHISRVloy99Z0CBsvbwK1hGsPHnzEyAU7X0og.jpg?auto=webp&s=b7fe084645a6c475c542bf411a8e0455c5ecfa09', 'width': 1118}, 'variants': {}}]}
|
Drummer's Rivermind™ 12B v1, the next-generation AI that’s redefining human-machine interaction! The future is here.
| 117 |
[https://huggingface.co/TheDrummer/Rivermind-12B-v1-GGUF](https://huggingface.co/TheDrummer/Rivermind-12B-v1-GGUF)
| 2025-04-14T17:05:32 |
https://huggingface.co/TheDrummer/Rivermind-12B-v1
|
TheLocalDrummer
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz421n
| false | null |
t3_1jz421n
|
/r/LocalLLaMA/comments/1jz421n/drummers_rivermind_12b_v1_the_nextgeneration_ai/
| false | false |
default
| 117 |
{'enabled': False, 'images': [{'id': 'wRQJk24aLs8Cpm-sp_z-PYPowlXF6A2fCZ1ND4bWZiM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=108&crop=smart&auto=webp&s=0d3048ee4f325d112bcc16652ec93ed9302c5739', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=216&crop=smart&auto=webp&s=2e2bdc5cecb79777770808b486a491cc5ecc8fa7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=320&crop=smart&auto=webp&s=067c153a08f6502e10a1adee43bd8934b8bf6f3f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=640&crop=smart&auto=webp&s=0a7fd1aeef490d800d9fd946e2abe5ec70682948', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=960&crop=smart&auto=webp&s=85856779dce9a7dc7632e7216de180147f79363d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?width=1080&crop=smart&auto=webp&s=409706709c051787cc92b12b9138b32eda5e89e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TBod4kQesTgLYyjaFaYSK8iNonKB2zVdwF9pMmcEbgY.jpg?auto=webp&s=bd094bf17830acfc45413f0942b04440551ed459', 'width': 1200}, 'variants': {}}]}
|
OpenAI announces GPT-4.1 models and pricing
| 191 | 2025-04-14T17:06:23 |
https://platform.openai.com/docs/models/compare?model=gpt-4.1
|
Balance-
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz42rq
| false | null |
t3_1jz42rq
|
/r/LocalLLaMA/comments/1jz42rq/openai_announces_gpt41_models_and_pricing/
| false | false | 191 | null |
||
I'm about to ask GPT-4.1: Which do you think is bigger, GPT-4.1 or GPT-4.5?
| 23 |
Or are you guys really talking about GPT-4.10?
| 2025-04-14T17:12:30 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz4831
| false | null |
t3_1jz4831
|
/r/LocalLLaMA/comments/1jz4831/im_about_to_ask_gpt41_which_do_you_think_is/
| false | false |
default
| 23 |
{'enabled': True, 'images': [{'id': '0kyux96m1uue1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=108&crop=smart&auto=webp&s=e25438c35d24f38cd28a2372e3e4dcac5ac8a8f4', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=216&crop=smart&auto=webp&s=d1a726d7d055aca391989148468b322ffa47c090', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=320&crop=smart&auto=webp&s=02476e509f1bbf182ac66276c9a006e78438b41d', 'width': 320}, {'height': 395, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=640&crop=smart&auto=webp&s=fb8985a7bf40a9732f1c24c03b40e87ac3905667', 'width': 640}, {'height': 592, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=960&crop=smart&auto=webp&s=3cd028e714559764605dca2af4afd614f50c8bba', 'width': 960}, {'height': 666, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?width=1080&crop=smart&auto=webp&s=abfb6c3a0a42d33f8480633de305bd45b8e7b89f', 'width': 1080}], 'source': {'height': 920, 'url': 'https://preview.redd.it/0kyux96m1uue1.png?auto=webp&s=04bf8316d56023e84784fe760020ef03ede7d064', 'width': 1490}, 'variants': {}}]}
|
|
needle in a haystack result of ChatGPT 4.1 series.
| 1 |
[removed]
| 2025-04-14T17:12:33 |
internal-pagal
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz484e
| false | null |
t3_1jz484e
|
/r/LocalLLaMA/comments/1jz484e/needle_in_a_haystack_result_of_chatgpt_41_series/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'dz57te4b2uue1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/dz57te4b2uue1.png?width=108&crop=smart&auto=webp&s=88067532c535357b084ba4fc91ac077d8b139a6a', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/dz57te4b2uue1.png?width=216&crop=smart&auto=webp&s=7f310159eccbf05bfd66d79906dc6e2f279abf36', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/dz57te4b2uue1.png?width=320&crop=smart&auto=webp&s=48a79b007ff1520efe179702ebb635f9dab06798', 'width': 320}, {'height': 357, 'url': 'https://preview.redd.it/dz57te4b2uue1.png?width=640&crop=smart&auto=webp&s=fe440101f2911d92673a2b0e7026020102468ba7', 'width': 640}], 'source': {'height': 528, 'url': 'https://preview.redd.it/dz57te4b2uue1.png?auto=webp&s=dccaa622e1284438788421e68a07488a6e95015a', 'width': 944}, 'variants': {}}]}
|
|
GPT-4.1 Introduced!
| 0 |
[https://openai.com/index/gpt-4-1/](https://openai.com/index/gpt-4-1/)
I have to say, it is interesting that they are deprecating GPT-4.5 so early... perhaps it's true that o3-full/o4-mini are built on 4.1 instead of 4.5?
| 2025-04-14T17:20:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz4exu/gpt41_introduced/
|
fanboy190
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz4exu
| false | null |
t3_1jz4exu
|
/r/LocalLLaMA/comments/1jz4exu/gpt41_introduced/
| false | false |
self
| 0 | null |
Quasar Alpha = GPT-4.1
| 106 | 2025-04-14T17:29:09 |
Spirited_Salad7
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz4mqg
| false | null |
t3_1jz4mqg
|
/r/LocalLLaMA/comments/1jz4mqg/quasar_alpha_gpt41/
| false | false |
default
| 106 |
{'enabled': True, 'images': [{'id': 'urj2uow45uue1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/urj2uow45uue1.png?width=108&crop=smart&auto=webp&s=d77d0d213506dfc569df1ced182a7b3a4decc859', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/urj2uow45uue1.png?width=216&crop=smart&auto=webp&s=19f3d2d399d985306130b7d15da4835937ac3765', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/urj2uow45uue1.png?width=320&crop=smart&auto=webp&s=ba22ef5d0dc962df21020f29236fc4e0a7d2e8c4', 'width': 320}, {'height': 341, 'url': 'https://preview.redd.it/urj2uow45uue1.png?width=640&crop=smart&auto=webp&s=743d81bf34988394265f244939f1268d7a28bad0', 'width': 640}], 'source': {'height': 397, 'url': 'https://preview.redd.it/urj2uow45uue1.png?auto=webp&s=dbc57d18a28c14361d5f5fb4763e587077540eb6', 'width': 743}, 'variants': {}}]}
|
||
GPT 4.1 model positioning
| 0 | 2025-04-14T17:29:32 |
https://www.reddit.com/gallery/1jz4n2q
|
Balance-
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz4n2q
| false | null |
t3_1jz4n2q
|
/r/LocalLLaMA/comments/1jz4n2q/gpt_41_model_positioning/
| false | false | 0 | null |
||
the new LLM meta is watching tech influencers get one-shot by benchmark jpegs
| 103 | 2025-04-14T17:37:14 |
ForsookComparison
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz4tx9
| false | null |
t3_1jz4tx9
|
/r/LocalLLaMA/comments/1jz4tx9/the_new_llm_meta_is_watching_tech_influencers_get/
| false | false | 103 |
{'enabled': True, 'images': [{'id': 'qJQfE7EeYQxaDySUuTLnLkvE0YFFWSrwAWJOSbVC7D4', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/ku1z50vm6uue1.jpeg?width=108&crop=smart&auto=webp&s=bf4f16255ad9dd46632f7c76d6129f34dcb510a3', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/ku1z50vm6uue1.jpeg?width=216&crop=smart&auto=webp&s=8dc56aaa52bbd5996480d301adb32679d5ad1318', 'width': 216}, {'height': 387, 'url': 'https://preview.redd.it/ku1z50vm6uue1.jpeg?width=320&crop=smart&auto=webp&s=2e9ddf5705625eeb0faf0812fc229a578087ecdd', 'width': 320}], 'source': {'height': 606, 'url': 'https://preview.redd.it/ku1z50vm6uue1.jpeg?auto=webp&s=be57b465ff8a2596c9dd5e48f2642f2729acbb15', 'width': 500}, 'variants': {}}]}
|
|||
I'm 99,9% sure GPT 4.1 is Optimus Alpha
| 0 | 2025-04-14T17:44:36 |
sirjoaco
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz50fn
| false | null |
t3_1jz50fn
|
/r/LocalLLaMA/comments/1jz50fn/im_999_sure_gpt_41_is_optimus_alpha/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'aYp55P93S-meJ_HZphIA8nf3ZFshtwynLrm43FhSLFE', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=108&crop=smart&auto=webp&s=9be6b9c5620fe476284b1418fbdb7d70172948cf', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=216&crop=smart&auto=webp&s=17d4d41b525fcecc0c567fa1b9669316eaf8af56', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=320&crop=smart&auto=webp&s=692b231794a3c6dabb6b323c2ce91e6739026b4c', 'width': 320}, {'height': 398, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=640&crop=smart&auto=webp&s=19a59bffb9473e08ab1ebf8f2bd68bc603aa0262', 'width': 640}, {'height': 598, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=960&crop=smart&auto=webp&s=7ae3b6457a2d7f02e355b4e19c1ec48925ee7af3', 'width': 960}, {'height': 672, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?width=1080&crop=smart&auto=webp&s=46c8ef7132e0fa55677d70ca85f6743c13228244', 'width': 1080}], 'source': {'height': 1884, 'url': 'https://preview.redd.it/7qudb8748uue1.jpeg?auto=webp&s=ff3976fdd11b9f046e13b1ca4a1bdbb1dd890ba9', 'width': 3024}, 'variants': {}}]}
|
|||
Is there any way to do Agentic coding with a local LLM running on a 5090?
| 0 |
I've been searching, and not finding. Ideally, this would run in VS Code or Visual Studio 2022 Professional.
Thank you.
| 2025-04-14T17:45:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz51mu/is_there_any_way_to_do_agentic_coding_with_a/
|
stackoverbro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz51mu
| false | null |
t3_1jz51mu
|
/r/LocalLLaMA/comments/1jz51mu/is_there_any_way_to_do_agentic_coding_with_a/
| false | false |
self
| 0 | null |
OpenAI has released GPT-4.1, primarily aimed at developers. It finally supports a 1 million token context window, and they claim it's the most affordable model they've released so far
| 0 |
https://www.youtube.com/live/kA-P9ood-cE?si=opOUbbajf8IXmRbF
| 2025-04-14T18:03:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz5gyw/openai_has_released_gpt41_primarily_aimed_at/
|
WriedGuy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz5gyw
| false | null |
t3_1jz5gyw
|
/r/LocalLLaMA/comments/1jz5gyw/openai_has_released_gpt41_primarily_aimed_at/
| false | false |
self
| 0 | null |
Which model listened to you the best
| 956 | 2025-04-14T18:04:29 |
iamnotdeadnuts
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz5i8h
| false | null |
t3_1jz5i8h
|
/r/LocalLLaMA/comments/1jz5i8h/which_model_listened_to_you_the_best/
| false | false | 956 |
{'enabled': True, 'images': [{'id': 'PCvWoz4IjxqZdxZKG3t2evNsMs_zwcJIjBGuPQYhB8g', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/r537cvlobuue1.jpeg?width=108&crop=smart&auto=webp&s=77c25ff7b74e636a4680eb7d001279b929d7eda5', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/r537cvlobuue1.jpeg?width=216&crop=smart&auto=webp&s=4593b0b55c7726c336e12d99ea814f2caaf1a462', 'width': 216}, {'height': 397, 'url': 'https://preview.redd.it/r537cvlobuue1.jpeg?width=320&crop=smart&auto=webp&s=0528fc9e36e4fb78aaab652b59d00f0c0c212e46', 'width': 320}, {'height': 794, 'url': 'https://preview.redd.it/r537cvlobuue1.jpeg?width=640&crop=smart&auto=webp&s=d53c87f9cd66558adbfaf7e405bbd3f001354427', 'width': 640}], 'source': {'height': 1071, 'url': 'https://preview.redd.it/r537cvlobuue1.jpeg?auto=webp&s=a3170a0ea5af436f39f10f2a1ca971100fe96028', 'width': 863}, 'variants': {}}]}
|
|||
LLMS/AI for stock market
| 0 |
How to go about using AI/ existing LLMs / genAI for the stock market ? For stock market predictions along with pairing it up with other indicators ?
| 2025-04-14T18:06:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz5jwj/llmsai_for_stock_market/
|
Basic-Pay-9535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz5jwj
| false | null |
t3_1jz5jwj
|
/r/LocalLLaMA/comments/1jz5jwj/llmsai_for_stock_market/
| false | false |
self
| 0 | null |
Llama 4 underperform on coding benchmark
| 1 |
[removed]
| 2025-04-14T18:20:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz5wmu/llama_4_underperform_on_coding_benchmark/
|
StableStack
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz5wmu
| false | null |
t3_1jz5wmu
|
/r/LocalLLaMA/comments/1jz5wmu/llama_4_underperform_on_coding_benchmark/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
DeepSeek V3's strong standing here makes you wonder what v4/R2 could achieve.
| 192 | 2025-04-14T18:26:32 |
mw11n19
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz624j
| false | null |
t3_1jz624j
|
/r/LocalLLaMA/comments/1jz624j/deepseek_v3s_strong_standing_here_makes_you/
| false | false | 192 |
{'enabled': True, 'images': [{'id': 'YU6hhRrCqWQDTEKhMQEEjkdv_P7hWlmFJzN6tGazb04', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?width=108&crop=smart&auto=webp&s=d011cf89f8b6ef5e40ccb6c3748c40cb81869c31', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?width=216&crop=smart&auto=webp&s=381a7b85d717c403b4e08f37f5d925b741f3989e', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?width=320&crop=smart&auto=webp&s=7ccc5e0e7e1d55731e789a864d756c5e1d0b0067', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?width=640&crop=smart&auto=webp&s=9761b09a84b56f8b9ad25c4ee42a925420e4fe96', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?width=960&crop=smart&auto=webp&s=37a54f4a7d0cb59c2817ec67f03aabb6d8272e41', 'width': 960}], 'source': {'height': 590, 'url': 'https://preview.redd.it/tlcxh6pffuue1.png?auto=webp&s=d07521771cbd925c3f934cd920136a2e2223dc38', 'width': 989}, 'variants': {}}]}
|
|||
GPT 4.1 nano vs Gemini 2.0 flash
| 2 |
The api pricing openai have placed their gpt 4.1 nano is the same as gemini 2.0 flash, and seeing how many people loved using the flash including me for small and easy coding tasks given its price is very cheap i am very curious how does the nano compare to the flash.
I know 4.1/quaser is really great for coding but the price is really double of that o3 mini which was disappointing to see ,What do you think about the nano and mini version of 4.1 ?
| 2025-04-14T18:27:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz6327/gpt_41_nano_vs_gemini_20_flash/
|
Snoo31053
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz6327
| false | null |
t3_1jz6327
|
/r/LocalLLaMA/comments/1jz6327/gpt_41_nano_vs_gemini_20_flash/
| false | false |
self
| 2 | null |
AMD W7900 + RTX 3090: Is it worth it?
| 1 |
[removed]
| 2025-04-14T18:40:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1jz6f5f/amd_w7900_rtx_3090_is_it_worth_it/
|
GenLabsAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz6f5f
| false | null |
t3_1jz6f5f
|
/r/LocalLLaMA/comments/1jz6f5f/amd_w7900_rtx_3090_is_it_worth_it/
| false | false |
self
| 1 | null |
Nvidia is right, 5070 IS better than 4090.
| 1 |
[removed]
| 2025-04-14T18:47:15 |
P0IS0N_GOD
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jz6lgj
| false | null |
t3_1jz6lgj
|
/r/LocalLLaMA/comments/1jz6lgj/nvidia_is_right_5070_is_better_than_4090/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'fw9yAYPHZrdZiZQ9CaSSUQp1aa2-EPXIouaS9mczUsM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=108&crop=smart&auto=webp&s=1116fac1b8184403f96ea43b346d72979e53c6c6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=216&crop=smart&auto=webp&s=0fd1d0d927b192d3a00476469fa42d40a6169f5a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=320&crop=smart&auto=webp&s=a89fe76ccde6e8b5d2b8f3b6c16c8e46b6303916', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=640&crop=smart&auto=webp&s=ec1ec2d36ea2ee015ca5ce6c3857f38f5cf6d9f3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=960&crop=smart&auto=webp&s=05aa516bdddd06d022e4e1bb6aa827f57fd7f4e6', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?width=1080&crop=smart&auto=webp&s=303da482f998a216a5610664138a998a0e9bf36a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/cdsb8k7bjuue1.png?auto=webp&s=a6107f682bdc6c8e95c7f258c5e26c6933ede7e6', 'width': 1080}, 'variants': {}}]}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.