title (string, 1–300) | score (int64, 0–8.54k) | selftext (string, 0–40k) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, ⌀) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646–1.8k, ⌀) | name (string, 10) | permalink (string, 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, ⌀)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Llama 4 announced
| 105 |
Link: https://www.llama.com/llama4/
| 2025-04-05T18:43:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsafqw/llama_4_announced/
|
nderstand2grow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsafqw
| false | null |
t3_1jsafqw
|
/r/LocalLLaMA/comments/1jsafqw/llama_4_announced/
| false | false |
self
| 105 | null |
With no update in 4 months, livebench was getting saturated and benchmaxxed, so I'm really looking forward to this one.
| 80 |
Link to tweet: [https://x.com/bindureddy/status/1908296208025870392](https://x.com/bindureddy/status/1908296208025870392)
| 2025-04-05T18:45:09 |
jd_3d
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsagyr
| false | null |
t3_1jsagyr
|
/r/LocalLLaMA/comments/1jsagyr/with_no_update_in_4_months_livebench_was_getting/
| false | false | 80 |
{'enabled': True, 'images': [{'id': 'GP_naSi7Wdqa0aE6zTTlePJyNgyew5MUSKT41CJDPsw', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/9tclfgid92te1.png?width=108&crop=smart&auto=webp&s=27330af4d57e92fcceacf511d15b2987928a1b6e', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/9tclfgid92te1.png?width=216&crop=smart&auto=webp&s=3594bc479e8daf9e5e493a7a98b07ba94eef63c8', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/9tclfgid92te1.png?width=320&crop=smart&auto=webp&s=277ab988a5a396735a74b80fcdd3d4b467c43f25', 'width': 320}], 'source': {'height': 358, 'url': 'https://preview.redd.it/9tclfgid92te1.png?auto=webp&s=e78cf82e182352f88618aef74eb156d4b085c206', 'width': 610}, 'variants': {}}]}
|
||
Llama 4 is here
| 448 | 2025-04-05T18:46:20 |
https://www.llama.com/docs/model-cards-and-prompt-formats/llama4_omni/
|
jugalator
|
llama.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsahy4
| false | null |
t3_1jsahy4
|
/r/LocalLLaMA/comments/1jsahy4/llama_4_is_here/
| false | false |
default
| 448 | null |
|
The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation
| 59 | 2025-04-05T18:51:11 |
https://ai.meta.com/blog/llama-4-multimodal-intelligence/
|
Ill-Association-8410
|
ai.meta.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsalxn
| false | null |
t3_1jsalxn
|
/r/LocalLLaMA/comments/1jsalxn/the_llama_4_herd_the_beginning_of_a_new_era_of/
| false | false | 59 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
||
Mark presenting four Llama 4 models, even a 2-trillion-parameter model!!!
| 2,418 |
source from his instagram page
| 2025-04-05T18:52:08 |
https://v.redd.it/7bgnzhtxb2te1
|
LarDark
|
/r/LocalLLaMA/comments/1jsampe/mark_presenting_four_llama_4_models_even_a_2/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsampe
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7bgnzhtxb2te1/DASHPlaylist.mpd?a=1746600735%2CNWU5MzQxYmQzOWZkZGM4ZjU3MjZmMWNkZDM4NzNhYThhNmRhMzY2YWYzNmRkZTdiMTBkNTVlYjczYzc0Y2QyNg%3D%3D&v=1&f=sd', 'duration': 138, 'fallback_url': 'https://v.redd.it/7bgnzhtxb2te1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/7bgnzhtxb2te1/HLSPlaylist.m3u8?a=1746600735%2COTY0NWY2YmExMjU5MjI1ZTJkMGY5YmQzOTRlNjU2M2IxNjU2M2ViM2Y1ZTk1ZGY2NzBmYWU5NDg0NWNlZjYzNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7bgnzhtxb2te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1jsampe
|
/r/LocalLLaMA/comments/1jsampe/mark_presenting_four_llama_4_models_even_a_2/
| false | false | 2,418 |
{'enabled': False, 'images': [{'id': 'Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?width=108&crop=smart&format=pjpg&auto=webp&s=b7cffa556cbce0f424929424c553b581a2646032', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?width=216&crop=smart&format=pjpg&auto=webp&s=5ee9b622a77b6ed8d8699a567dd73ba44ff7aae2', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee28e8c79fd4316fe251e015ac11ef32f0116933', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?width=640&crop=smart&format=pjpg&auto=webp&s=92dea81cd31b590804421b543526cb96395e5c2f', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?width=960&crop=smart&format=pjpg&auto=webp&s=e80c2044f6c63fad9e7396b048f349706441e681', 'width': 960}], 'source': {'height': 1737, 'url': 'https://external-preview.redd.it/Z3p2aHZudXhiMnRlMYW4H8xHgtzR3pjuficV95KktJ2KVETiew0YUMQL020k.png?format=pjpg&auto=webp&s=199f7992a940419d0c0850f9fb12e1fcf3f4c92c', 'width': 977}, 'variants': {}}]}
|
|
Does anyone have a GGUF file of this model?
| 1 |
Hi, I want to use Guilherme34's Llama-3.2-11b-vision-uncensored in LM Studio, but LM Studio only accepts GGUF files, and I can't find an uncensored vision model in that format on Hugging Face. This is the only model I could find, and it's SafeTensors only. Has anyone converted this, or another uncensored vision model, to GGUF? Thanks in advance.
Model Link: [https://huggingface.co/Guilherme34/Llama-3.2-11b-vision-uncensored/tree/main](https://huggingface.co/Guilherme34/Llama-3.2-11b-vision-uncensored/tree/main)
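For anyone who wants to attempt the conversion themselves, below is a minimal sketch of the usual llama.cpp workflow. It assumes a local clone of llama.cpp and that its `convert_hf_to_gguf.py` converter supports the model's architecture; the vision tower of Llama 3.2 11B Vision may well not be supported, which would explain why no GGUF exists. Paths and filenames are illustrative.

```python
# Sketch: download a Hugging Face repo and convert it to GGUF with llama.cpp.
# Assumes llama.cpp is cloned locally and its converter supports the
# architecture; Llama 3.2 Vision's image tower may not convert.
import subprocess
from huggingface_hub import snapshot_download

# Download the safetensors checkpoint (repo id taken from the post).
local_dir = snapshot_download(
    repo_id="Guilherme34/Llama-3.2-11b-vision-uncensored",
    local_dir="llama-3.2-11b-vision-uncensored",
)

# Convert to an f16 GGUF; quantize afterwards with llama.cpp's tools if needed.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", local_dir,
        "--outfile", "llama-3.2-11b-vision-uncensored-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```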
| 2025-04-05T18:56:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsaqg7/can_anyone_have_gguf_file_of_this_model/
|
enessedef
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsaqg7
| false | null |
t3_1jsaqg7
|
/r/LocalLLaMA/comments/1jsaqg7/can_anyone_have_gguf_file_of_this_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Qi66qRuBZN3hIk7rHFZ1iiEQuSLUQrjTXdDnXa7ergY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=108&crop=smart&auto=webp&s=d870dc4d3a3b76a9e98839a5c939efb495c1fa78', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=216&crop=smart&auto=webp&s=e6a73d684a8f3396a9cf74096d509f41f208eae8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=320&crop=smart&auto=webp&s=dc8a0c5db727d13cfe7d1b5985aa158e1480c52e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=640&crop=smart&auto=webp&s=2ff4c28ee68d883b1bc06906d495f9c37feaa2a7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=960&crop=smart&auto=webp&s=51848c8776c509dbe37b6b7c7be993275bcb0c06', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?width=1080&crop=smart&auto=webp&s=137045d29a4d2029eeb2643a8e498fe6cf7eb198', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LwUNiW0bDsZm1beSPgYawOvheP6xUUZz9dR_Gs-6RaA.jpg?auto=webp&s=8ed4aba404f021f74fd1fe4e48ea9db135ade65b', 'width': 1200}, 'variants': {}}]}
|
Llama 4 Scout and Maverick Benchmarks
| 13 | 2025-04-05T19:01:35 |
Lankonk
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsauok
| false | null |
t3_1jsauok
|
/r/LocalLLaMA/comments/1jsauok/llama_4_scout_and_maverick_benchmarks/
| false | false | 13 |
{'enabled': True, 'images': [{'id': 'N3AXW6bWHsFuj5M2V4GQ8RbkSTKZ2VMmZl0_vjK4oUk', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=108&crop=smart&auto=webp&s=08caf51c64a43a09d59e2ea8c7a6d1848e5a9936', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=216&crop=smart&auto=webp&s=b1b2eab7e631c07871b91196a56028c98dba8f99', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=320&crop=smart&auto=webp&s=ec579785a5c316be015d0162b2666baecf33761a', 'width': 320}, {'height': 516, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=640&crop=smart&auto=webp&s=d2e8a44583f2836f2d9a88b833d4354d42a6e292', 'width': 640}, {'height': 775, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=960&crop=smart&auto=webp&s=557d3de03196623faeae78d41195c3fa2b02c63b', 'width': 960}, {'height': 872, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?width=1080&crop=smart&auto=webp&s=4349fbeb91b70382f2e46925c63cbfd3896c2b17', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://preview.redd.it/kjhb7icjd2te1.png?auto=webp&s=aab8704afa3e84ec22ca9093d82c03dd9dfed709', 'width': 1384}, 'variants': {}}]}
|
|||
LLAMA 4: I AM BEYOND DISAPPOINTED
| 0 |
I can't even begin to describe how utterly disappointed I am with Meta AI's new Llama 4 release. All of my hopes and anticipation for this model have been completely thrown out the window. Not only is this model not up to par with the latest SOTA LLMs, they didn't even release a version you can run on a normal computer. How am I supposed to run a 100–400B parameter model on a consumer-grade GPU? Llama 4 was supposed to be revolutionary for the entire AI world, but I guess I was mistaken. I hope and pray that Meta releases an 8B model like they did with Llama 3 and 2, because this is just heartbreaking.
| 2025-04-05T19:03:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsaw2y/llama_4_i_am_beyond_dissapointed/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsaw2y
| false | null |
t3_1jsaw2y
|
/r/LocalLLaMA/comments/1jsaw2y/llama_4_i_am_beyond_dissapointed/
| false | false |
self
| 0 | null |
Llama 4 Benchmarks
| 626 | 2025-04-05T19:04:21 |
Ravencloud007
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsax3p
| false | null |
t3_1jsax3p
|
/r/LocalLLaMA/comments/1jsax3p/llama_4_benchmarks/
| false | false | 626 |
{'enabled': True, 'images': [{'id': 'CrFoOGOdCVlD6XXZ4VVgb3I24c8tl_S8DZTcpC2xHBg', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=108&crop=smart&auto=webp&s=9c04a28cd3d28ff530bf87e58f89fc79d9c91883', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=216&crop=smart&auto=webp&s=571038f65f220c9980b82e8a620b0576d7b339ed', 'width': 216}, {'height': 296, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=320&crop=smart&auto=webp&s=d4a262da975a1f0c05736015e74cd2e00f2aeb80', 'width': 320}, {'height': 592, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=640&crop=smart&auto=webp&s=01928d53f0ef81a88115f299ef15628aacc38783', 'width': 640}, {'height': 888, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=960&crop=smart&auto=webp&s=f00cbaeb30d98e27ab37ab383b74a6be08b1f486', 'width': 960}, {'height': 1000, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?width=1080&crop=smart&auto=webp&s=17ae8c2cd21e76b327e3da0936e64c62ccd20fa2', 'width': 1080}], 'source': {'height': 3793, 'url': 'https://preview.redd.it/o2cd1y15e2te1.jpeg?auto=webp&s=10dd0f4eb1b6d80ef970d8c94e0154447f08f8f4', 'width': 4096}, 'variants': {}}]}
|
|||
Llama 4 Reasoning
| 34 |
It's coming!
| 2025-04-05T19:05:58 |
https://www.llama.com/llama4-reasoning-is-coming/
|
Current-Strength-783
|
llama.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsayj9
| false | null |
t3_1jsayj9
|
/r/LocalLLaMA/comments/1jsayj9/llama_4_reasoning/
| false | false | 34 |
{'enabled': False, 'images': [{'id': 'e8GUrJdaVCxG5Eyd44ENO0cM7JdqH8kDUSnwsfalAMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=108&crop=smart&auto=webp&s=f0285ca9be8f3d72f4b6c6e511c513027b450cb0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=216&crop=smart&auto=webp&s=86028dfb06f6800dc82a87e7b5ef6e4e9ae19560', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=320&crop=smart&auto=webp&s=48eebcaa6578c15128e0864524a1a48a3d48cabe', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=640&crop=smart&auto=webp&s=44af8b7574c0a4b26360d529db34c1b06ffcafcc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=960&crop=smart&auto=webp&s=83b7f2f81dbb96f112b8723ce89beb6c85b02cdc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=1080&crop=smart&auto=webp&s=bc0212d6318aa3200665d08a65fc79248cb26d1d', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?auto=webp&s=8267a6b34718fe688e0d82c662b4d40cc72ea47d', 'width': 2400}, 'variants': {}}]}
|
|
Damn, 10 million?? Will it be open source?
| 0 | 2025-04-05T19:13:29 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb4nt
| false | null |
t3_1jsb4nt
|
/r/LocalLLaMA/comments/1jsb4nt/damn_10_million_will_it_be_open_source/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '6Xo2ke9ysUNGYpQ5ls_-6ycO0w0bQ6_M-kHwm8Gv7kw', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=108&crop=smart&auto=webp&s=a40280c3489756ee57acde7588cba191f2bcd4f7', 'width': 108}, {'height': 225, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=216&crop=smart&auto=webp&s=ea7e67e2a6012891e0ac6c024d550dcdbb8d9a33', 'width': 216}, {'height': 334, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=320&crop=smart&auto=webp&s=62c12f69b579003b6b64a2d962c862d163b9263a', 'width': 320}, {'height': 668, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=640&crop=smart&auto=webp&s=cb4466529435ea751e389ce1201580557cee62fb', 'width': 640}, {'height': 1002, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=960&crop=smart&auto=webp&s=211122fd9e2559694c4999218d0fa4c18993d9f7', 'width': 960}, {'height': 1128, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?width=1080&crop=smart&auto=webp&s=a70f80fe3e477aed802f8d1621443d0c2d782d88', 'width': 1080}], 'source': {'height': 1128, 'url': 'https://preview.redd.it/3pnln9fsf2te1.jpeg?auto=webp&s=d7ce8ad14e6d62f0436db07e875f76c0a6874b17', 'width': 1080}, 'variants': {}}]}
|
|||
When will a smaller version of Llama 4 be released?
| 0 |
Do you guys know if a smaller version of Llama 4 will ever be released? Preferably an 8B–12B parameter model that can fit on most consumer hardware. Thanks.
| 2025-04-05T19:14:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsb5qa/when_will_a_smaller_version_of_llama_4_be_released/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb5qa
| false | null |
t3_1jsb5qa
|
/r/LocalLLaMA/comments/1jsb5qa/when_will_a_smaller_version_of_llama_4_be_released/
| false | false |
self
| 0 | null |
Llama 4 Scout on single GPU?
| 27 |
Zuck just said that Scout is designed to run on a single GPU, but how? It's an MoE model, if I'm not mistaken, and you still need to store all the experts somewhere first.
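Some rough arithmetic shows why the claim is plausible but narrow. A sketch, ignoring KV cache and activation overhead and assuming the reported 109B-total / 17B-active parameter counts:

```python
# Back-of-the-envelope VRAM estimate for an MoE model: ALL experts must be
# resident, so memory scales with total parameters, not active parameters.
TOTAL_PARAMS = 109e9   # Llama 4 Scout, total (16 experts)
ACTIVE_PARAMS = 17e9   # parameters actually used per token

for label, bits in [("bf16", 16), ("int8", 8), ("int4", 4)]:
    gb = TOTAL_PARAMS * bits / 8 / 1e9
    print(f"{label}: ~{gb:.0f} GB of weights")

# bf16: ~218 GB -> multi-GPU only
# int8: ~109 GB -> still over 80 GB
# int4:  ~55 GB -> fits a single 80 GB H100, which matches the
#                  "single GPU" claim (an H100, not a consumer card)
```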
| 2025-04-05T19:15:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsb5zz/llama_4_scout_on_single_gpu/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb5zz
| false | null |
t3_1jsb5zz
|
/r/LocalLLaMA/comments/1jsb5zz/llama_4_scout_on_single_gpu/
| false | false |
self
| 27 | null |
meta-llama/Llama-4-Scout-17B-16E · Hugging Face
| 16 | 2025-04-05T19:16:03 |
https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb6tg
| false | null |
t3_1jsb6tg
|
/r/LocalLLaMA/comments/1jsb6tg/metallamallama4scout17b16e_hugging_face/
| false | false | 16 |
{'enabled': False, 'images': [{'id': 'rgy8ILrT-HdzWxvDIeBW9bx3Ap0lVrSbbrlg_dGN308', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=108&crop=smart&auto=webp&s=3dd762d302222f42af4e6af417d2de6ffc641e35', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=216&crop=smart&auto=webp&s=0d43eafb2810b3789f35605e9b9dc61a879427fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=320&crop=smart&auto=webp&s=2d50f4cf98e7a81ca8518ea69990185e8e1bc9e6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=640&crop=smart&auto=webp&s=3d19d6e5de8080d81a71859d3e4f79759a895556', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=960&crop=smart&auto=webp&s=d455c0c5f6a8c56ac96355b444a82f30897e2d3e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?width=1080&crop=smart&auto=webp&s=f2d4e7ca86f5689f5dad654b0decbab79f62c62b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/aAwhMYmYLxpIEKVTHqURj58vifOeX8cfWgxkG95r0q0.jpg?auto=webp&s=e86705a97263ba9e082f0426fa0303fbd7bea0db', 'width': 1200}, 'variants': {}}]}
|
||
Do you think Llama 4 will have a 10 Million Token Context Window?
| 0 |
Yesterday this would have been a shitpost; today, the answer is yes. What in the acceleration.
> Check out Llama 4 Scout:
> https://www.llama.com
*Haven’t looked at the other models yet though, so if someone who has can comment a summary that would be greatly appreciated.*
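Part of the skepticism comes from KV-cache arithmetic. Here is a sketch with illustrative hyperparameters (not Scout's published config; Meta reportedly uses interleaved attention layers to reduce this cost):

```python
# KV-cache size per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
# Hyperparameters below are illustrative, not Scout's actual config.
layers, kv_heads, head_dim, bytes_per_elem = 48, 8, 128, 2  # fp16 cache

per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(f"{per_token / 1024:.0f} KiB per token")  # ~192 KiB

for tokens in (128_000, 1_000_000, 10_000_000):
    print(f"{tokens:>10,} tokens -> {per_token * tokens / 1e9:,.0f} GB")
# 128k ->    ~25 GB
# 1M   ->   ~197 GB
# 10M  -> ~1,966 GB (~2 TB), hence the doubts without architectural tricks
```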
| 2025-04-05T19:16:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsb7a2/do_you_think_llama_4_will_have_a_10_million_token/
|
xRolocker
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb7a2
| false | null |
t3_1jsb7a2
|
/r/LocalLLaMA/comments/1jsb7a2/do_you_think_llama_4_will_have_a_10_million_token/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'e8GUrJdaVCxG5Eyd44ENO0cM7JdqH8kDUSnwsfalAMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=108&crop=smart&auto=webp&s=f0285ca9be8f3d72f4b6c6e511c513027b450cb0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=216&crop=smart&auto=webp&s=86028dfb06f6800dc82a87e7b5ef6e4e9ae19560', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=320&crop=smart&auto=webp&s=48eebcaa6578c15128e0864524a1a48a3d48cabe', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=640&crop=smart&auto=webp&s=44af8b7574c0a4b26360d529db34c1b06ffcafcc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=960&crop=smart&auto=webp&s=83b7f2f81dbb96f112b8723ce89beb6c85b02cdc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=1080&crop=smart&auto=webp&s=bc0212d6318aa3200665d08a65fc79248cb26d1d', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?auto=webp&s=8267a6b34718fe688e0d82c662b4d40cc72ea47d', 'width': 2400}, 'variants': {}}]}
|
Llama 4 is here
| 1 |
[removed]
| 2025-04-05T19:16:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsb7an/llama_4_is_here/
|
Top-Victory3188
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb7an
| false | null |
t3_1jsb7an
|
/r/LocalLLaMA/comments/1jsb7an/llama_4_is_here/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '2YT0SvQcsZX7yPpE1JzWTZ0FFHHTviheZlz8D1CRjyw', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/P_PD5y_8grfpGFldbiTsFSfnz_1IDdKz_4sCAqWKZLE.jpg?width=108&crop=smart&auto=webp&s=dd4668f17494918163c55d5591e71aca8aeef07d', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/P_PD5y_8grfpGFldbiTsFSfnz_1IDdKz_4sCAqWKZLE.jpg?width=216&crop=smart&auto=webp&s=7f4d8157201da247ae1369e6203a351baab1a49f', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/P_PD5y_8grfpGFldbiTsFSfnz_1IDdKz_4sCAqWKZLE.jpg?width=320&crop=smart&auto=webp&s=986acb0c0903058300c446c6eb29bdbf023f4241', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/P_PD5y_8grfpGFldbiTsFSfnz_1IDdKz_4sCAqWKZLE.jpg?auto=webp&s=1d076ebb48288d838897e4d69ddc06a73fcbcb91', 'width': 360}, 'variants': {}}]}
|
|
Meta cooked with Llama 4
| 0 | 2025-04-05T19:18:54 |
TechNerd10191
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb96w
| false | null |
t3_1jsb96w
|
/r/LocalLLaMA/comments/1jsb96w/meta_cooked_with_llama_4/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'OzIPS39twT1S8Kq-9jVT4DYZZp2ay9_kbXdAFZx71G0', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=108&crop=smart&auto=webp&s=a7b1cb20566ccaf271ea2c6d59e6fa97036dc30d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=216&crop=smart&auto=webp&s=150919643e7ad9e9d7ccae8573c25acb8ffa0b41', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=320&crop=smart&auto=webp&s=138b03547c2142802fc030261b9f6d7e17d6baec', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=640&crop=smart&auto=webp&s=25580d2177b2acb57ba643f9845711920a3cdcb8', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=960&crop=smart&auto=webp&s=7b9d6d2fffd5c00ebf139136f2a8d8d1515bf6c3', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?width=1080&crop=smart&auto=webp&s=3ca25d9d1f2d2191a0e1f0ca43e741cb2b392c64', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/e8joycqhg2te1.png?auto=webp&s=2d6d3d2e54311c8dcaeffa486481be8f05232dd4', 'width': 1920}, 'variants': {}}]}
|
|||
Llama 4 - a meta-llama Collection
| 23 | 2025-04-05T19:19:29 |
https://huggingface.co/collections/meta-llama/llama-4-67f0c30d9fe03840bc9d0164
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsb9n4
| false | null |
t3_1jsb9n4
|
/r/LocalLLaMA/comments/1jsb9n4/llama_4_a_metallama_collection/
| false | false | 23 |
{'enabled': False, 'images': [{'id': 'faDqbNf008vF-Qo2zCHaZYnD36QHm9SmS-AhB4FXnQM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=108&crop=smart&auto=webp&s=06535a730079ae99b63f95984836aacaf0322539', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=216&crop=smart&auto=webp&s=6a970740d25d5fcd66e3c96205f702982a59376c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=320&crop=smart&auto=webp&s=402d7dd60b980829213a2f4cdfc8cb8e57d86c69', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=640&crop=smart&auto=webp&s=d5fbffa5bf860601638f9d56363c1a47bc1eddc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=960&crop=smart&auto=webp&s=56242c214ed6527f689d794c0385361a57fb05e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=1080&crop=smart&auto=webp&s=b91b69c8f4bce2e0629365184e5b180a994498c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?auto=webp&s=e7b9f15f74c9a24c4b841b316b091bd44ed17f88', 'width': 1200}, 'variants': {}}]}
|
||
Feedback Request - PC Build for Local LLMs, 4K Gaming, and Video Editing – Would Love Input on My Build!
| 1 |
[removed]
| 2025-04-05T19:23:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbcz1/feedback_request_pc_build_for_local_llms_4k/
|
TheBeardedNorth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbcz1
| false | null |
t3_1jsbcz1
|
/r/LocalLLaMA/comments/1jsbcz1/feedback_request_pc_build_for_local_llms_4k/
| false | false |
self
| 1 | null |
300MB-ish model that can do search?
| 11 |
I have written a telnet server for vintage computers that has a Wikipedia browser and an AI assistant: [wikipedia-live-telnet](https://github.com/ballerburg9005/wikipedia-live-telnet). The idea is that the assistant replaces a web browser and acts like MU-TH-UR in the Alien movie or like some sci-fi mainframe.
I am currently using smollm2:360m because it is fast enough to run on Oracle Cloud Forever Free Ampere A1 with just one core. However, it doesn't really produce useful output, and it totally fails to use search.
I know that what I am asking for sounds like asking for a wheelchair with 20 horsepower.
Still, I feel that if those micro models were simply fine-tuned to act more like agents, the results would improve dramatically. They would still be weak, but for demo purposes or simply entertainment it could be sufficient, and maybe even useful working in conjunction with some embedding model.
I tried common models in the 700MB and 1300MB range and they were extremely slow, especially when fed Wikipedia articles. A few other 300MB models had the same issue.
What is the best model for the purpose?
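For reference, a minimal sketch of the kind of agent loop this implies, assuming an Ollama server on localhost; `search_wikipedia` is a placeholder for the telnet server's own search backend, and the prompt format is illustrative:

```python
# Minimal search-agent loop for a tiny model via Ollama's HTTP API (sketch).
# `search_wikipedia` is a placeholder for the project's own search code.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(prompt: str) -> str:
    r = requests.post(OLLAMA_URL, json={
        "model": "smollm2:360m", "prompt": prompt, "stream": False,
    })
    return r.json()["response"]

def search_wikipedia(query: str) -> str:
    raise NotImplementedError  # plug in the wiki search backend here

def answer(question: str) -> str:
    # Tiny models follow rigid formats better than open-ended tool schemas.
    plan = generate(
        "Reply with exactly one line of the form SEARCH: <topic>.\n"
        f"Question: {question}"
    )
    if plan.strip().upper().startswith("SEARCH:"):
        article = search_wikipedia(plan.split(":", 1)[1].strip())
        return generate(f"Article:\n{article[:2000]}\n\nAnswer briefly: {question}")
    return plan
```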
| 2025-04-05T19:24:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbdh8/300mbish_model_that_can_do_search/
|
ballerburg9005
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbdh8
| false | null |
t3_1jsbdh8
|
/r/LocalLLaMA/comments/1jsbdh8/300mbish_model_that_can_do_search/
| false | false |
self
| 11 |
{'enabled': False, 'images': [{'id': 'pP0qGlui-4OcUlO0REMLa0tZk66A6unP3yvkpwV_dp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=108&crop=smart&auto=webp&s=633c2fd306d77d9555860af2c213eb28111a3320', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=216&crop=smart&auto=webp&s=15a60c5393618146b901dc774df01d1e70fb07d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=320&crop=smart&auto=webp&s=63e07d2b5f2d63d137400afe2c808f345fd83c0a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=640&crop=smart&auto=webp&s=0dd9499cc1cd435001c0bd428a079f19cb85fc25', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=960&crop=smart&auto=webp&s=b458442a8acbaeba9bbb4263f19792c0c42dd8fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?width=1080&crop=smart&auto=webp&s=4aa6279ce73deca31ebf180b22fa3f56a5642498', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5HDenWx39An8ljhZ9CnKcmHvRSAe_x0EtiZ0BxqoxYY.jpg?auto=webp&s=32ad4d2a31899ea13a73daa07d665cccbcd3ac73', 'width': 1200}, 'variants': {}}]}
|
Llama 4 benchmarks
| 158 | 2025-04-05T19:24:22 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbdm8
| false | null |
t3_1jsbdm8
|
/r/LocalLLaMA/comments/1jsbdm8/llama_4_benchmarks/
| false | false | 158 |
{'enabled': True, 'images': [{'id': '1LSRCV7CecEzEODpxDqVRFipfwMHzFh0ccMj-NRyxxk', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=108&crop=smart&auto=webp&s=1bb7486563312fdfb440b35455f74fc69bf9ab91', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=216&crop=smart&auto=webp&s=9f4073127b5bc18e05b34a20c0492daccb66b14b', 'width': 216}, {'height': 296, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=320&crop=smart&auto=webp&s=31d0781b9b3ee76e1874f565ae721bd6455c2cac', 'width': 320}, {'height': 592, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=640&crop=smart&auto=webp&s=ff22b91338fb54450168b9339d67ee62bd7a48ee', 'width': 640}, {'height': 888, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=960&crop=smart&auto=webp&s=8750135d7c5dcfa1e2b562afaa9431ec43ac5362', 'width': 960}, {'height': 1000, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?width=1080&crop=smart&auto=webp&s=0e076534086ad60d38e74b88817168c640f33338', 'width': 1080}], 'source': {'height': 3793, 'url': 'https://preview.redd.it/cl35fq7qh2te1.jpeg?auto=webp&s=d63531c855c8837625cfe53a2beacc12c79ab7fe', 'width': 4096}, 'variants': {}}]}
|
|||
Anyone else agonizing over upgrading hardware now or waiting until the next gen of AI optimized hardware comes out?
| 11 |
Part of me wants to buy now because I am worried that GPU prices are only going to get worse. Everything is already way overpriced.
But on the other side of it, what if i spent my budget for the next few years and then 8 months from now all the coolest LLM hardware comes out that is just as affordable but way more powerful?
I got $2500 burning a hole in my pocket right now. My current machine is just good enough to play around and learn but when I upgrade I can start to integrate LLMs into my professional life. Make work easier or maybe even push my career to the next level by showing that I know a decent amount about this stuff at a time when most people think its all black magic.
| 2025-04-05T19:26:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbfa4/anyone_else_agonizing_over_upgrading_hardware_now/
|
LanceThunder
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbfa4
| false | null |
t3_1jsbfa4
|
/r/LocalLLaMA/comments/1jsbfa4/anyone_else_agonizing_over_upgrading_hardware_now/
| false | false |
self
| 11 | null |
Llama reasoning soon and llama 4 behemoth
| 62 | 2025-04-05T19:31:35 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbjmm
| false | null |
t3_1jsbjmm
|
/r/LocalLLaMA/comments/1jsbjmm/llama_reasoning_soon_and_llama_4_behemoth/
| false | false | 62 |
{'enabled': True, 'images': [{'id': 'vz5U3KJCeGfe7FsmUvCC7xW7thnJybzAY8ZYhSCrgK0', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=108&crop=smart&auto=webp&s=332c2ee561627225916f4705a260ca0c7fe1aa76', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=216&crop=smart&auto=webp&s=c877c4f898e9a8e4d39f734c271816bca2e1b995', 'width': 216}, {'height': 270, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=320&crop=smart&auto=webp&s=8c6afa2d548a23913b96bed1cb430074bbda9f4a', 'width': 320}, {'height': 541, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=640&crop=smart&auto=webp&s=346d6c2a99dd5be293d57eccf5ee3cc161e4faea', 'width': 640}, {'height': 811, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=960&crop=smart&auto=webp&s=1e9239bef28ec07e4b80d4330636d12e32e435a4', 'width': 960}, {'height': 913, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?width=1080&crop=smart&auto=webp&s=9dd991bb8d9c33b97052471bb32dae963561dcaf', 'width': 1080}], 'source': {'height': 913, 'url': 'https://preview.redd.it/m1tookk0j2te1.png?auto=webp&s=2212fdfeb772721ef48a91612eb9467b0e4ee790', 'width': 1080}, 'variants': {}}]}
|
|||
Turn local and private repos into prompts in one click with the gitingest VS Code Extension!
| 50 |
Hi all,
First of all, thanks to u/MrCyclopede for the amazing work!!
I converted his original Python code to TypeScript and then built the extension.
It's simple to use.
1. Open the Command Palette (`Ctrl+Shift+P` or `Cmd+Shift+P`)
2. Type "Gitingest" to see available commands:
* `Gitingest: Ingest Local Directory`: Analyze a local directory
* `Gitingest: Ingest Git Repository`: Analyze a remote Git repository
3. Follow the prompts to select a directory or enter a repository URL
4. View the results in a new text document
I’d love for you to check it out and share your feedback:
GitHub: [https://github.com/lakpahana/export-to-llm-gitingest](https://github.com/lakpahana/export-to-llm-gitingest) ( please give me a 🌟)
Marketplace: [https://marketplace.visualstudio.com/items?itemName=lakpahana.export-to-llm-gitingest](https://marketplace.visualstudio.com/items?itemName=lakpahana.export-to-llm-gitingest)
Let me know your thoughts—any feedback or suggestions would be greatly appreciated!
| 2025-04-05T19:31:44 |
https://v.redd.it/6s9t5n5gi2te1
|
Sanjuwa
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbjr3
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6s9t5n5gi2te1/DASHPlaylist.mpd?a=1746473522%2CMmE5YWZkYTcyYWI0Yjg3ZjBjZTEyZDc2MzlkMzVmZWIzOTBlZTFkMjYzMDA3YjQxNzk4ZTgzNTViYjRiMTg4OQ%3D%3D&v=1&f=sd', 'duration': 3, 'fallback_url': 'https://v.redd.it/6s9t5n5gi2te1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/6s9t5n5gi2te1/HLSPlaylist.m3u8?a=1746473522%2COTcwZDAzYjBiOWIyOGNhZjA2ZWZlYzAyZTk5NzNkMzU5MjM0MjgwMDI2ZDEzMTllYzg0ODNiMjRiMTFmMWQ2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6s9t5n5gi2te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1044}}
|
t3_1jsbjr3
|
/r/LocalLLaMA/comments/1jsbjr3/turn_local_and_private_repos_into_prompts_in_one/
| false | false | 50 |
{'enabled': False, 'images': [{'id': 'Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?width=108&crop=smart&format=pjpg&auto=webp&s=621d21118384492b0b8b3dd19896ca01294a36d9', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?width=216&crop=smart&format=pjpg&auto=webp&s=15d65e150230ac34606bc72b2da5279fcce17155', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?width=320&crop=smart&format=pjpg&auto=webp&s=72f24d0059bb37a1cbcde15e173eb497b856b320', 'width': 320}, {'height': 441, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?width=640&crop=smart&format=pjpg&auto=webp&s=a32f6e7a5a1439f37f273761ffd5bb4b469245f0', 'width': 640}, {'height': 662, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?width=960&crop=smart&format=pjpg&auto=webp&s=6913b58fdf370ff6277b6de15897411e4726a1bc', 'width': 960}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Zml5d2NtOGdpMnRlMVdzmEgr56-P0X2MdMr8l29_GfTj5L1NSLkdzC5Bz-eE.png?format=pjpg&auto=webp&s=52d79ebb0c4a7dddaed597592c9d5011eabbb519', 'width': 1044}, 'variants': {}}]}
|
|
Please, OpenRouter, consider making Llama 4's API free in the future!🥺🥺
| 1 |
There's no way I can run this locally, for sure.
| 2025-04-05T19:32:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbk7o/please_openrouter_consider_making_llama_4s_api/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbk7o
| false | null |
t3_1jsbk7o
|
/r/LocalLLaMA/comments/1jsbk7o/please_openrouter_consider_making_llama_4s_api/
| false | false |
self
| 1 | null |
Please, OpenRouter, consider making Llama 4's API free in the future🥺🥺
| 0 |
There’s no way I can run this locally, for sure.
| 2025-04-05T19:33:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbl5p/please_openrouter_consider_making_llama_4s_api/
|
internal-pagal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbl5p
| false | null |
t3_1jsbl5p
|
/r/LocalLLaMA/comments/1jsbl5p/please_openrouter_consider_making_llama_4s_api/
| false | false |
self
| 0 | null |
I'm building an open source claude desktop mcp alternative, looking for contributors !
| 1 |
[removed]
| 2025-04-05T19:33:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbl6b/im_building_an_open_source_claude_desktop_mcp/
|
unknownstudentoflife
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbl6b
| false | null |
t3_1jsbl6b
|
/r/LocalLLaMA/comments/1jsbl6b/im_building_an_open_source_claude_desktop_mcp/
| false | false |
self
| 1 | null |
No Audio Modality in Llama 4?
| 32 |
Does anyone know why there are no results for the 3 keywords (audio, speech, voice) in the Llama 4 blog post? [https://ai.meta.com/blog/llama-4-multimodal-intelligence/](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)
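The check is easy to reproduce with a quick sketch like the one below, with the caveat that the page may render content client-side, so a plain HTML fetch can undercount:

```python
# Quick keyword check against the announcement post (sketch; the page may be
# rendered client-side, so a plain fetch can miss text shown in a browser).
import requests

html = requests.get(
    "https://ai.meta.com/blog/llama-4-multimodal-intelligence/",
    headers={"User-Agent": "Mozilla/5.0"},
).text.lower()

for keyword in ("audio", "speech", "voice"):
    print(keyword, html.count(keyword))
```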
| 2025-04-05T19:40:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbqtj/no_audio_modality_in_llama_4/
|
rzvzn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbqtj
| false | null |
t3_1jsbqtj
|
/r/LocalLLaMA/comments/1jsbqtj/no_audio_modality_in_llama_4/
| false | false |
self
| 32 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
Anyone doing content moderation with LLM and trying to detect hate speech?
| 1 |
[removed]
| 2025-04-05T19:42:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbs4l/anyone_doing_content_moderation_with_llm_and/
|
Rich_Artist_8327
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbs4l
| false | null |
t3_1jsbs4l
|
/r/LocalLLaMA/comments/1jsbs4l/anyone_doing_content_moderation_with_llm_and/
| false | false |
self
| 1 | null |
Llama4 Scout downloading
| 85 |
Llama4 Scout downloading 😁👍
| 2025-04-05T19:42:45 |
TruckUseful4423
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbseu
| false | null |
t3_1jsbseu
|
/r/LocalLLaMA/comments/1jsbseu/llama4_scout_downloading/
| false | false | 85 |
{'enabled': True, 'images': [{'id': '-PlOxbc15Sccb1EeSRH_-q3eDein8fa7t-dbP2AuxAY', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=108&crop=smart&auto=webp&s=2d124bcf41c0d634a7337ed812796daf9fca85bf', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=216&crop=smart&auto=webp&s=e89000e6e7f4d54c7fa1bc61078c60b6a61ad7b5', 'width': 216}, {'height': 248, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=320&crop=smart&auto=webp&s=eb68af917c637da256092a158ef19de92ee160c2', 'width': 320}, {'height': 496, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=640&crop=smart&auto=webp&s=895ab95c094f3843276b8881066b3a8eb61a7d34', 'width': 640}, {'height': 744, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=960&crop=smart&auto=webp&s=b236a87b119448ec65e083eb9b9c196405395b3c', 'width': 960}, {'height': 837, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?width=1080&crop=smart&auto=webp&s=a3d8ee416405fd4f31f539c3453e22e392b7b998', 'width': 1080}], 'source': {'height': 857, 'url': 'https://preview.redd.it/5nx0y06wk2te1.jpeg?auto=webp&s=283101868a479ca171d1076c9dbeb205ac0caf7b', 'width': 1105}, 'variants': {}}]}
|
||
Llama4 + Hugging Face blog post
| 10 |
We are incredibly excited to welcome the next generation of large language models from Meta to the Hugging Face Hub: [Llama 4 Maverick (\~400B)](https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Original) and [Llama 4 Scout (\~109B)!](https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Original) 🤗 Both are Mixture of Experts (MoE) models with 17B active parameters.
Released today, these powerful, natively multimodal models represent a significant leap forward. We've worked closely with Meta to ensure seamless integration into the Hugging Face ecosystem, including both transformers and TGI from day one.
This is just the start of our journey with Llama 4. Over the coming days we’ll continue to collaborate with the community to build amazing models, datasets, and applications with Maverick and Scout! 🔥
| 2025-04-05T19:47:06 |
https://huggingface.co/blog/llama4-release
|
Zealousideal-Cut590
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbvuh
| false | null |
t3_1jsbvuh
|
/r/LocalLLaMA/comments/1jsbvuh/llama4_hugging_face_blog_post/
| false | false | 10 |
{'enabled': False, 'images': [{'id': 'op-aUVxRUKpld2zyFtulWTuMNxTCD3Z7dVb5a1J88Go', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=108&crop=smart&auto=webp&s=18af79324ba5e7135b0a7fd2c281c5124479b588', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=216&crop=smart&auto=webp&s=c327b80fc8a596a109fd3d6cad3bead01e489dc1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=320&crop=smart&auto=webp&s=695156e9f3da745bda84b778c56d819744dd05d1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=640&crop=smart&auto=webp&s=84f2e4f629da6082e9745bb113ccba95ee01d469', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=960&crop=smart&auto=webp&s=efc7d3fddb89600995523d44b5dd41bcee9e8cde', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=1080&crop=smart&auto=webp&s=a6ef047b537611271efb418b14dd3df72692c058', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?auto=webp&s=213583aeb1e87d118a725580290780771e2990e0', 'width': 1920}, 'variants': {}}]}
|
|
Does anyone know how llama4 voice interaction compares with ChatGPT AVM or Sesame's Maya/Miles? Can anyone who has tried it comment on this aspect?
| 2 |
I'm extremely curious about this aspect of the model but all of the comments seem to be about how huge / how out of reach it is for us to run locally.
What I'd like to know is if I'm primarily interested in the STS abilities of this model, is it even worth playing with or trying to spin up in the cloud somewhere?
Does it approximate human emotions (including understanding them) anywhere near as well as AVM or Sesame (yes, I know Sesame can't detect emotion, but it sure does a good job of emoting)? Does it do non-verbal sounds like sighs, laughs, singing, etc.? How about latency?
Thanks.
| 2025-04-05T19:48:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbwne/does_anyone_know_how_llama4_voice_interaction/
|
spanielrassler
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbwne
| false | null |
t3_1jsbwne
|
/r/LocalLLaMA/comments/1jsbwne/does_anyone_know_how_llama4_voice_interaction/
| false | false |
self
| 2 | null |
Llama 4 Scout 109B requires 2x the GPU hours of Llama 4 Maverick 400B???
| 6 | 2025-04-05T19:51:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbzbj/llama_4_scout_109b_requires_2x_the_gpu_hours_of/
|
Mindless_Pain1860
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbzbj
| false | null |
t3_1jsbzbj
|
/r/LocalLLaMA/comments/1jsbzbj/llama_4_scout_109b_requires_2x_the_gpu_hours_of/
| false | false | 6 | null |
||
In what way is llama 4 multimodal
| 7 |
The literal name of the blog post emphasizes the multimodality, but this model has no more modalities than any VLM, or even Llama 3.3. Maybe the point is that it's natively multimodal, so they didn't have to fine-tune vision in afterwards, but the performance isn't that much better even on those VLM tasks. Also, wasn't there a post a few days ago about Llama 4 Omni? Is that a different thing? Surely even Meta wouldn't be dense enough to call this model omnimodal; it's bimodal at best.
| 2025-04-05T19:52:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsbzy6/in_what_way_is_llama_4_multimodal/
|
Unusual_Guidance2095
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsbzy6
| false | null |
t3_1jsbzy6
|
/r/LocalLLaMA/comments/1jsbzy6/in_what_way_is_llama_4_multimodal/
| false | false |
self
| 7 | null |
Meta Unveils Groundbreaking Llama 4 Models: Scout and Maverick Set New AI Benchmarks
| 1 | 2025-04-05T19:52:54 |
https://stockwhiz.ai/us/news/technology/meta-unveils-groundbreaking-llama-4-models-scout-and-maverick-set-new-ai-benchmarks/2154
|
stocksavvy_ai
|
stockwhiz.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsc0j9
| false | null |
t3_1jsc0j9
|
/r/LocalLLaMA/comments/1jsc0j9/meta_unveils_groundbreaking_llama_4_models_scout/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'b_qQkNccHYN4QsrHWEJkfz838xHq8skt3fD9gr-21IA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5i5UvzLssn9l-2LboKNVZglHR9UcPRwuRMq5_l5nL-U.jpg?width=108&crop=smart&auto=webp&s=6a1d8a47b8c6c70b64060fc0929d57f2f5aafbe8', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/5i5UvzLssn9l-2LboKNVZglHR9UcPRwuRMq5_l5nL-U.jpg?width=216&crop=smart&auto=webp&s=0abbbe995ee2e14a155fff24a6744f3c65de8c03', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/5i5UvzLssn9l-2LboKNVZglHR9UcPRwuRMq5_l5nL-U.jpg?width=320&crop=smart&auto=webp&s=c682dbe4c449923105a59d8b83edee615dbbd6b2', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/5i5UvzLssn9l-2LboKNVZglHR9UcPRwuRMq5_l5nL-U.jpg?width=640&crop=smart&auto=webp&s=536bb9d4f251f5571c7ce9919951bde647adf534', 'width': 640}], 'source': {'height': 418, 'url': 'https://external-preview.redd.it/5i5UvzLssn9l-2LboKNVZglHR9UcPRwuRMq5_l5nL-U.jpg?auto=webp&s=2e4948ed4e62e7798edea0b9adca497cd54eabbc', 'width': 800}, 'variants': {}}]}
|
||
Llama 4 now up on OpenRouter!
| 1 |
[removed]
| 2025-04-05T19:54:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsc256/llamma_4_now_up_in_openrouter/
|
Dogeboja
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsc256
| false | null |
t3_1jsc256
|
/r/LocalLLaMA/comments/1jsc256/llamma_4_now_up_in_openrouter/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'yyLvniMtGf-6rkupuuvSikGOV_Kfu-9EOaqUgRLA2oE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=108&crop=smart&auto=webp&s=e77bf08becd72adcc1bf378d6aa3d5a74ff93c67', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=216&crop=smart&auto=webp&s=08cdc54a7c08facfb96d5ecece73fcea5795ad0a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=320&crop=smart&auto=webp&s=72243ecff1b68a8acdb6c47270d4fc07efcfb2fe', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=640&crop=smart&auto=webp&s=6f71ab95191943c01418efd4c8e45a99f6480306', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=960&crop=smart&auto=webp&s=ee7ab5936598495391c342710cae12925b8851fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?width=1080&crop=smart&auto=webp&s=65726c9fb20c5dd2ce33662d8a7ce4bdebb84e12', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/w3V2Zb4fUzCYqwXseizUEU4IDUCvKheMwKQ6D1Gt2eQ.jpg?auto=webp&s=4bbe5e24d011213b1f5d701622dc79dd72b300f9', 'width': 1200}, 'variants': {}}]}
|
Llama 4 Maverick 2nd on lmarena
| 31 | 2025-04-05T19:54:58 |
jacek2023
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsc27r
| false | null |
t3_1jsc27r
|
/r/LocalLLaMA/comments/1jsc27r/llama_4_maverick_2nd_on_lmarena/
| false | false | 31 |
{'enabled': True, 'images': [{'id': 'XhT1578Se5EzhVEIjtrfzWEx_z0fxe7ZHqFsi2eyass', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=108&crop=smart&auto=webp&s=2e97560fba0ddc6344eebf0b2e9586e05e13d604', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=216&crop=smart&auto=webp&s=675f9fd91775dfb90752e3a4c95cc40fac1ac759', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=320&crop=smart&auto=webp&s=0afdb9cb31425d41a9e4d1117c1e46fb3f0cbec3', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=640&crop=smart&auto=webp&s=c67d8c4d2f5e24a06d18d1eecedd5cf7c9c07d6f', 'width': 640}, {'height': 500, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=960&crop=smart&auto=webp&s=47c399fd6253316b2a13825456a3a0daf3d3d988', 'width': 960}, {'height': 562, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?width=1080&crop=smart&auto=webp&s=d358fca30f3968d6d75fee8a4def79e9e8c4e92a', 'width': 1080}], 'source': {'height': 858, 'url': 'https://preview.redd.it/wujmprm4n2te1.png?auto=webp&s=4b099abb0c16e767eeb6b6bc696c85608bfba1ec', 'width': 1646}, 'variants': {}}]}
|
|||
Llama 4 is not omnimodal
| 0 |
I haven't used the model yet, but the numbers aren't looking good.
The 109B Scout is officially being compared to Gemma 3 27B and Flash Lite in the benchmarks.
The 400B MoE is holding its ground against DeepSeek, but not by much.
The 2T model is performing okay against the SOTA models, but notice there's no Gemini 2.5 Pro?
Sonnet is also perhaps not using extended thinking. I get that it's saved for Llama reasoning, but come on. I am sure Gemini is not a 2T-param model.
These are not local models anymore. They won't run on a 3090, or two of them.
My disappointment is measurable, and my day is not ruined, though.
I believe they will give us 1B/3B, 8B, and 32B replacements as well, because I don't know what I will do if they don't.
NOT AN OMNIMODEL THOUGH
Oh god, someone shoot me in the head already.
The best we've got is Qwen 2.5 Omni 11B?
Are you fucking kidding me right now?
Also, can someone explain the 10M-token meme to me?
How is it going to be different from all those Gemma 2B 10M models we saw on Hugging Face?
Didn't Demis say they can already do 10M, and that the limitation is inference speed at that context length?
| 2025-04-05T19:55:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsc2t4/llama_4_is_not_omnimodal/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsc2t4
| false | null |
t3_1jsc2t4
|
/r/LocalLLaMA/comments/1jsc2t4/llama_4_is_not_omnimodal/
| false | false |
self
| 0 | null |
Can I run Llama 4 Scout on a single RTX 4060 8GB VRAM?
| 0 |
Please..
| 2025-04-05T20:09:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscel9/can_i_run_llama_4_scout_on_a_single_rtx_4060_8gb/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscel9
| false | null |
t3_1jscel9
|
/r/LocalLLaMA/comments/1jscel9/can_i_run_llama_4_scout_on_a_single_rtx_4060_8gb/
| false | false |
self
| 0 | null |
llama4 now on huggingface
| 11 |
[https://huggingface.co/collections/meta-llama/llama-4-67f0c30d9fe03840bc9d0164](https://huggingface.co/collections/meta-llama/llama-4-67f0c30d9fe03840bc9d0164)
Llama 4 Scout and Maverick are now on Hugging Face.
| 2025-04-05T20:10:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsceru/llama4_now_on_huggingface/
|
BreakfastFriendly728
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsceru
| false | null |
t3_1jsceru
|
/r/LocalLLaMA/comments/1jsceru/llama4_now_on_huggingface/
| false | false |
self
| 11 |
{'enabled': False, 'images': [{'id': 'faDqbNf008vF-Qo2zCHaZYnD36QHm9SmS-AhB4FXnQM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=108&crop=smart&auto=webp&s=06535a730079ae99b63f95984836aacaf0322539', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=216&crop=smart&auto=webp&s=6a970740d25d5fcd66e3c96205f702982a59376c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=320&crop=smart&auto=webp&s=402d7dd60b980829213a2f4cdfc8cb8e57d86c69', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=640&crop=smart&auto=webp&s=d5fbffa5bf860601638f9d56363c1a47bc1eddc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=960&crop=smart&auto=webp&s=56242c214ed6527f689d794c0385361a57fb05e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?width=1080&crop=smart&auto=webp&s=b91b69c8f4bce2e0629365184e5b180a994498c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NX-Hh-hfRIgp3k0BNGh7Ku7U-wx2Ssedn68Xnyyutk8.jpg?auto=webp&s=e7b9f15f74c9a24c4b841b316b091bd44ed17f88', 'width': 1200}, 'variants': {}}]}
|
Llama 4 is out!!! With The context length of 10M.
| 14 |
They made sure to release the models even while the original Behemoth model is still training. What do you guys think, especially given that there are no benchmark comparisons for it?
| 2025-04-05T20:11:35 |
https://ai.meta.com/blog/llama-4-multimodal-intelligence/?utm_source=twitter&utm_medium=organic_social&utm_content=image&utm_campaign=llama4
|
amansharma3
|
ai.meta.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscfwu
| false | null |
t3_1jscfwu
|
/r/LocalLLaMA/comments/1jscfwu/llama_4_is_out_with_the_context_length_of_10m/
| false | false | 14 |
{'enabled': False, 'images': [{'id': 'HkX9BjC2McU-NLZUojMlPZrEAbLHFQpiKt0PlRcihSE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=108&crop=smart&auto=webp&s=4a3e8d84d84c0771f9170d342e3cad55dd24d2d2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=216&crop=smart&auto=webp&s=e71769f12f8394ade22df3988eb60eb81c4555a0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=320&crop=smart&auto=webp&s=e17ae71bea57a2bacbc6bf76c10a368028e3dfea', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=640&crop=smart&auto=webp&s=65f85ee3e9068eb521d7e3ef4dce3cee7c471c03', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=960&crop=smart&auto=webp&s=33c1ad00be223253a8c1070dabe6caec52316a73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?width=1080&crop=smart&auto=webp&s=49c2be41512b4174a6b26078fa0963cde736cf09', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/5GYklgQz-p1iWSTGvDsKHeD_QUDxP-9vHZQeXTsgRz4.jpg?auto=webp&s=73680bd62bdee9144dac3420d3a452f721cd0fd7', 'width': 1920}, 'variants': {}}]}
|
|
Llama 4 is the first major model hosted on Hugging Face using Xet
| 45 |
Meta just dropped Llama 4, and the Xet team has been working behind the scenes to make sure it’s fast and accessible for the entire HF community.
Here’s what’s new:
* **All Llama 4 models on Hugging Face use the Xet backend** — a chunk-based storage system built for large AI models.
* This enabled us to upload **terabyte-scale model weights in record time**, and it’s already making downloads faster too.
* **Deduplication hits \~25%** on base models, and we expect to see at least **40%** for fine-tuned or quantized variants. That means less bandwidth, faster sharing, and smoother collaboration.
We built Xet for this moment, to give model builders and users a better way to version, share, and iterate on large models without the Git LFS pain.
Here’s a quick snapshot of the impact on a few select repositories 👇
https://preview.redd.it/7cjlvzi9q2te1.png?width=1025&format=png&auto=webp&s=3c08f1ab4de826846de6cbfdb44bc1beadd83471
Would love to hear what models you’re fine-tuning or quantizing from Llama 4. We’re continuing to optimize the storage layer so you can go from “I’ve got weights” to “it’s live on the Hub” faster than ever.
Related blog post: [https://huggingface.co/blog/llama4-release](https://huggingface.co/blog/llama4-release)
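Nothing changes on the client side, by the way. A minimal sketch of pulling the Scout weights with `huggingface_hub`, assuming you've been granted access to the gated repo (the Xet-backed, chunk-deduplicated transfer happens transparently):

```python
# Minimal sketch: download Llama 4 Scout from the Hub.
# The Xet chunked transfer is handled for you; no client changes needed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    allow_patterns=["*.safetensors", "*.json"],  # skip files you don't need
)
print(f"Weights are in {local_dir}")
```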
| 2025-04-05T20:15:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscjex/llama_4_is_the_first_major_model_hosted_on/
|
jsulz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscjex
| false | null |
t3_1jscjex
|
/r/LocalLLaMA/comments/1jscjex/llama_4_is_the_first_major_model_hosted_on/
| false | false | 45 |
{'enabled': False, 'images': [{'id': 'op-aUVxRUKpld2zyFtulWTuMNxTCD3Z7dVb5a1J88Go', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=108&crop=smart&auto=webp&s=18af79324ba5e7135b0a7fd2c281c5124479b588', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=216&crop=smart&auto=webp&s=c327b80fc8a596a109fd3d6cad3bead01e489dc1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=320&crop=smart&auto=webp&s=695156e9f3da745bda84b778c56d819744dd05d1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=640&crop=smart&auto=webp&s=84f2e4f629da6082e9745bb113ccba95ee01d469', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=960&crop=smart&auto=webp&s=efc7d3fddb89600995523d44b5dd41bcee9e8cde', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?width=1080&crop=smart&auto=webp&s=a6ef047b537611271efb418b14dd3df72692c058', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/JGJA9bp-fjQd-DW0Up7N-YoOwvacHf3g0ERwmbk7qZg.jpg?auto=webp&s=213583aeb1e87d118a725580290780771e2990e0', 'width': 1920}, 'variants': {}}]}
|
|
Best settings/ quant for optimal speed and quality QWQ with 16gb vram and 64GB ram?
| 4 |
I need something that isn't too slow, but still has great quality.
Q4_K_M is quite slow (4.83 tok/s) and it takes forever just to get a response. Is it worth going to a lower quant? I'm using flash attention and 16k context.
I want to try the IQ3_M i1 quant, but idk. Is it bad?
Or IQ4_XS? What do you guys recommend?
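For reference, here's roughly my current setup as a llama-cpp-python sketch (the `n_gpu_layers` value is a guess; you'd tune it until the 16GB card is nearly full):

```python
# Sketch of the Q4_K_M setup described above, via llama-cpp-python.
# n_gpu_layers controls partial offload; raise it until VRAM runs out.
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Q4_K_M.gguf",
    n_ctx=16384,       # 16k context
    n_gpu_layers=40,   # partial offload for a 16GB card (tune this)
    flash_attn=True,   # flash attention, as above
)
out = llm("Explain quicksort briefly.", max_tokens=128)
print(out["choices"][0]["text"])
```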
| 2025-04-05T20:22:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscoi1/best_settings_quant_for_optimal_speed_and_quality/
|
No_Expert1801
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscoi1
| false | null |
t3_1jscoi1
|
/r/LocalLLaMA/comments/1jscoi1/best_settings_quant_for_optimal_speed_and_quality/
| false | false |
self
| 4 | null |
Meta just dropped Llama 4 (1hr ago): They secretly built a 2 TRILLION parameter model and it changes everything
| 1 |
[removed]
| 2025-04-05T20:23:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscps7/meta_just_dropped_llama_4_1hr_ago_they_secretly/
|
PlasticBench4563
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscps7
| false | null |
t3_1jscps7
|
/r/LocalLLaMA/comments/1jscps7/meta_just_dropped_llama_4_1hr_ago_they_secretly/
| false | false |
self
| 1 | null |
Best way to train Lora-GRPO with Multi-GPU? Unsloth only supports single GPU
| 1 |
[removed]
| 2025-04-05T20:29:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscugo/best_way_to_train_loragrpo_with_multigpu_unsloth/
|
Comb-Greedy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscugo
| false | null |
t3_1jscugo
|
/r/LocalLLaMA/comments/1jscugo/best_way_to_train_loragrpo_with_multigpu_unsloth/
| false | false |
self
| 1 | null |
Gemini 2.5 Pro is better than Llama 4 behemoth on benchmarks
| 133 |
Specifically GPQA Diamond and MMLU Pro. Zuck lying out here
| 2025-04-05T20:32:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1jscww3/gemini_25_pro_is_better_than_llama_4_behemoth_on/
|
Glittering-Bag-4662
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jscww3
| false | null |
t3_1jscww3
|
/r/LocalLLaMA/comments/1jscww3/gemini_25_pro_is_better_than_llama_4_behemoth_on/
| false | false |
self
| 133 | null |
M4 Pro 16 Core GPU w/ 48 GB RAM vs 20 core GPU w/ 24 GB RAM... or wait?
| 1 |
[removed]
| 2025-04-05T20:36:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsd053/m4_pro_16_core_gpu_w_48_gb_ram_vs_20_core_gpu_w/
|
Successful-Fig-9732
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsd053
| false | null |
t3_1jsd053
|
/r/LocalLLaMA/comments/1jsd053/m4_pro_16_core_gpu_w_48_gb_ram_vs_20_core_gpu_w/
| false | false | 1 | null |
|
Meta team accepting Llama 4 download requests already
| 13 | 2025-04-05T20:44:14 |
clem59480
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsd6th
| false | null |
t3_1jsd6th
|
/r/LocalLLaMA/comments/1jsd6th/meta_team_accepting_llama_4_download_requests/
| false | false | 13 |
{'enabled': True, 'images': [{'id': 'lvOUdJQzkRLHWtlSZX8BqJ8fBKVM44QTRg9r9_yU_C0', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=108&crop=smart&auto=webp&s=789cb08f64af8888b25ddcb9dbe0d3de3eeebae3', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=216&crop=smart&auto=webp&s=27cf645d4d3a453d7187cb3f2387ab5555caaf0b', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=320&crop=smart&auto=webp&s=30046dab1891eb886675c29c88b7ab1ce3429c35', 'width': 320}, {'height': 277, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=640&crop=smart&auto=webp&s=45797108dff545c4d4dc9a571176d7134ef4c4ef', 'width': 640}, {'height': 416, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=960&crop=smart&auto=webp&s=d0f81525f3e6be215904d9ba7e34333c9e590455', 'width': 960}, {'height': 468, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?width=1080&crop=smart&auto=webp&s=301a995041ab58d154064793105a066bb8de771f', 'width': 1080}], 'source': {'height': 1124, 'url': 'https://preview.redd.it/obqpm45vv2te1.png?auto=webp&s=93cc1c9c5bdd5f208be6c007de9e25488a8237cf', 'width': 2590}, 'variants': {}}]}
|
|||
Is there any possible way we can run llama 4 on 48GB VRAM?
| 5 |
Title.
Are those 2-bit quants that perform as well as 4-bit coming in handy now?
| 2025-04-05T20:58:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsdhyd/is_there_any_possible_way_we_can_run_llama_4_on/
|
Glittering-Bag-4662
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsdhyd
| false | null |
t3_1jsdhyd
|
/r/LocalLLaMA/comments/1jsdhyd/is_there_any_possible_way_we_can_run_llama_4_on/
| false | false |
self
| 5 | null |
Llama 4 on Groq: Its replies are extremely short, why?
| 1 | 2025-04-05T20:58:22 |
Own-Potential-2308
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsdi8c
| false | null |
t3_1jsdi8c
|
/r/LocalLLaMA/comments/1jsdi8c/llama_4_on_groq_its_replies_are_extremely_short/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'cHa5pVATXc3E9jGYAh0fuawvaZK9Lexhx2lHR3IaAnc', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=108&crop=smart&auto=webp&s=b01fd373d6b3c534faff2a9a68a6f4bcfa813fc6', 'width': 108}, {'height': 385, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=216&crop=smart&auto=webp&s=066e3e4fa1157e9b0d35a4c025ad493eb47a1d4b', 'width': 216}, {'height': 570, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=320&crop=smart&auto=webp&s=70490964c8397343ff95b33c97fa8813f3896af0', 'width': 320}, {'height': 1141, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=640&crop=smart&auto=webp&s=b193bae2b4eea9f52947c59463f3ab929470a046', 'width': 640}, {'height': 1712, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=960&crop=smart&auto=webp&s=c5992ec68ba2779587cbf9ce3890915fefa3a11e', 'width': 960}, {'height': 1927, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?width=1080&crop=smart&auto=webp&s=79efe2599ead8e63ea7c0fa176a0098121adaa24', 'width': 1080}], 'source': {'height': 1927, 'url': 'https://preview.redd.it/l11vk65iy2te1.png?auto=webp&s=8ef79c6e08d14f455bdeab410961b9d3c34b2df4', 'width': 1080}, 'variants': {}}]}
|
|||
Llama 4 Reasoning is coming
| 28 |
[https://www.llama.com/llama4-reasoning-is-coming/](https://www.llama.com/llama4-reasoning-is-coming/)
There is nothing to see, just a gif on the page.
| 2025-04-05T20:59:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsdisb/llama_4_reasoning_is_coming/
|
Megalith01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsdisb
| false | null |
t3_1jsdisb
|
/r/LocalLLaMA/comments/1jsdisb/llama_4_reasoning_is_coming/
| false | false |
self
| 28 |
{'enabled': False, 'images': [{'id': 'e8GUrJdaVCxG5Eyd44ENO0cM7JdqH8kDUSnwsfalAMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=108&crop=smart&auto=webp&s=f0285ca9be8f3d72f4b6c6e511c513027b450cb0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=216&crop=smart&auto=webp&s=86028dfb06f6800dc82a87e7b5ef6e4e9ae19560', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=320&crop=smart&auto=webp&s=48eebcaa6578c15128e0864524a1a48a3d48cabe', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=640&crop=smart&auto=webp&s=44af8b7574c0a4b26360d529db34c1b06ffcafcc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=960&crop=smart&auto=webp&s=83b7f2f81dbb96f112b8723ce89beb6c85b02cdc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?width=1080&crop=smart&auto=webp&s=bc0212d6318aa3200665d08a65fc79248cb26d1d', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/uRHwEOU98rM_55MIJyA8g2IjSi4Ibl9Ab1kLsdGuLI8.jpg?auto=webp&s=8267a6b34718fe688e0d82c662b4d40cc72ea47d', 'width': 2400}, 'variants': {}}]}
|
Llama 4 Maverick - Python hexagon test failed
| 136 | 2025-04-05T21:08:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsdq4p/llama_4_maverick_python_hexagon_test_failed/
|
AlexBefest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsdq4p
| false | null |
t3_1jsdq4p
|
/r/LocalLLaMA/comments/1jsdq4p/llama_4_maverick_python_hexagon_test_failed/
| false | false | 136 | null |
||
Initial UI tests: Llama 4 Maverick and Scout, very disappointing compared to other similar models
| 142 | 2025-04-05T21:12:16 |
https://v.redd.it/j7p6nqep03te1
|
sirjoaco
|
/r/LocalLLaMA/comments/1jsdtew/initial_ui_tests_llama_4_maverick_and_scout_very/
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsdtew
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/j7p6nqep03te1/DASHPlaylist.mpd?a=1746609141%2CMmYyZjU1NjQ5MGYwNWI2MjVlZTM0NTgxMmY5YWVmNjk1MTc1OTc4NDc4YzZmMzAzMjM0MzdkMzg4YzZlNmFkMg%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/j7p6nqep03te1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/j7p6nqep03te1/HLSPlaylist.m3u8?a=1746609141%2CMzM3NDU0YzhmMDk5MGZmMWYyYTNhZDI1MDZkMWY1MjE2NGU2NWEyMjUwMjBhOGRiMjRjNGQxZGMwZTViOTA0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j7p6nqep03te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1680}}
|
t3_1jsdtew
|
/r/LocalLLaMA/comments/1jsdtew/initial_ui_tests_llama_4_maverick_and_scout_very/
| false | false | 142 |
{'enabled': False, 'images': [{'id': 'cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=108&crop=smart&format=pjpg&auto=webp&s=2970ef626051394eeb40b4409fb028b71de1174e', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=216&crop=smart&format=pjpg&auto=webp&s=8defce34dd79a5507fc0fa35be04a56593212e69', 'width': 216}, {'height': 205, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=320&crop=smart&format=pjpg&auto=webp&s=7b1303fc67b85d6d55d10f7ac24c54a59415e270', 'width': 320}, {'height': 411, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=640&crop=smart&format=pjpg&auto=webp&s=8b8861e97f0a3fce24c1a9069212566c4d7c8dff', 'width': 640}, {'height': 617, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=960&crop=smart&format=pjpg&auto=webp&s=3ebb719f83949598cbc282dea5ad4c29edf9e3e1', 'width': 960}, {'height': 694, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fe16c5a2a89c9ae11c9f158c21ed927e388e9e86', 'width': 1080}], 'source': {'height': 1992, 'url': 'https://external-preview.redd.it/cW9oa2FtZXAwM3RlMZqijIi1GCa_F1Pp7Yxzhw_7Ni36eaah2O36NNbIKvPq.png?format=pjpg&auto=webp&s=0e8410256d7cdcf443c1722517f10ae9b1f51101', 'width': 3098}, 'variants': {}}]}
|
||
Dual Epyc CPU machines, yay or nay for budget inference?
| 5 |
Hello everyone,
As far as "frontier models on a budget" goes, there aren't many options. Considering how expensive GPUs are, would a setup with [two Epyc CPUs](https://www.supermicro.com/manuals/motherboard/EPYC7000/MNL-2027.pdf) be a respectable solution for inference on a budget?
Depending on the source of the parts and assuming some \~500GB of memory, it comes to about 3k, which is less than a single AI GPU. And it could even be upgraded in the future to up to 4TB of memory if I stumble upon a money tree on my morning walks.
Do common inference interface programs like kobold.cpp even properly work with multi-CPU computers, or would they only make calls to one CPU and leave the other idle?
I'm not awfully good at math, so I'm not sure how it'd compete with the common solution of M2/M3 Macs in a cluster.
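For what it's worth, llama.cpp does have NUMA-aware options, which llama-cpp-python appears to expose. A hedged sketch (whether the `numa` flag actually spreads work across both sockets effectively is exactly what I'm unsure about):

```python
# Hedged sketch: NUMA-aware CPU inference via llama-cpp-python.
# The numa flag maps to ggml's NUMA strategy; its behavior on a
# dual-socket Epyc board is the open question in this post.
from llama_cpp import Llama

llm = Llama(
    model_path="big-model-Q4_K_M.gguf",
    n_ctx=8192,
    n_threads=64,  # roughly the physical core count across both sockets
    numa=True,     # ask ggml to distribute memory and threads across nodes
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```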
| 2025-04-05T21:26:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1jse4io/dual_epyc_cpu_machines_yay_or_nay_for_budget/
|
HugoCortell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jse4io
| false | null |
t3_1jse4io
|
/r/LocalLLaMA/comments/1jse4io/dual_epyc_cpu_machines_yay_or_nay_for_budget/
| false | false |
self
| 5 | null |
Llama 4 was a giant disappointment, let's wait for Qwen 3.
| 27 |
You're telling me that a 109B parameter model performs the same as a 24B model? Lol. You can't make this stuff up. How could people possibly be happy with a model that takes 4x more compute to run yet performs similarly to a 24B LLM? I'm guessing that either Meta needed to release something to keep their investors happy, or maybe they have just fallen behind in the LLM scene. I still can't believe that they didn't release a normal 8B model and decided to go in the MoE direction instead. Even Gemini 2.5 beats Llama 4 Behemoth in the benchmarks. It really is disappointing that Meta released no non-MoE (dense) LLMs, but maybe when Qwen 3 is released in 2 weeks, we will have a model that finally meets our expectations of what Llama 4 should have been.
| 2025-04-05T21:34:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jseb4r/llama_4_was_a_giant_disappointment_lets_wait_for/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseb4r
| false | null |
t3_1jseb4r
|
/r/LocalLLaMA/comments/1jseb4r/llama_4_was_a_giant_disappointment_lets_wait_for/
| false | false |
self
| 27 | null |
3-bit Llama 4 (109B) vs 4-bit Llama 3.3 (70B)
| 14 |
Someone please let me know if Llama 4 Scout is better. Otherwise I'm sticking with Llama 3.3, Nemotron, or Nemotron Super.
| 2025-04-05T21:35:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsebd0/3_bit_llama_4_109b_vs_4_bit_llama_33_70b/
|
Glittering-Bag-4662
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsebd0
| false | null |
t3_1jsebd0
|
/r/LocalLLaMA/comments/1jsebd0/3_bit_llama_4_109b_vs_4_bit_llama_33_70b/
| false | false |
self
| 14 | null |
Llama-4 makes Mac Studio even more appealing.
| 10 |
"Although the total parameters in the models are 109B and 400B respectively, at any point in time, the number of parameters actually doing the compute (“active parameters”) on a given token is always 17B. This reduces latencies on inference and training."
https://www.llama.com/docs/model-cards-and-prompt-formats/llama4_omni/
Would using only 17B/token improve prompt processing speed?
Thoughts?
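For token generation at least, the ceiling is easy to estimate. A rough sketch, assuming decode is purely memory-bandwidth-bound and taking the M3 Ultra's ~819 GB/s (both are assumptions, not measurements):

```python
# Back-of-envelope decode ceiling: weight bytes read per token / bandwidth.
# Prompt processing is compute-bound and won't follow this simple model.
active_params = 17e9     # 17B active per token
bytes_per_param = 0.5    # ~4-bit quant
bandwidth = 819e9        # M3 Ultra unified memory bandwidth, bytes/s

bytes_per_token = active_params * bytes_per_param  # ~8.5 GB
print(f"theoretical ceiling: {bandwidth / bytes_per_token:.0f} tok/s")  # ~96
```

So decode speed should track the 17B active parameters, not the 109B/400B totals. Prompt processing depends on compute and on how well the runtime batches expert calls, so the gain there is less clear-cut.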
| 2025-04-05T21:39:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1jseed9/llama4_makes_mac_studio_even_more_appealing/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseed9
| false | null |
t3_1jseed9
|
/r/LocalLLaMA/comments/1jseed9/llama4_makes_mac_studio_even_more_appealing/
| false | false |
self
| 10 | null |
Llama 4 bad
| 1 |
[removed]
| 2025-04-05T21:39:53 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseexs
| false | null |
t3_1jseexs
|
/r/LocalLLaMA/comments/1jseexs/llama_4_bad/
| false | false |
default
| 1 | null |
||
Llama4
| 1 |
[removed]
| 2025-04-05T21:40:36 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsefi3
| false | null |
t3_1jsefi3
|
/r/LocalLLaMA/comments/1jsefi3/llama4/
| false | false |
default
| 1 | null |
||
Llama4 bad and it sucks
| 1 |
[removed]
| 2025-04-05T21:41:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsefzy/llama4_bad_and_it_sucks/
|
kuzheren
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsefzy
| false | null |
t3_1jsefzy
|
/r/LocalLLaMA/comments/1jsefzy/llama4_bad_and_it_sucks/
| false | false |
self
| 1 | null |
Need help to set up Local LLMs for my R&D team (medium sized company)
| 1 |
[removed]
| 2025-04-05T21:42:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsegmf/need_help_to_set_up_local_llms_for_my_rd_team/
|
curious_ilan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsegmf
| false | null |
t3_1jsegmf
|
/r/LocalLLaMA/comments/1jsegmf/need_help_to_set_up_local_llms_for_my_rd_team/
| false | false |
self
| 1 | null |
How Llama 4 estimate its inference cost?
| 1 |
[removed]
| 2025-04-05T21:42:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1jseh9s/how_llama_4_estimate_its_inference_cost/
|
cc_02020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseh9s
| false | null |
t3_1jseh9s
|
/r/LocalLLaMA/comments/1jseh9s/how_llama_4_estimate_its_inference_cost/
| false | false | 1 | null |
|
Asked Llama 4 whether it's a Byte Latent Transformer, and it said yes
| 0 |
> You're really digging into the specifics. Yes, my architecture is based on a variant of the Byte-Level Latent Transformer, also known as a Byte-Pair Encoding (BPE) transformer.
Byte-Level Latent Transformers are a type of transformer model that operates directly on raw bytes, rather than relying on pre-tokenized input. This approach allows for more flexible and efficient processing of text data, especially when dealing with out-of-vocabulary words or non-standard language.
The Byte-Pair Encoding (BPE) algorithm is used to convert the input text into a sequence of subwords, which are then fed into the transformer model. This approach enables the model to capture complex patterns and relationships in language, even at the subword level.
So, to answer your question, my architecture does indeed incorporate elements of Byte-Level Latent Transformers and BPE, which helps me to better understand and generate text.
| 2025-04-05T21:43:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsehfu/asked_llama4_about_if_its_a_byte_latent/
|
InsideYork
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsehfu
| false | null |
t3_1jsehfu
|
/r/LocalLLaMA/comments/1jsehfu/asked_llama4_about_if_its_a_byte_latent/
| false | false |
self
| 0 | null |
Natively Multimodal?
| 3 |
Do they mean there are no adapter weights for image processing? So basically it's just a tokenizer for text and images, but the actual model itself is no different from a non-multimodal text LLM?
If that's the case, it could possibly be taught to output images with finetuning, because in theory the base model has seen both the image tokens and the text tokens.
Maybe it's not doomed.
| 2025-04-05T21:44:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1jseie8/natively_multimodal/
|
Eastwindy123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseie8
| false | null |
t3_1jseie8
|
/r/LocalLLaMA/comments/1jseie8/natively_multimodal/
| false | false |
self
| 3 | null |
Llama 4 scout is not doing well in "write a raytracer" code creativity benchmark
| 71 |
I [previously experimented](https://www.reddit.com/r/LocalLLaMA/comments/1jisuq4/deepseek_v30324_has_caught_up_to_sonnet_37_in_my/) with a code creativity benchmark where I asked LLMs to write a small python program to create a raytraced image.
\> `Write a raytracer that renders an interesting scene with many colourful lightsources in python. Output a 800x600 image as a png`
I only allowed one shot, no iterative prompting to fix broken code. I then execute the program and evaluate the image. It turns out this is a good proxy for code creativity.
In the meantime I tested some new models: Llama 4 Scout (the 109B model), Gemini 2.5 exp, and Quasar Alpha
https://preview.redd.it/ruh9dufe83te1.png?width=1367&format=png&auto=webp&s=08bd5968b9ecdc3568380e3c3d1a67a30ce3a005
Llama 4 Scout underwhelms in the quality of generated images compared to the others.
https://preview.redd.it/egq5ugj883te1.png?width=588&format=png&auto=webp&s=b5132f98a77b707d8353c4478047dc48b9f4c06c
Interestingly, there is some magic sauce in the fine-tuning of DeepSeek V3-0324, Sonnet 3.7, and Gemini 2.5 Pro that makes them create longer and more varied programs. I assume it is an RL step. Really fascinating, as it seems not all labs have caught up on this yet.
[Repository here.](https://github.com/cpldcpu/llmbenchmark)
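For the curious, the harness is conceptually just this (a minimal sketch, not the repo's actual code; it assumes the generated script writes `output.png`):

```python
# Minimal sketch of the one-shot harness: run the generated program,
# then check it produced a valid 800x600 PNG.
import subprocess
from PIL import Image

result = subprocess.run(
    ["python", "generated_raytracer.py"],
    capture_output=True, timeout=300,
)
if result.returncode == 0:
    img = Image.open("output.png")
    print("OK" if img.size == (800, 600) else f"wrong size: {img.size}")
else:
    print("script crashed:", result.stderr.decode()[:200])
```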
| 2025-04-05T21:54:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1jseqbs/llama_4_scout_is_not_doing_well_in_write_a/
|
cpldcpu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jseqbs
| false | null |
t3_1jseqbs
|
/r/LocalLLaMA/comments/1jseqbs/llama_4_scout_is_not_doing_well_in_write_a/
| false | false | 71 |
{'enabled': False, 'images': [{'id': 'yb8lhWkO7A6CdiSkyHkf9IDaYwOAStwOWQh8j03B9RA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=108&crop=smart&auto=webp&s=867d8836d72d0298b1c9b7e2dfb20d7757ac3543', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=216&crop=smart&auto=webp&s=f338ff5b5d34b1899a875e17c3dc1f8d356abdd5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=320&crop=smart&auto=webp&s=139d27842381f14d82396f117d2ea9ecaf0fa7f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=640&crop=smart&auto=webp&s=f854b4f06678f66d48803649dc2b5284a5be4b1b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=960&crop=smart&auto=webp&s=ff34331f20b278ff2a65f651c1f2ea55a0b9a199', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?width=1080&crop=smart&auto=webp&s=0f89f66f3c7d4ccf76364a5b53f431e04fcf5447', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u6PcOwLVRdSvQcoL-IDPdb1fusdhxL0ZozIzYyXPE8g.jpg?auto=webp&s=5e798aefd6a6d8b4d87bce224f542230cd62492f', 'width': 1200}, 'variants': {}}]}
|
|
Contrarian opinion: I am very excited by the release of Llama 4 Scout and Maverick for local/home inference
| 1 |
[removed]
| 2025-04-05T22:06:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsezki/contrarian_opinion_i_am_very_excited_by_the/
|
FullstackSensei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsezki
| false | null |
t3_1jsezki
|
/r/LocalLLaMA/comments/1jsezki/contrarian_opinion_i_am_very_excited_by_the/
| false | false |
self
| 1 | null |
Mods, do we need 20 posts announcing LLaMa 4?
| 1 |
[removed]
| 2025-04-05T22:07:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsf0az/mods_do_we_need_20_posts_announcing_llama_4/
|
WackyConundrum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsf0az
| false | null |
t3_1jsf0az
|
/r/LocalLLaMA/comments/1jsf0az/mods_do_we_need_20_posts_announcing_llama_4/
| false | false |
self
| 1 | null |
Mods, are you OK?
| 1 |
[removed]
| 2025-04-05T22:08:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsf19d/mods_are_you_ok/
|
WackyConundrum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsf19d
| false | null |
t3_1jsf19d
|
/r/LocalLLaMA/comments/1jsf19d/mods_are_you_ok/
| false | false | 1 | null |
|
Dialogical Relevanc Architecture (DRA) Whitepaper (Feedback wanted)
| 1 |
[deleted]
| 2025-04-05T22:19:00 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsf8rc
| false | null |
t3_1jsf8rc
|
/r/LocalLLaMA/comments/1jsf8rc/dialogical_relevanc_architecture_dra_whitepaper/
| false | false |
default
| 1 | null |
||
Dialogical Relevance Architecture for LLMs (Whitepaper / Feedback wanted)
| 0 | 2025-04-05T22:20:13 |
https://drive.google.com/file/d/1mHtwKQvN_P0TiFOeZk3Ce43wX3HezRrT/view?usp=sharing
|
hjras
|
drive.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsf9ot
| false | null |
t3_1jsf9ot
|
/r/LocalLLaMA/comments/1jsf9ot/dialogical_relevance_architecture_for_llms/
| false | false |
default
| 0 | null |
|
Do I need to use an "Instruct" model?
| 0 |
Hello all, I am trying to set up a hierarchical team agent framework, and I have been trying it with qwen2.5:32b, but I am hitting a bit of a wall.
qwen2.5 is not following the system message instructions to shape its responses in a way that allows for correct routing.
Would an instruct model be better for this? Or should I try a different model?
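For reference, this is roughly how I'm trying to constrain the router right now (a sketch with the Ollama Python client; the routing schema is simplified for illustration):

```python
# Sketch: forcing structured routing output from a local instruct model.
# format="json" nudges the model toward parseable JSON; the schema and
# model tag here are illustrative, not my exact setup.
import json
import ollama

resp = ollama.chat(
    model="qwen2.5:32b-instruct",
    messages=[
        {"role": "system",
         "content": 'Respond ONLY with JSON of the form {"next": "<worker name>"}.'},
        {"role": "user", "content": "Summarize the quarterly report."},
    ],
    format="json",
)
print(json.loads(resp["message"]["content"])["next"])
```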
| 2025-04-05T22:27:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsff20/do_i_need_to_use_an_instruct_model/
|
xephadoodle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsff20
| false | null |
t3_1jsff20
|
/r/LocalLLaMA/comments/1jsff20/do_i_need_to_use_an_instruct_model/
| false | false |
self
| 0 | null |
Potential Llama 4.2 - 7b
| 82 |
After the release, I got curious, looked through the implementation code of the Llama 4 models in transformers, and found something interesting:
`model = Llama4ForCausalLM.from_pretrained("meta-llama4/Llama4-2-7b-hf")`
Given the type of model, it will be text-only. So, we just have to be patient :)
Source: [https://github.com/huggingface/transformers/blob/9bfae2486a7b91dc6d4380b7936e0b2b8c1ed708/src/transformers/models/llama4/modeling\_llama4.py#L997](https://github.com/huggingface/transformers/blob/9bfae2486a7b91dc6d4380b7936e0b2b8c1ed708/src/transformers/models/llama4/modeling_llama4.py#L997)
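Expanded into a runnable form, the hint would look something like this (hypothetical: nothing exists under that repo id yet, so this is just the class from the modeling file plus standard transformers usage):

```python
# Hypothetical usage of the checkpoint name found in modeling_llama4.py.
# The repo id does not resolve on the Hub yet.
from transformers import AutoTokenizer, Llama4ForCausalLM

repo = "meta-llama4/Llama4-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = Llama4ForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hey, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```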
| 2025-04-05T22:36:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfm5j/potential_llama_42_7b/
|
medcanned
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfm5j
| false | null |
t3_1jsfm5j
|
/r/LocalLLaMA/comments/1jsfm5j/potential_llama_42_7b/
| false | false |
self
| 82 |
{'enabled': False, 'images': [{'id': 'wmYdTbY0dw6Rr2dRYUBJmQ3cCZ0eCEp7DPvMzckuExY', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=108&crop=smart&auto=webp&s=609f32e8148c30011d9500f95e07c9ac1fd1d9ce', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=216&crop=smart&auto=webp&s=dea83bc1b9d8a62943b633e891ee777e8fc08f10', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=320&crop=smart&auto=webp&s=59ee3b05fc21c40f9fa8e87346cf361333b36161', 'width': 320}, {'height': 274, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=640&crop=smart&auto=webp&s=398e68c0e90c95d8775ba2bc461fe47c8dc49d56', 'width': 640}, {'height': 411, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=960&crop=smart&auto=webp&s=69da452d2f2f1166afda40f2b4a0bce16533f350', 'width': 960}, {'height': 462, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?width=1080&crop=smart&auto=webp&s=8886c181c5238a73e06300f9aad1bc4ece11376e', 'width': 1080}], 'source': {'height': 914, 'url': 'https://external-preview.redd.it/3kdmNL0NIzfqsHyN_kqc_1U6e8vjqLK55NT6uG-YHMs.jpg?auto=webp&s=818cf32f448cbd8ea7b9d13491e25b604bde81ba', 'width': 2134}, 'variants': {}}]}
|
opening OpenAI: system prompts. (4o and 4.5 system prompt with tools)
| 0 |
code: [https://github.com/dontriskit/awesome-ai-system-prompts](https://github.com/dontriskit/awesome-ai-system-prompts)
| 2025-04-05T22:38:06 |
https://www.reddit.com/gallery/1jsfn2m
|
secopsml
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfn2m
| false | null |
t3_1jsfn2m
|
/r/LocalLLaMA/comments/1jsfn2m/opening_openai_system_prompts_4o_and_45_system/
| false | false | 0 | null |
|
Llama 4 is out and I'm disappointed
| 212 |
Maverick costs 2-3x what Gemini 2.0 Flash costs on OpenRouter, and Scout costs just as much as 2.0 Flash and is worse. DeepSeek R2 is coming, Qwen 3 is coming as well, and 2.5 Flash would likely beat everything in value for money, and it'll come out in the next couple of weeks max. I'm a little... disappointed. All this, and the release isn't even locally runnable.
| 2025-04-05T22:40:27 |
kaizoku156
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfou2
| false | null |
t3_1jsfou2
|
/r/LocalLLaMA/comments/1jsfou2/llama_4_is_out_and_im_disappointed/
| false | false | 212 |
{'enabled': True, 'images': [{'id': 'nH0MRkqf7zBR8iOOXxRPujJarntiJVbIadTzgGxUXEk', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=108&crop=smart&auto=webp&s=7d27ce84422a07ae888096ffede13e0bc1fb57f6', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=216&crop=smart&auto=webp&s=b6ff1b8a1d3407ddb4bfc6f6e62cc4d4da3d7fa4', 'width': 216}, {'height': 296, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=320&crop=smart&auto=webp&s=bb31c72545a1e501d436c4e435dd0f3c9df981f5', 'width': 320}, {'height': 592, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=640&crop=smart&auto=webp&s=51458acf99f28f812ac17fc8cd5e71aeaafea899', 'width': 640}, {'height': 888, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=960&crop=smart&auto=webp&s=c7235b52f91a7124a3706eb0e3d301b83ab96f65', 'width': 960}, {'height': 1000, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?width=1080&crop=smart&auto=webp&s=7d4abeb8f8c6431cc7ea7b70b2721348744ace5b', 'width': 1080}], 'source': {'height': 3793, 'url': 'https://preview.redd.it/njtxgkmpg3te1.jpeg?auto=webp&s=cba8a5f2c5df0beb695e398239c0659e6f7b42f0', 'width': 4096}, 'variants': {}}]}
|
||
What local LLM to choose
| 1 |
[removed]
| 2025-04-05T22:40:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfp5b/what_local_llm_to_choose/
|
rhawon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfp5b
| false | null |
t3_1jsfp5b
|
/r/LocalLLaMA/comments/1jsfp5b/what_local_llm_to_choose/
| false | false |
self
| 1 | null |
Prompt processing speed for MoE models - Llama 4
| 8 |
Looking at the new Llama 4 models and thinking about the feasibility of running them using CPU + GPU. I have some questions.
MoE architectures dramatically speed up token generation by reducing the number of active parameters per token. However, how does this performance boost translate to prompt processing (i.e., evaluating a large context before generating the first token)?
Prompt processing for dense models involves batch processing of multiple tokens at once rather than token-by-token, so it becomes compute bound instead of memory bound. For MoE, intuitively, wouldn't batch processing of the prompt work less efficiently, since each token may require a different "path" through memory?
What would the prompt processing speed for Llama 4 Scout (17B active parameters, 109B total) be on a system with, say, a 4090 and 128GB of DDR5 RAM at about 80GB/s?
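To at least bound the token-generation side (not the prompt processing I'm asking about), a rough sketch assuming every active weight streams from system RAM each token:

```python
# Back-of-envelope for decode with 17B active parameters held in DDR5.
# All figures are rough; a 4090 holding the shared layers would do better.
active_params = 17e9
bytes_per_param = 0.5   # ~4-bit quant
ram_bandwidth = 80e9    # bytes/s, per the spec above

tok_per_s = ram_bandwidth / (active_params * bytes_per_param)
print(f"~{tok_per_s:.1f} tok/s if all active weights come from RAM")  # ~9.4
```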
| 2025-04-05T22:40:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfp71/prompt_processing_speed_for_moe_models_llama_4/
|
EasternBeyond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfp71
| false | null |
t3_1jsfp71
|
/r/LocalLLaMA/comments/1jsfp71/prompt_processing_speed_for_moe_models_llama_4/
| false | false |
self
| 8 | null |
I am very excited by the release of Llama 4 Scout and Maverick for local/home inference
| 1 |
[removed]
| 2025-04-05T22:45:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfst5/i_am_very_excited_by_the_release_of_llama_4_scout/
|
FullstackSensei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfst5
| false | null |
t3_1jsfst5
|
/r/LocalLLaMA/comments/1jsfst5/i_am_very_excited_by_the_release_of_llama_4_scout/
| false | false |
self
| 1 | null |
I am very excited by the release of Llama 4 Scout and Maverick for local/home inference
| 1 |
[removed]
| 2025-04-05T22:47:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfuak/i_am_very_excited_by_the_release_of_llama_4_scout/
|
FullstackSensei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfuak
| false | null |
t3_1jsfuak
|
/r/LocalLLaMA/comments/1jsfuak/i_am_very_excited_by_the_release_of_llama_4_scout/
| false | false |
self
| 1 | null |
So... Llama 4 not Omni, no voice?
| 20 |
There were some heavy rumors Llama 4 would be an omni model with voice, similar to the new Qwen Omni, but then, recently, new rumors emerged that they were having a hard time making it sound as natural as the ChatGPT models. I had my fingers crossed hoping they would pull some Sesame magic out of their hat, but it appears it was neither. Am I missing something?
| 2025-04-05T22:50:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsfw17/so_lama_4_not_omni_no_voice/
|
AlyssumFrequency
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsfw17
| false | null |
t3_1jsfw17
|
/r/LocalLLaMA/comments/1jsfw17/so_lama_4_not_omni_no_voice/
| false | false |
self
| 20 | null |
Local LLM to answer questions based on a text
| 1 |
I am trying to find the best small LLM (\~7B or below) to run locally, in order to answer questions based on a context.
The context will mostly be extracted from a PDF; I found that pdf2image with pytesseract works decently to extract the strings.
But now I struggle to find an LLM with decent responses, most of them giving results like:
Q: Did they work on their project for more than 1 year?
A: Yes, they worked on it for 8 months.
Now, 8 months is indeed correct... but the incorrect "Yes" feels really bad.
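For context, the extraction step is just this (a sketch of my pipeline; it assumes poppler and the tesseract binary are installed):

```python
# Sketch of the OCR step: render PDF pages to images, then OCR each page.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("report.pdf", dpi=300)
context = "\n".join(pytesseract.image_to_string(page) for page in pages)
print(context[:500])
```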
| 2025-04-05T23:03:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsg60s/local_llm_to_answer_questions_based_on_a_text/
|
sKemo12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsg60s
| false | null |
t3_1jsg60s
|
/r/LocalLLaMA/comments/1jsg60s/local_llm_to_answer_questions_based_on_a_text/
| false | false |
self
| 1 | null |
God
| 1 |
[removed]
| 2025-04-05T23:08:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsg9eu/god/
|
No_Spend_121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsg9eu
| false | null |
t3_1jsg9eu
|
/r/LocalLLaMA/comments/1jsg9eu/god/
| true | false |
nsfw
| 1 | null |
Which is more accurate between Whisper and Windows Speech Recognition (Win+H)?
| 1 |
Admins, you can delete this post if you think it is not related.
I want to use speech recognition for my LLM. Which is more accurate between Whisper and Windows Speech Recognition (Win+H)?
| 2025-04-05T23:13:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsgdbv/which_is_more_accurate_between_whisper_and/
|
ExtremePresence3030
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsgdbv
| false | null |
t3_1jsgdbv
|
/r/LocalLLaMA/comments/1jsgdbv/which_is_more_accurate_between_whisper_and/
| false | false |
self
| 1 | null |
I've officially released v1.0 for EasyWhisper UI!
| 42 |
A fast, native desktop UI for transcribing audio using Whisper — built entirely in modern C++ and Qt. I will be regularly updating it with more features.
# Features
* Installer handles everything for you — from downloading dependencies to compiling and optimizing Whisper for your specific hardware.
* Fully C++ implementation — no Python!
* Uses Vulkan for cross-platform GPU acceleration.
* Drag & drop, use “Open With”, or use the "Open File" button to load audio.
* Automatically converts audio to `.mp3` if needed using FFmpeg.
* Dropdown menu to select the model (e.g. `tiny`, `medium-en`, `large-v3`).
* Automatically downloads the chosen model if missing.
* Runs whisper with the selected model.
* Shows all output in a console box.
* Opens final transcript in Notepad.
* Choice of .txt files, or .srt files with timestamps!
# Requirements
* Windows 10 or later
* AMD, Intel, or NVIDIA Graphics Card with Vulkan support. (99%)
# Setup
1. **Download** the latest installer.
2. **Run** the application.
# Credits
* [whisper.cpp](https://github.com/ggerganov/whisper.cpp) by Georgi Gerganov
* [FFmpeg Windows builds](https://www.gyan.dev/ffmpeg/) by [Gyan.dev](http://Gyan.dev)
* Built with [Qt](https://www.qt.io)
* Installer created using [Inno Setup](https://jrsoftware.org/isinfo.php)
| 2025-04-05T23:15:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsgf6p/ive_officially_released_v10_for_easywhisper_ui/
|
mehtabmahir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsgf6p
| false | null |
t3_1jsgf6p
|
/r/LocalLLaMA/comments/1jsgf6p/ive_officially_released_v10_for_easywhisper_ui/
| false | false |
self
| 42 |
{'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=108&crop=smart&auto=webp&s=9f1a3c72bb85d28ca748578929e813c616ca047f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=216&crop=smart&auto=webp&s=d210c9e07ab2c76fd5db5866582e8d00dc69c210', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=320&crop=smart&auto=webp&s=5975f428f5ed1a6878c876d7a851448ccc82dec1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=640&crop=smart&auto=webp&s=ae5685e95d73e7f40e3ed12ad1d509c1c9bf2ff1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=960&crop=smart&auto=webp&s=30d3a941411a1d510ae4b967b3a13bf5bac8d020', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?width=1080&crop=smart&auto=webp&s=bb5888f4152853cf96cf29bc16492fa2f95a660b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/MnlnNO1j_5bv5TAC7lTjiCkmmkBFYOpH0cfI5V_na2M.jpg?auto=webp&s=35f02b760b3d2d35fd8ab6c0ac7ca9e7239c34f1', 'width': 1280}, 'variants': {}}]}
|
Is there anywhere that lists LM Studio's shortcut keys?
| 2 |
I've looked in the docs, searched, and asked ChatGPT... I've found like 3 shortcut keys mentioned, and the developer mode docs say "Full access to all aspects in LM Studio. This includes keyboard shortcuts and development features. Check out the Developer section under Settings for more." but I can't find the actual descriptions or keys.
| 2025-04-05T23:17:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsggmb/is_there_anywhere_that_lists_lm_studios_shortcut/
|
alfihar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsggmb
| false | null |
t3_1jsggmb
|
/r/LocalLLaMA/comments/1jsggmb/is_there_anywhere_that_lists_lm_studios_shortcut/
| false | false |
self
| 2 | null |
It looks like Meta's new model's key innovation of "interleaved no-RoPE attention" for infinite context is actually the same thing as Cohere's Command-A model introduced a few days ago.
| 105 | 2025-04-05T23:24:45 |
Recoil42
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsgliv
| false | null |
t3_1jsgliv
|
/r/LocalLLaMA/comments/1jsgliv/it_looks_like_metas_new_models_key_innovation_of/
| false | false | 105 |
{'enabled': True, 'images': [{'id': 'ABL_gP3nx6ARfFZS61A_EqnEOke75D5OEi2qhgqEDV8', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=108&crop=smart&auto=webp&s=2830aa5373630a7a1e43919cec98cf113768b973', 'width': 108}, {'height': 265, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=216&crop=smart&auto=webp&s=c578aaa77bd9754e67d20ecfa119c533d1f8d9af', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=320&crop=smart&auto=webp&s=0c158b8d41c882fb644d49b1be8103076df200e9', 'width': 320}, {'height': 785, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=640&crop=smart&auto=webp&s=9183b8c88d6a952ada033ccc2507a72f82046e45', 'width': 640}, {'height': 1178, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=960&crop=smart&auto=webp&s=0c03230cb4e6889e3b1e11b07c0d96972cbe3dbe', 'width': 960}, {'height': 1325, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?width=1080&crop=smart&auto=webp&s=f12792cd28697b2e9167f084609d4378cce95831', 'width': 1080}], 'source': {'height': 1480, 'url': 'https://preview.redd.it/7dyflct7o3te1.png?auto=webp&s=6912f739306950d0b62c50bccb32f7f6352ba64d', 'width': 1206}, 'variants': {}}]}
|
|||
SpaceThinker - Training Test Time Compute for Spatial Reasoning
| 5 |
Sharing the SpaceThinker dataset: [https://huggingface.co/datasets/remyxai/SpaceThinker](https://huggingface.co/datasets/remyxai/SpaceThinker)
The SpaceThinker dataset was synthesized from a subset of the Cauldron using VQASynth: [https://github.com/remyxai/VQASynth](https://github.com/remyxai/VQASynth)
VQASynth generates CoT spatial reasoning traces using a 3D scene reconstruction pipeline including Molmo, VGGT, and SAM2
[VQASynth 3D Scene Reconstruction Pipeline](https://preview.redd.it/329r8g4vq3te1.png?width=957&format=png&auto=webp&s=61c0f63639b4e75d7d439360794a287c52a58a03)
The dataset is formatted for training an open-weight LLaVA-style thinking multimodal model using the reasoning base llm: [https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1)
Stay tuned for the release of the SpaceThinker VLM!
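To poke at the data directly, a minimal sketch with the `datasets` library (split and field names are assumed; check the dataset card):

```python
# Minimal sketch: load the SpaceThinker CoT spatial-reasoning traces.
from datasets import load_dataset

ds = load_dataset("remyxai/SpaceThinker")
print(ds)              # available splits and row counts
print(ds["train"][0])  # one trace, assuming a "train" split
```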
| 2025-04-05T23:42:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsgy0s/spacethinker_training_test_time_compute_for/
|
remyxai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsgy0s
| false | null |
t3_1jsgy0s
|
/r/LocalLLaMA/comments/1jsgy0s/spacethinker_training_test_time_compute_for/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'WvkczOe8jBhmgG1IHm94yRhF2spBquq_OTnWavC6oq4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=108&crop=smart&auto=webp&s=141b1955f8a550a4068c9ca64790923e22ed03a2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=216&crop=smart&auto=webp&s=43d4a58fa3dd2ca6983121cb6375f1d4b9708b27', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=320&crop=smart&auto=webp&s=d52f7a751b62b492e8661c6400bff8f3c568e147', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=640&crop=smart&auto=webp&s=fa43988d942f101e96ce6afd517315638b2c6a2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=960&crop=smart&auto=webp&s=0aa2db8f290e51f102d5cbddea6de0444176ddc7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?width=1080&crop=smart&auto=webp&s=dbea5b2293330e300c7458d6a54e73015382c0ed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U7pCVQrYctWW9l8N4gn_e44q24ugoX6Fo2w3IWsKoD8.jpg?auto=webp&s=49a58bd82b08a1481dcc530493ccb86ebbfb0d11', 'width': 1200}, 'variants': {}}]}
|
|
So no Ghibli by Llama 4?
| 0 |
Please prove me wrong. Is there multimodal output? At all?
| 2025-04-06T00:02:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1jshc6s/so_no_ghibly_by_llama4/
|
dp3471
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jshc6s
| false | null |
t3_1jshc6s
|
/r/LocalLLaMA/comments/1jshc6s/so_no_ghibly_by_llama4/
| false | false |
self
| 0 | null |
Running Llama 4 on Macs
| 5 |
This Exo Labs guy gives a nice, proper estimate of what performance can be expected when running the new Llama models on Apple hardware. The TL;DR: with an optimal setup you could get 47 t/s on Maverick with 2 512GB M3 Studios, or 27 t/s with 10 of them if you want Behemoth to move in with you at fp16.
| 2025-04-06T00:04:52 |
https://x.com/alexocheema/status/1908651942777397737?s=46&t=u1JbxnNUT9kfRgfRWH5L_Q
|
Roidberg69
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jshdr3
| false | null |
t3_1jshdr3
|
/r/LocalLLaMA/comments/1jshdr3/running_llama_4_on_macs/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'XSHQlCaIZY4P8VgpivGLoM91BHTnbMjJkQUOK60RuEs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=108&crop=smart&auto=webp&s=a37fd2381a812af01c3118f6f313aac56c04f180', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=216&crop=smart&auto=webp&s=5366b1893485e56eeebd8ef07a904173defb08c6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=320&crop=smart&auto=webp&s=5d56de2f020ba9b054fae475d17a3ba3281c2dea', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=640&crop=smart&auto=webp&s=5233b4f5cb8bfebc0ea91bd218d87603fe3821c6', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=960&crop=smart&auto=webp&s=a47fedb66e45feb3ad42f38d7dc47337f63eab89', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?width=1080&crop=smart&auto=webp&s=229e44a576d489ccf898f56ab129f0fbbce9fc3f', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/yuV1Fpr6Fn2bzP9jehfnBxxUjDre9hlcHweF6HHR5RI.jpg?auto=webp&s=30b637452ad7a0599f1b64ad200270809f22d6dc', 'width': 2048}, 'variants': {}}]}
|
|
First results are in. Llama 4 Maverick 17B active / 400B total is blazing fast with MLX on an M3 Ultra — 4-bit model generating 1100 tokens at 50 tok/sec:
| 340 | 2025-04-06T00:32:51 |
Recoil42
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jshwxe
| false | null |
t3_1jshwxe
|
/r/LocalLLaMA/comments/1jshwxe/first_results_are_in_llama_4_maverick_17b_active/
| false | false | 340 |
{'enabled': True, 'images': [{'id': 'Hv6SyeG3ESZjXYFNxEdD3npXizSDRTCXK1l3QmA33yM', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=108&crop=smart&auto=webp&s=bd641215d7f7f23a46beb0bffbc7e5adbea9dd18', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=216&crop=smart&auto=webp&s=265a5472c5e83f50de167aa166a6f825af5ff97a', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=320&crop=smart&auto=webp&s=fa2513f759c6c3080c598f641e45d7b1ca1c9974', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=640&crop=smart&auto=webp&s=3abfffb312e36148337fcbbdd96100c2f53bd88c', 'width': 640}, {'height': 563, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=960&crop=smart&auto=webp&s=20d87471d13d7ed67fbc96d0ef0444edd77d086c', 'width': 960}, {'height': 633, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?width=1080&crop=smart&auto=webp&s=e39c100062e9e800829eeb1f5a1035c2be42f498', 'width': 1080}], 'source': {'height': 704, 'url': 'https://preview.redd.it/1zt2gzrq04te1.png?auto=webp&s=eb0744cd3a7202f5d813937675c2134a94928045', 'width': 1200}, 'variants': {}}]}
|
|||
Interesting.
| 0 | 2025-04-06T00:44:08 |
RetiredApostle
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsi4ii
| false | null |
t3_1jsi4ii
|
/r/LocalLLaMA/comments/1jsi4ii/interesting/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'FfAYDDWHS892ReTYRtLtMW7D4hv14GbNn3705xuDSlA', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/kl17m1sn24te1.png?width=108&crop=smart&auto=webp&s=d8c256361026abd90f240df5ef5f6ba97fbe23d3', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/kl17m1sn24te1.png?width=216&crop=smart&auto=webp&s=3ad2e426d29618e695a39e5e9d3fff6e87d5d707', 'width': 216}, {'height': 421, 'url': 'https://preview.redd.it/kl17m1sn24te1.png?width=320&crop=smart&auto=webp&s=c63f58e3183a6fa888948534b0119efa608e4cc0', 'width': 320}], 'source': {'height': 750, 'url': 'https://preview.redd.it/kl17m1sn24te1.png?auto=webp&s=4abb7af0ffc7ca4ef13bf1e2c4d91a39c0ebd8c1', 'width': 570}, 'variants': {}}]}
|
|||
Need advice for hardware on LLM inferencing and finetuning
| 2 |
I plan to do a couple of projects in the summer, such as an omni model chatbot, fine tuning, or maybe just a simple RAG that can help retrieve coding libraries and their documentation, and also possibly fine tune a local model on private healthcare data for an upcoming internship. My questions: is this overkill, or is it OK to get a really strong workstation for the long term (my guess is this would survive well for about 6-7 years)? Should I downgrade the CPU and RAM? Also, should I get the 600W version of the RTX Pro 6000 or stick with the 300W version? I also heard InfiniBand is important for some reason but can't fully remember why. This is currently a general idea of what I aim to purchase on Bizon tech. Current cost is 26k.
| 2025-04-06T00:52:31 |
LumiPvp
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsia47
| false | null |
t3_1jsia47
|
/r/LocalLLaMA/comments/1jsia47/need_advice_for_hardware_on_llm_inferencing_and/
| false | false | 2 |
{'enabled': True, 'images': [{'id': 'fgkP2c3KXqoiFoPsW6L-bp6KCDrIZbvaIabGdzhDBSY', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=108&crop=smart&auto=webp&s=8084ed308b5c05d767f8b4aa1cc568356facadff', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=216&crop=smart&auto=webp&s=eff945fc5f28295aef09f7df6b528a7cb3d11ff5', 'width': 216}, {'height': 631, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=320&crop=smart&auto=webp&s=2960ce86ff33bcb9b84cb9434bc3d05052046c43', 'width': 320}, {'height': 1262, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=640&crop=smart&auto=webp&s=d71c51b5909788973c998d8f19b12f48906ed630', 'width': 640}, {'height': 1894, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=960&crop=smart&auto=webp&s=cbe41ed6252f893af6e44ce19b83c184560aec8b', 'width': 960}, {'height': 2131, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?width=1080&crop=smart&auto=webp&s=9bf0662057d7ad650b53190c05fdae0e71a101c1', 'width': 1080}], 'source': {'height': 2131, 'url': 'https://preview.redd.it/ew7t2i3a44te1.png?auto=webp&s=c0df1518976a96cd923c910350c4f59a4082bdd1', 'width': 1080}, 'variants': {}}]}
|
||
RepoText: VSCode extension to export your codebase or specific files as LLM-friendly text
| 1 |
[removed]
| 2025-04-06T01:02:48 |
https://v.redd.it/l5enm0u064te1
|
AlternativeDish5596
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsih5a
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/l5enm0u064te1/DASHPlaylist.mpd?a=1746493385%2COWNmNGFhZDc4NGFmNzIzMWE0NTAxNDRkY2ZjZjJiOGRkYTRlZWRiZGE5YTdkN2JlMTkzMjFlNTU0MWVkYWNiNQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/l5enm0u064te1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/l5enm0u064te1/HLSPlaylist.m3u8?a=1746493385%2CNjcwNWZkMTExYWM2ZWQyYWYwZGY0YTFmMzMzOGQxYzZhZTJhNjk2MDI0YmIzZWU0OTRhYzJiMzA0ZmZiMzFmMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l5enm0u064te1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1810}}
|
t3_1jsih5a
|
/r/LocalLLaMA/comments/1jsih5a/repotext_vscode_extension_to_export_your_codebase/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=108&crop=smart&format=pjpg&auto=webp&s=b45e3f1a317edbfd7da5f4a80c411f2b0fda1e12', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=216&crop=smart&format=pjpg&auto=webp&s=281a72d67cbb2fa68ab9a28beae5a4618fb76898', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=320&crop=smart&format=pjpg&auto=webp&s=02d7e598d92c50f716750e41203edc538b78a79a', 'width': 320}, {'height': 381, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=640&crop=smart&format=pjpg&auto=webp&s=3f05cfef373ce0ac1c8b89a668f246574da9a60f', 'width': 640}, {'height': 572, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=960&crop=smart&format=pjpg&auto=webp&s=bf410525cc023e08d171398d50774da8f82ff142', 'width': 960}, {'height': 644, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c25cd7434c7cd50f92772229eb8468bd8d2f798f', 'width': 1080}], 'source': {'height': 2090, 'url': 'https://external-preview.redd.it/b2t2ZzJ6dDA2NHRlMSJSWczTNilJJXAv8QV7KoWHcCBJtk_zIXIQBSP4tKEI.png?format=pjpg&auto=webp&s=a18dbdbcf31c359bf5e24f195981b815c7573010', 'width': 3502}, 'variants': {}}]}
|
|
What we need now to keep track of AI updates
| 0 | 2025-04-06T01:07:32 |
TheLogiqueViper
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsik6q
| false | null |
t3_1jsik6q
|
/r/LocalLLaMA/comments/1jsik6q/what_we_need_now_to_keep_track_of_ai_updates/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'xEyQmx_bB6GG7Fn4Pwd8HFMi60A3mf_v7SE4JUlDPpg', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/fjrwxfey64te1.jpeg?width=108&crop=smart&auto=webp&s=704616beac4f233616855bb85a950fe449c46e66', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/fjrwxfey64te1.jpeg?width=216&crop=smart&auto=webp&s=86b2fa578c36e4a7200b1bf0e5db842c4d57ec87', 'width': 216}, {'height': 427, 'url': 'https://preview.redd.it/fjrwxfey64te1.jpeg?width=320&crop=smart&auto=webp&s=f24643875b61e8b8f4652a41fd886cbb315f3da5', 'width': 320}], 'source': {'height': 478, 'url': 'https://preview.redd.it/fjrwxfey64te1.jpeg?auto=webp&s=41fe2d6d8643aeea86b08a6118015c4bbe7b4bc8', 'width': 358}, 'variants': {}}]}
|
|||
Best agentic app (cli or clientside webapp) for Gemini 2.5? Rivaling Claude Code?
| 2 |
Right now I'm using Claude Code. Quite good, but very expensive. Looking for something with the same agentic capabilities as Claude Code, that can run system commands, browse the web, etc. (using MCPs or natively), using Gemini 2.5 Pro on OpenRouter. Any suggestions?
| 2025-04-06T01:08:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsikl0/best_agentic_app_cli_or_clientside_webapp_for/
|
Warm_Iron_273
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsikl0
| false | null |
t3_1jsikl0
|
/r/LocalLLaMA/comments/1jsikl0/best_agentic_app_cli_or_clientside_webapp_for/
| false | false |
self
| 2 | null |
Serverless Llama 4 API to test in the browser!
| 1 | 2025-04-06T01:11:29 |
https://developer.puter.com/tutorials/free-unlimited-llama-api/
|
mitousa
|
developer.puter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsimsc
| false | null |
t3_1jsimsc
|
/r/LocalLLaMA/comments/1jsimsc/serverless_llama_4_api_to_test_in_the_browser/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '9S6B0QEoxX3cKtlgR2f0PGahbw-_Fsnev241egQRlo4', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=108&crop=smart&auto=webp&s=1c814be0dec856e055ef7177b9c2e378161a05d8', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=216&crop=smart&auto=webp&s=3b6a2ebf351ba1f43d10cd1307e208f15932f71d', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=320&crop=smart&auto=webp&s=b4e134a8b041e590a0159ba6d803630b826a744f', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=640&crop=smart&auto=webp&s=e0e2865026bddca322a0437cc09910ebcc775445', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=960&crop=smart&auto=webp&s=aff65b971977dbf140d62b4b5e69149ffc85dd7b', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?width=1080&crop=smart&auto=webp&s=7f08e49da71cc4ab5b900fa6550637943d74ca75', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/LzQOvGF-vhpjnoSXPD1wLFrm6PRXM7ffvegyars-L0Q.jpg?auto=webp&s=b2daddd9d86565da9c7fe9e96485759e0c9dd1b9', 'width': 1280}, 'variants': {}}]}
|
||
Llama 4 Maverick Testing - 400B
| 82 |
I have no idea what they did to this model in post-training, but it's not good. The output for writing is genuinely bad (seriously, enough with the emojis), and it misquotes everything. It feels like a step back compared to other recent releases.
| 2025-04-06T01:22:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsitob/llama_4_maverick_testing_400b/
|
YakFull8300
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsitob
| false | null |
t3_1jsitob
|
/r/LocalLLaMA/comments/1jsitob/llama_4_maverick_testing_400b/
| false | false |
self
| 82 | null |
New longform creative writing bench (and Llama-4 results)
| 3 |
**Longform Writing Bench**
This is a new benchmark I've been working on for longform creative writing. With Llama 4 released, it seems like a good moment to share it.
It's a pretty straightforward benchmark:
1. The model is given a minimal prompt and tasked with brainstorming & planning out a short story/novella
2. Reflect on the plan & revise
3. Write a short story/novella over 8x 1000-word turns
The piece is then assessed against a scoring rubric by sonnet-3.7, which scores each chapter individually and then the entire piece.
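If it helps to see the loop concretely, here's a minimal sketch of the pipeline (this is *not* the actual eqbench harness; the endpoint, model ids, prompts, and rubric below are all placeholder assumptions):

```python
# Minimal sketch of the benchmark loop, assuming any OpenAI-compatible
# endpoint. Model ids, prompts, and the rubric are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
WRITER = "writer-model"  # hypothetical id of the model under test
JUDGE = "judge-model"    # hypothetical id of the sonnet-3.7 judge

def chat(model, messages):
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

# 1. Brainstorm & plan from a minimal prompt
history = [{"role": "user", "content": "Brainstorm and plan a novella."}]
history.append({"role": "assistant", "content": chat(WRITER, history)})

# 2. Reflect on the plan & revise
history.append({"role": "user", "content": "Critique your plan, then revise it."})
history.append({"role": "assistant", "content": chat(WRITER, history)})

# 3. Write the piece over 8x ~1000-word turns
chapters = []
for i in range(8):
    history.append({"role": "user", "content": f"Write chapter {i + 1} (~1000 words)."})
    chapter = chat(WRITER, history)
    history.append({"role": "assistant", "content": chapter})
    chapters.append(chapter)

# 4. Judge each chapter individually, then the piece as a whole
RUBRIC = "Score 0-10 for coherence, prose quality, and repetition."  # placeholder
per_chapter = [chat(JUDGE, [{"role": "user", "content": f"{RUBRIC}\n\n{c}"}])
               for c in chapters]
overall = chat(JUDGE, [{"role": "user",
                        "content": RUBRIC + "\n\n" + "\n\n".join(chapters)}])
```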
**Llama-4 Results**
The Llama-4 results are unfortunately *not great*. There are some pretty bad repetition issues that become more pronounced in later chapters. Not sure if this is the model or immature code. But repetition aside, the writing is very formulaic. Here are the samples:
[https://eqbench.com/results/creative-writing-longform/meta-llama\_\_Llama-4-Maverick-17B-128E-Instruct\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/meta-llama__Llama-4-Maverick-17B-128E-Instruct_longform_report.html)
[https://eqbench.com/results/creative-writing-longform/meta-llama\_\_Llama-4-Scout-17B-16E-Instruct\_longform\_report.html](https://eqbench.com/results/creative-writing-longform/meta-llama__Llama-4-Scout-17B-16E-Instruct_longform_report.html)
Also updated the (short form) creative writing leaderboard: [https://eqbench.com/creative\_writing.html](https://eqbench.com/creative_writing.html)
| 2025-04-06T01:26:36 |
https://eqbench.com/creative_writing_longform.html
|
_sqrkl
|
eqbench.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsiwgw
| false | null |
t3_1jsiwgw
|
/r/LocalLLaMA/comments/1jsiwgw/new_longform_creative_writing_bench_and_llama4/
| false | false |
default
| 3 | null |
For CPU-only inference, CPU performance is quite often the bottleneck, not memory bandwidth
| 1 |
[removed]
| 2025-04-06T01:31:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsizs0/for_cpuonly_inference_cpu_performance_quite_often/
|
BoysenberryDear6997
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsizs0
| false | null |
t3_1jsizs0
|
/r/LocalLLaMA/comments/1jsizs0/for_cpuonly_inference_cpu_performance_quite_often/
| false | false |
self
| 1 | null |
Stockholm syndrome
| 3 |
Can your local LLM answer this?
Question: *Name the gulf between two countries, one of which has its capital associated with a famous psychological syndrome.*
Answer: >! Gulf of Bothnia !<
I tried Gemma-3-12B-it-Q4, DeepSeek R1 Distill Llama 8B, and exaone-7.8-Q8. None of them could answer it.
DeepSeek's online chat needed a hint that the gulf is in Europe. meta.ai answered it correctly once; the next time (after closing and reopening the browser), it gave a wrong answer.
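If you want to throw the question at your own setup, here's a quick sketch assuming an Ollama (or any other OpenAI-compatible) server on its default port; the model tag is just an example, substitute whatever you run locally:

```python
# Quick test against a local model via an OpenAI-compatible endpoint.
# The base_url/api_key match Ollama's defaults; the model tag is an example.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
question = ("Name the gulf between two countries, one of which has its "
            "capital associated with a famous psychological syndrome.")
resp = client.chat.completions.create(
    model="gemma3:12b",  # substitute your local model's tag
    messages=[{"role": "user", "content": question}],
)
print(resp.choices[0].message.content)
```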
| 2025-04-06T01:32:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1jsj09q/stockholm_syndrome/
|
giant3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsj09q
| false | null |
t3_1jsj09q
|
/r/LocalLLaMA/comments/1jsj09q/stockholm_syndrome/
| false | false |
self
| 3 | null |
Simon Willison: Initial impressions of Llama 4
| 4 | 2025-04-06T01:37:06 |
https://simonwillison.net/2025/Apr/5/llama-4-notes/
|
Creepy-Vast-2529
|
simonwillison.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1jsj3ap
| false | null |
t3_1jsj3ap
|
/r/LocalLLaMA/comments/1jsj3ap/simon_willison_initial_impressions_of_llama_4/
| false | false |
default
| 4 | null |