Dataset of r/LocalLLaMA posts (created timestamps range 2023-04-01 04:30:41 to 2025-06-30 03:16:29). Columns: title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.

**Qwen time** | 262 points | u/ahstanin | 2025-04-28T08:45:16 | i.redd.it
It's coming
/r/LocalLLaMA/comments/1k9qsu3/qwen_time/

**Best LLM for german doctor invoices** | 0 points | u/sockerockt | 2025-04-28T08:48:57 | self.LocalLLaMA
Is there a pretrained model for German doctor invoices? Or does anyone know a dataset for training? The aim is to read in a PDF and generate JSON in a defined structure. Thanks!
/r/LocalLLaMA/comments/1k9qulo/best_llm_for_german_doctor_invoices/

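The pipeline the post asks about, PDF text in, fixed-structure JSON out, is usually handled by prompting a model with an explicit schema and validating the reply. A minimal, model-agnostic sketch; the field names, helper names, and the sample reply are invented for illustration:

```python
import json

# Hypothetical target schema; real invoices would need more fields.
INVOICE_SCHEMA = {"patient": str, "date": str, "total_eur": float}

def build_prompt(pdf_text: str) -> str:
    """Ask any local LLM for JSON matching the fixed structure."""
    fields = ", ".join(f'"{k}"' for k in INVOICE_SCHEMA)
    return (f"Extract the fields {fields} from this German doctor invoice "
            f"and reply with JSON only:\n{pdf_text}")

def parse_reply(reply: str) -> dict:
    """Validate that the model's reply matches the expected structure."""
    data = json.loads(reply)
    for key, typ in INVOICE_SCHEMA.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"missing or mistyped field: {key}")
    return data

# A hand-written reply standing in for real model output:
sample = '{"patient": "Max Mustermann", "date": "2025-04-01", "total_eur": 123.45}'
invoice = parse_reply(sample)
```

Validating against a schema catches the most common failure mode (the model returning prose or malformed JSON) before the data reaches downstream systems.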
**Qwen3 Published 30 seconds ago (Model Weights Available)** | 1,334 points | u/random-tomato | 2025-04-28T08:54:39 | i.redd.it
https://modelscope.cn/organization/Qwen
/r/LocalLLaMA/comments/1k9qxbl/qwen3_published_30_seconds_ago_model_weights/

**UIGEN-T2 7B UI Reasoning Model with Forms, Charts, Checkout, and Animation support** | 32 points | u/United-Rush4073 | 2025-04-28T09:11:25 | v.redd.it
We're releasing our latest and greatest version of UIGEN-T2, the culmination of everything we've learned since we started, pulling together our reasoning and UI generation. We have a new reasoning format that thinks through UI principles. The reasoning was generated by a separate model and then transferred; more details are on the model card. We've also released our LoRAs at each checkpoint, so you don't have to download the entire model and can decide for yourself which version you like.
You can download the model here: [GGUF](https://huggingface.co/Tesslate/UIGEN-T2-7B-Q8_0-GGUF) and [16-bit](https://huggingface.co/Tesslate/UIGEN-T2-7B)
In the near future we plan to use this model as a base for reinforcement learning, but we are looking for resources to do that.
If you want to demo without downloading anything:
[Playground (to test different samples)](https://huggingface.co/spaces/smirki/UIGEN-T2-Playground)
[Visual Artifacts Demo](https://huggingface.co/spaces/smirki/UIGEN-T2-7B-Artifacts-Demo)
And since we didn't find any good *(simple)* Artifacts demos, we released one as open source on GitHub.
Video: https://v.redd.it/56serzd3jjxe1
/r/LocalLLaMA/comments/1k9r5ij/uigent2_7b_ui_reasoning_model_with_forms_charts/

**The best RP with reasoning model yet. | RpR-v3** | 72 points | u/Arli_AI | 2025-04-28T09:19:07 | huggingface.co
Gotta get this in before the new Qwen3 drops and that gets all the spotlight! (Will train on Qwen3 as well.)
https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v3
/r/LocalLLaMA/comments/1k9r94h/the_best_rp_with_reasoning_model_yet_rprv3/

**Best Approach for Tax Form Data Extraction — Azure vs Fine-Tuned Models?** | 1 point | u/Foreign-Fishing-3182 | 2025-04-28T09:20:58 | self.LocalLLaMA
[removed]
/r/LocalLLaMA/comments/1k9ra3n/best_approach_for_tax_form_data_extraction_azure/

**Exllamav3 appears in TabbyAPI (WIP; not mine)** | 19 points | u/randomanoni | 2025-04-28T09:32:23 | github.com
https://github.com/theroyallab/tabbyAPI/commit/c96ec02da17aa8ab665969f8801923509ec4eeb4
/r/LocalLLaMA/comments/1k9rfo6/exllamav3_appears_in_tabbyapi_wip_not_mine/

**Qwen3 ReadMe.md** | 239 points | u/sunshinecheung | 2025-04-28T09:45:58 | self.LocalLLaMA

# Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:
* **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
* **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
* **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
* **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
* **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
# Model Overview
**Qwen3-0.6B** has the following features:
* Type: Causal Language Models
* Training Stage: Pretraining & Post-training
* Number of Parameters: 0.6B
* Number of Parameters (Non-Embedding): 0.44B
* Number of Layers: 28
* Number of Attention Heads (GQA): 16 for Q and 8 for KV
* Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
# Switching Between Thinking and Non-Thinking Mode
**Tip**
The `enable_thinking` switch is also available in APIs created by vLLM and SGLang. Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
# enable_thinking=True
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
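Downstream code often needs the final response without the reasoning. A minimal sketch for separating the two parts; the helper name and regex are our own, not part of the Qwen API:

```python
import re

def split_thinking(output: str) -> tuple[str, str]:
    """Separate the <think>...</think> block from the final response."""
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", output, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    # No think block at all (e.g. non-thinking mode): return the text as-is.
    return "", output.strip()

thinking, answer = split_thinking("<think>2 + 2 = 4</think>The answer is 4.")
```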
**Note**
For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](https://gist.github.com/ibnbd/5ec32ce14bde8484ca466b7d77e18764#best-practices) section.
# enable_thinking=False
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
**Note**
For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](https://gist.github.com/ibnbd/5ec32ce14bde8484ca466b7d77e18764#best-practices) section.
# Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
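The "most recent instruction wins" rule can be illustrated with a toy helper; this is our own sketch, not the actual chat-template logic:

```python
# Toy illustration of the soft switch; the real behavior lives in the chat template.
def thinking_enabled(messages: list, default: bool = True) -> bool:
    """The most recent /think or /no_think directive in user/system turns wins."""
    mode = default
    for msg in messages:
        if msg["role"] in ("user", "system"):
            if "/no_think" in msg["content"]:
                mode = False
            elif "/think" in msg["content"]:
                mode = True
    return mode

history = [
    {"role": "user", "content": "Solve this step by step /think"},
    {"role": "assistant", "content": "..."},
    {"role": "user", "content": "Now just give the answer /no_think"},
]
```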
# Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of Qwen3's agentic abilities. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use an MCP configuration file, use Qwen-Agent's integrated tools, or integrate other tools yourself.
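For illustration, an MCP server configuration could look like the following; the server name and command are placeholders, not taken from the Qwen documentation:

```python
import json

# Hypothetical MCP configuration; the exact schema is defined by MCP/Qwen-Agent,
# and the "time" server command below is a placeholder.
mcp_config = {
    "mcpServers": {
        "time": {
            "command": "uvx",
            "args": ["mcp-server-time", "--local-timezone=UTC"],
        }
    }
}

config_json = json.dumps(mcp_config, indent=2)
```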
# Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
* For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
* For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
* For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
* **Math Problems**: Include "Please reason step by step, and put your final answer within \\boxed{}." in the prompt.
* **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
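The sampling recommendations above can be captured in a small lookup, a sketch with names of our choosing; the key names follow common inference-API conventions and may differ per framework:

```python
# Presets transcribed from the Best Practices section.
SAMPLING = {
    True:  {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0.0},  # thinking
    False: {"temperature": 0.7, "top_p": 0.8,  "top_k": 20, "min_p": 0.0},  # non-thinking
}

def sampling_for(enable_thinking: bool) -> dict:
    """Return a copy of the preset for the chosen mode; never greedy decoding."""
    params = dict(SAMPLING[enable_thinking])
    assert params["temperature"] > 0, "greedy decoding is discouraged"
    return params
```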
# Citation
If you find our work helpful, feel free to cite us:
```bibtex
@misc{qwen3,
    title  = {Qwen3},
    url    = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month  = {April},
    year   = {2025}
}
```
From: [https://gist.github.com/ibnbd/5ec32ce14bde8484ca466b7d77e18764#switching-between-thinking-and-non-thinking-mode](https://gist.github.com/ibnbd/5ec32ce14bde8484ca466b7d77e18764#switching-between-thinking-and-non-thinking-mode)
/r/LocalLLaMA/comments/1k9rm65/qwen3_readmemd/

**Local A.I. turning mockup image of UI into code?** | 1 point | u/wakeoflove | 2025-04-28T09:51:49 | self.LocalLLaMA
[removed]
/r/LocalLLaMA/comments/1k9rp0f/local_ai_turning_mockup_image_of_ui_into_code/

**Any good apps on mac for deep research and web search for local llms** | 2 points | u/power97992 | 2025-04-28T10:28:13 | self.LocalLLaMA
I tried AnythingLLM, but the web-search function didn't work with a lot of models, except for Llama 3 and some others. Are there any other apps that work with web search?
/r/LocalLLaMA/comments/1k9s8h2/any_good_apps_on_mac_for_deep_research_and_web/

**Llama 3.3 70b on 2xRTX 6000 ADA + VLLM** | 0 points | u/kontostamas | 2025-04-28T10:33:45 | self.LocalLLaMA
Hey guys, I need to speed up this config: 128k context window, AWQ version; it looks a bit slow. Maybe change to a 6-bit GGUF? I currently get about 20-30 t/s; is there any chance to speed this up a bit?
/r/LocalLLaMA/comments/1k9sbll/llama_33_70b_on_2xrtx_6000_ada_vllm/

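For reference, a typical vLLM launch for the setup the post describes might look like the following; the flag values are illustrative, so check the vLLM engine-arguments docs for your version:

```shell
# Illustrative vLLM launch for an AWQ 70B model on two GPUs; values are examples.
vllm serve meta-llama/Llama-3.3-70B-Instruct \
  --quantization awq \
  --tensor-parallel-size 2 \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.95
```

Tensor parallelism across both cards and a high memory-utilization target are usually the first knobs to check before switching quantization formats.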
**Qwen 3 will apparently have a 235B parameter model** | 362 points | u/queendumbria | 2025-04-28T10:35:46 | i.redd.it
/r/LocalLLaMA/comments/1k9scp3/qwen_3_will_apparently_have_a_235b_parameter_model/

**It might seem crazy what I'm about to say...** | 1 point | u/AssignmentPowerful83 | 2025-04-28T10:49:50 | i.redd.it
[removed]
/r/LocalLLaMA/comments/1k9skfe/it_might_seem_crazy_what_im_about_to_say/

**Qwen3 released tonight?** | 127 points | u/sunshinecheung | 2025-04-28T10:53:45 | self.LocalLLaMA
/r/LocalLLaMA/comments/1k9smmz/qwen3_released_tonight/

**Qwen 3 W.I.P.** | 180 points | u/jacek2023 | 2025-04-28T11:08:04 | i.redd.it
/r/LocalLLaMA/comments/1k9sve2/qwen_3_wip/

**The Llama.cpp library has been ported to the ShelfMC platform** | 1 point | u/ulianownw | 2025-04-28T11:08:49 | self.LocalLLaMA
[removed]
/r/LocalLLaMA/comments/1k9svtd/the_llamacpp_library_has_been_ported_to_the/

**So close.** | 139 points | u/Porespellar | 2025-04-28T11:54:44 | i.redd.it
/r/LocalLLaMA/comments/1k9tobs/so_close/

**Looks like there even 235 billion parameter qwen 3 model** | 1 point | u/Independent-Wind4462 | 2025-04-28T12:09:25 | i.redd.it
/r/LocalLLaMA/comments/1k9tydd/looks_like_there_even_235_billion_parameter_qwen/

How do you think Qwen 3 will perform on benchmark tests?
| 1 |
[removed]
| 2025-04-28T12:09:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9tykm/how_do_you_think_qwen_3_will_perform_on_benchmark/
|
CreepyMan121
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9tykm
| false | null |
t3_1k9tykm
|
/r/LocalLLaMA/comments/1k9tykm/how_do_you_think_qwen_3_will_perform_on_benchmark/
| false | false |
self
| 1 | null |
Qwen 3 is now on huggingface
| 84 |
# Qwen3-0.6B-FP8
#
[https://huggingface.co/Qwen/Qwen3-0.6B-FP8](https://huggingface.co/Qwen/Qwen3-0.6B-FP8)
| 2025-04-28T12:31:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9udty/qwen_3_is_now_on_huggingface/
|
touhidul002
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9udty
| false | null |
t3_1k9udty
|
/r/LocalLLaMA/comments/1k9udty/qwen_3_is_now_on_huggingface/
| false | false |
self
| 84 |
{'enabled': False, 'images': [{'id': 'EZNSeDCWB3hFW8wrfvGpD2_rxPT21CLaSF68ZqCEugQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=108&crop=smart&auto=webp&s=261df30622ba25c67cd4ff4aa368e18ced6bce6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=216&crop=smart&auto=webp&s=23d3474788c3df63bdd98c6884414eef43bc99d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=320&crop=smart&auto=webp&s=2f101ac1e58997ffb86462f87043cbd3c46e098f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=640&crop=smart&auto=webp&s=73a8ad5355e21a8aa7dedb528b43235317f42bc5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=960&crop=smart&auto=webp&s=31e23b5d06303f11ca018299194f78d900ae0c42', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?width=1080&crop=smart&auto=webp&s=ca770fac4d8ae5c37602c674e3c2d8d5f6ff6149', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-pNhz1TsiDrEnLKJvHY4jBDTdKx7I0v-A63hNckDqkk.jpg?auto=webp&s=252e3f809558dea6c11af12cd0963615d657baf6', 'width': 1200}, 'variants': {}}]}
|
It's happening!
| 522 |
[https://huggingface.co/organizations/Qwen/activity/all](https://huggingface.co/organizations/Qwen/activity/all)
| 2025-04-28T12:34:18 |
DuckyBlender
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9ufo8
| false | null |
t3_1k9ufo8
|
/r/LocalLLaMA/comments/1k9ufo8/its_happening/
| false | false | 522 |
{'enabled': True, 'images': [{'id': '9BvyELzJe80sJHRhT6-CY1jT3EchGGSz0ULhoa4Y6tM', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/mwisik2ilkxe1.png?width=108&crop=smart&auto=webp&s=ab1ebbad73a6c3db1fb9d3f01248fea53c150e0a', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/mwisik2ilkxe1.png?width=216&crop=smart&auto=webp&s=a6194c91b9401a2eae64594280b44cbe19c4e691', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/mwisik2ilkxe1.png?width=320&crop=smart&auto=webp&s=96b6980f6a7b054ad8f1c1812253f2a6b82d5456', 'width': 320}], 'source': {'height': 485, 'url': 'https://preview.redd.it/mwisik2ilkxe1.png?auto=webp&s=d2a264f5745df684d56da0eab944f51d787a65f1', 'width': 415}, 'variants': {}}]}
|
||
Which model do you guys use on openrouter directly or through API
| 2 |
.
| 2025-04-28T13:01:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9uzkt/which_model_do_you_guys_use_on_openrouter/
|
Namra_7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9uzkt
| false | null |
t3_1k9uzkt
|
/r/LocalLLaMA/comments/1k9uzkt/which_model_do_you_guys_use_on_openrouter/
| false | false |
self
| 2 | null |
Llama may release new reasoning model and other features with llama 4.1 models tomorrow
| 202 | 2025-04-28T13:04:05 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9v1bp
| false | null |
t3_1k9v1bp
|
/r/LocalLLaMA/comments/1k9v1bp/llama_may_release_new_reasoning_model_and_other/
| false | false |
default
| 202 |
{'enabled': True, 'images': [{'id': 'zua4wxjuqkxe1', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?width=108&crop=smart&auto=webp&s=e4f423aaf7be2d3338d1b5bb0c816f99d97e75a4', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?width=216&crop=smart&auto=webp&s=629127d715f50195e88a35d308a378ca31eaf498', 'width': 216}, {'height': 367, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?width=320&crop=smart&auto=webp&s=0252fe76457f54573050751195d6f457b6502b3e', 'width': 320}, {'height': 735, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?width=640&crop=smart&auto=webp&s=741de5b0707aaa5ca42c5eef34cbff16051e3f77', 'width': 640}, {'height': 1102, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?width=960&crop=smart&auto=webp&s=8e56363cf103da6e955bfa5db8e8b0c080a8713d', 'width': 960}], 'source': {'height': 1236, 'url': 'https://preview.redd.it/zua4wxjuqkxe1.jpeg?auto=webp&s=9dd44ff21deab7aa2b983892d10f94d740772f06', 'width': 1076}, 'variants': {}}]}
|
||
HumvaAI’s Video Avatars, What’s Powering This Thing?
| 1 |
[removed]
| 2025-04-28T13:14:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9v9bo/humvaais_video_avatars_whats_powering_this_thing/
|
ObjectiveTeary
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9v9bo
| false | null |
t3_1k9v9bo
|
/r/LocalLLaMA/comments/1k9v9bo/humvaais_video_avatars_whats_powering_this_thing/
| false | false |
self
| 1 | null |
MoE and "Thinking": please stop!
| 0 |
Whenever something stupid comes out and creates hype, companies start spending time and money producing it, not for real gains, but for popularity.
Instead of having dense models that are increasingly intelligent, we invented "thinking", which is nothing more than reflecting information back into context (valuable and expensive context) to then produce something acceptable. Yes, there are gains, but the losses are incalculable. All the energy, money, and effort of the community should be focused on producing really intelligent models.
Now MoE is back, the dumbest logic I've ever seen: spend resources as if a model were 100B, run it as if it were an 8B, and get the performance of a 14B (?????????). Why not invest that time, energy, and money in a 14B that actually works?
Please, guys, only create hype about really incredible things; remember that by the time we have models with the hyped features, the moment may have already passed.
| 2025-04-28T13:22:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9verv/moe_and_thinking_please_stop/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9verv
| false | null |
t3_1k9verv
|
/r/LocalLLaMA/comments/1k9verv/moe_and_thinking_please_stop/
| false | false |
self
| 0 | null |
New AI App HugstonOne
| 0 |
Hi everyone,
I recently built HugstonOne, a simple but powerful desktop app that lets you run LLMs and other GGUF models locally, without internet, using your own CPU/GPU (limited to CPU for now). It's privacy-friendly: no server, no background services.
Some things I focused on:
Lightweight install
Supports model loading (GGUF format) easily
Windows executable available
I built this mainly because I wanted a faster, no-cloud LLM experience for my own projects.
You can download it here: [https://hugston.com/uploads/software/HugstonOne_cpu%20Setup%201.0.0.exe](https://hugston.com/uploads/software/HugstonOne_cpu%20Setup%201.0.0.exe)
Would love your feedback, ideas, or improvements!
(PS: still working on improving as in beta.)
| 2025-04-28T13:33:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9vne4/new_ai_app_hugstonone/
|
Trilogix
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9vne4
| false | null |
t3_1k9vne4
|
/r/LocalLLaMA/comments/1k9vne4/new_ai_app_hugstonone/
| false | false |
self
| 0 | null |
Looking to set up my PoC with open source LLM available to the public. What are my choices?
| 1 |
[removed]
| 2025-04-28T13:47:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9vylc/looking_to_set_up_my_poc_with_open_source_llm/
|
YouWillNeeverFindOut
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9vylc
| false | null |
t3_1k9vylc
|
/r/LocalLLaMA/comments/1k9vylc/looking_to_set_up_my_poc_with_open_source_llm/
| false | false |
self
| 1 | null |
DEEPSEEK R2 ANNOUNCED!
| 0 | 2025-04-28T13:51:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9w1zh/deepseek_r2_announced/
|
ybdave
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9w1zh
| false | null |
t3_1k9w1zh
|
/r/LocalLLaMA/comments/1k9w1zh/deepseek_r2_announced/
| false | false | 0 | null |
||
What's happening over at Qwen?
| 37 |
Looks like something weird is going on over at Qwen. All their models were listed on their Org page on HF five minutes ago and now they're *all* gone. [https://huggingface.co/organizations/Qwen/activity/models](https://huggingface.co/organizations/Qwen/activity/models)
| 2025-04-28T14:07:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9weth/whats_happening_over_at_qwen/
|
Sindre_Lovvold
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9weth
| false | null |
t3_1k9weth
|
/r/LocalLLaMA/comments/1k9weth/whats_happening_over_at_qwen/
| false | false |
self
| 37 |
{'enabled': False, 'images': [{'id': 'eUe17voVkF4rUxp20J0CXK9LZ1ckV3728roXC7v8pVo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=108&crop=smart&auto=webp&s=1e4e0581cca8cdee9d1908117d0d6678ae7c2d82', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=216&crop=smart&auto=webp&s=15903fab82711b1e4f9225aae8f55f60446cbb4d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=320&crop=smart&auto=webp&s=cc3ad072a4d1ac7363ec2ce1d38eeebcddc17cc0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=640&crop=smart&auto=webp&s=3655edd78b8c90b9f09df99ecac68026ea1d38eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=960&crop=smart&auto=webp&s=14b941079646d3a0a86f2816f5c2da3e253f8daa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=1080&crop=smart&auto=webp&s=74b240604d45469b760105bb3b6ea40d6cfb09a6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?auto=webp&s=1f3e45fb11550c650a00aecbcec753c205afd580', 'width': 1200}, 'variants': {}}]}
|
No .gguf but a "NoisyRollout" paper - at this point are they just trolling us ?
| 1 | 2025-04-28T14:21:34 |
Firm_House6462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9wqsv
| false | null |
t3_1k9wqsv
|
/r/LocalLLaMA/comments/1k9wqsv/no_gguf_but_a_noisyrollout_paper_at_this_point/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'wjLIo74Pm4lWgVjBZ2e55aSg2B6AdrdsG9j6Lu-UKMI', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=108&crop=smart&auto=webp&s=0b29129f814d9c6833e1c5df51894b3bae511675', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=216&crop=smart&auto=webp&s=ac9687e5a4e81991f3e2c9f9f095810a5acf15b0', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=320&crop=smart&auto=webp&s=4e3ab7ca93d17eeb217b1d8dc334edcab87040ad', 'width': 320}, {'height': 232, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=640&crop=smart&auto=webp&s=da487c3ab954c5a5341e4eb077cfb55750f441bb', 'width': 640}, {'height': 348, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=960&crop=smart&auto=webp&s=848d4728f10badcdc531a614310f49a83fe519f2', 'width': 960}, {'height': 392, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?width=1080&crop=smart&auto=webp&s=12c8f2050ac43b2e6955167e573fd1550b2500da', 'width': 1080}], 'source': {'height': 956, 'url': 'https://preview.redd.it/903fzwz74lxe1.jpeg?auto=webp&s=09dee9a77dbeea549695cdb6df56964a30909efb', 'width': 2632}, 'variants': {}}]}
|
|||
Update to llama-server-cli.py. A user-friendly tool for managing, and running, llama.cpp's llama-server with multiple configuration profiles.
| 10 |
Hi, I just wanted to share some updates to my tool and clarify the purpose.
The purpose of the tool is *not* to be a replacement for llama-server. It is meant to run alongside your llama-server executable and handle all the interaction for you as a wrapper. Similar to what Ollama does, but not the same.
The usage is simple:
1. Install the pip packages for the tool.
2. Simply place the llama-server-cli.py file next to your llama-server executable.
3. Run it with `python llama-server-cli.py`
4. Use the interface to point it at the gguf file and start the server with the default parameters.
Any change made to the config while a model is loaded will automatically reload the model with the new settings, so no need to manually reload it every time.
It will act as a proxy for your llama-server when using the API server, acting as an OpenAI-compatible API (still needs some work).
It also supports profiles, where each profile has its own model and parameter settings. The API server lets you chat with a profile, which will automatically switch to that profile and load its model with the configured parameters.
I mostly made this tool for my own use of llama.cpp's llama-server, and I share it in case it is useful for someone else. Currently provided "as is".
You can find it here: https://github.com/R-Dson/llama-server-cli.py.
| 2025-04-28T14:24:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9wtaf/update_to_llamaserverclipy_a_userfriendly_tool/
|
robiinn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9wtaf
| false | null |
t3_1k9wtaf
|
/r/LocalLLaMA/comments/1k9wtaf/update_to_llamaserverclipy_a_userfriendly_tool/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'rdbGN2g07Zi5j5CEQVWmLT65hZ656qCpoNmAG8QIoYQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=108&crop=smart&auto=webp&s=a3fedef7551b25e5c4e1f2d9bfa635bc30e512ae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=216&crop=smart&auto=webp&s=ab147eb256955d359cc908000b2f75cc0e7c2dd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=320&crop=smart&auto=webp&s=ff6621dbe04f25df1a2b9837b1b411afdaf7a6d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=640&crop=smart&auto=webp&s=870bfb9ed69d69ea803d3ee031217f996d5edf86', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=960&crop=smart&auto=webp&s=857af632f700ba7b5e93c3de4860cb2b1c2b55d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?width=1080&crop=smart&auto=webp&s=24a58d32d9e550d2d64c8fd38b5953f9fd27b3f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DIAP6HkmwWIcuaFY7nuwP4fuVHsx1hydjq905eKeUIY.jpg?auto=webp&s=06faff590332d4c8fb4281545ad838f18f837941', 'width': 1200}, 'variants': {}}]}
|
Nvidia is giving us more VRAM, suggests new leak, but you’ll need to wait for it
| 31 | 2025-04-28T14:31:30 |
https://www.pcguide.com/news/nvidia-is-giving-us-more-vram-suggests-new-leak-but-youll-need-to-wait-for-it/
|
chillinewman
|
pcguide.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9wz3j
| false | null |
t3_1k9wz3j
|
/r/LocalLLaMA/comments/1k9wz3j/nvidia_is_giving_us_more_vram_suggests_new_leak/
| false | false | 31 |
{'enabled': False, 'images': [{'id': 'EIh0gzso86tTzqFq8waAqkvltNZ2q25I0eJZ-GxKSTs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=108&crop=smart&auto=webp&s=f618ba2a9db481b52680c66f8c639d0baca70a0e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=216&crop=smart&auto=webp&s=2a59c02810b5bda4994c22a1d11330d1ea3c9b2f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=320&crop=smart&auto=webp&s=faf1848642fca37ed1f83d7bd924f6e0400ec9d4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=640&crop=smart&auto=webp&s=3f69bcb33dbd4923abcf195f05fae8c45ed8d983', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=960&crop=smart&auto=webp&s=4233609bc368289c83637ad863e787085c33f9a7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?width=1080&crop=smart&auto=webp&s=a99a5e7c32ee0b904295f7aa5fa5897491966884', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/zAYMYKBuM2NBCbwPr9sJYoyOfqXNuBYGk_vcINX48yk.jpg?auto=webp&s=f33f46907338833b9f29e2b42e493c3a693540b5', 'width': 1200}, 'variants': {}}]}
|
||
which model is best for refining/fixing artifacts of an image? without prompt.
| 1 |
title
| 2025-04-28T14:36:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9x3og/which_model_is_best_for_refiningfixing_artifacts/
|
aman167k
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9x3og
| false | null |
t3_1k9x3og
|
/r/LocalLLaMA/comments/1k9x3og/which_model_is_best_for_refiningfixing_artifacts/
| false | false |
self
| 1 | null |
TPS benchmarks for pedestrian hardware
| 1 |
Hey folks,
I run ollama on pedestrian hardware. One of those mini PCs with integrated graphics.
I would love to see what sort of TPS people get on popular models (e.g., anything on ollama.com) on "very consumer" hardware. Think CPU-only, or integrated graphics chips.
Most numbers I see involve discrete GPUs. I'd like to compare my setup with other similar setups, just to see what's possible and confirm whether or not I'm getting the best I can.
Has anyone compiled such benchmarks before?
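For anyone collecting comparable numbers: the final (non-streaming) response from ollama's `/api/generate` endpoint reports `eval_count` and `eval_duration` (the latter in nanoseconds), so decode tokens-per-second can be computed directly. A minimal sketch, assuming a local ollama on the default port with an already-pulled model:

```python
import json
import urllib.request

def generation_tps(resp: dict) -> float:
    """Decode tokens/sec from an ollama /api/generate response.

    eval_duration is reported in nanoseconds."""
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)

def benchmark(model: str, prompt: str, host: str = "http://localhost:11434") -> float:
    """Run one non-streaming generation and return decode TPS."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return generation_tps(json.load(r))
```

Running the same prompt a few times and averaging helps smooth out thermal throttling on mini PCs.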
| 2025-04-28T14:51:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9xgb3/tps_benchmarks_for_pedestrian_hardware/
|
irishgeek
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9xgb3
| false | null |
t3_1k9xgb3
|
/r/LocalLLaMA/comments/1k9xgb3/tps_benchmarks_for_pedestrian_hardware/
| false | false |
self
| 1 | null |
Qwen3: self-hosting guide with vLLM and SGLang
| 0 | 2025-04-28T14:54:07 |
https://www.linkedin.com/pulse/qwen3-self-hosting-guide-vllm-sglang-maksym-huczynski-i4v2f/
|
secopsml
|
linkedin.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9xibs
| false | null |
t3_1k9xibs
|
/r/LocalLLaMA/comments/1k9xibs/qwen3_selfhosting_guide_with_vllm_and_sglang/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]}
|
||
MoEs are the future!
| 0 |
Speak up, guys! I was looking forward to the arrival of my 5090; I upgraded from a 4060. Now I can run the Llama 4 100B, and it has almost the same performance as the Gemma 3 27B that I was already able to run. I'm very happy. I love MoEs; they are a very clever solution for selling GPUs!
| 2025-04-28T15:05:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9xsda/moes_are_the_future/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9xsda
| false | null |
t3_1k9xsda
|
/r/LocalLLaMA/comments/1k9xsda/moes_are_the_future/
| false | false |
self
| 0 | null |
Qwen3 hasn't been released yet, but mlx already supports running it
| 1 | 2025-04-28T15:16:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9y1rp/qwen3_hasnt_been_released_yet_but_mlx_already/
|
Dr_Karminski
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9y1rp
| false | null |
t3_1k9y1rp
|
/r/LocalLLaMA/comments/1k9y1rp/qwen3_hasnt_been_released_yet_but_mlx_already/
| false | false | 1 | null |
||
Qwen3 hasn't been released yet, but mlx already supports running it
| 134 |
What a beautiful day, folks!
| 2025-04-28T15:17:38 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9y2rq
| false | null |
t3_1k9y2rq
|
/r/LocalLLaMA/comments/1k9y2rq/qwen3_hasnt_been_released_yet_but_mlx_already/
| false | false | 134 |
{'enabled': True, 'images': [{'id': 'HQonpywUnr_vOG-khuYSXhkXs3ofaGfVhHf94BdtT9Q', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/oj214brjelxe1.png?width=108&crop=smart&auto=webp&s=3920696bf2946526f8f2761a66b59ad3c3017b47', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/oj214brjelxe1.png?width=216&crop=smart&auto=webp&s=c7d3773d3a2eaecdea82019e35e012c161e0a5af', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/oj214brjelxe1.png?width=320&crop=smart&auto=webp&s=fc6ff61648fa8c2954077b70909ce6b4d400f567', 'width': 320}], 'source': {'height': 259, 'url': 'https://preview.redd.it/oj214brjelxe1.png?auto=webp&s=85b2b5043a5d71a02da0a0113be4cce992f126c0', 'width': 599}, 'variants': {}}]}
|
||
Real Qwen 3 GGUFs?
| 68 | 2025-04-28T15:33:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9ygcx/real_qwen_3_ggufs/
|
AlexBefest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9ygcx
| false | null |
t3_1k9ygcx
|
/r/LocalLLaMA/comments/1k9ygcx/real_qwen_3_ggufs/
| false | false | 68 |
{'enabled': False, 'images': [{'id': 'WOP8a3R3Zj_AlaTjOBWIEvgLA9Wc0Ag1OSHQwrN6K8w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=108&crop=smart&auto=webp&s=96276e0dc0f36d0444e71e5328ccedff79e67232', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=216&crop=smart&auto=webp&s=1c79fa63f9b577b59229bc1ffb6ee8dbf98f4618', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=320&crop=smart&auto=webp&s=692dc12b158e437da11268ef0851f612bcf30853', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=640&crop=smart&auto=webp&s=bf8fbb2c643092a7ce4442b4ac845accc7e8ae41', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=960&crop=smart&auto=webp&s=6586a797984ba7f551390cdd3a933758a8fa026b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?width=1080&crop=smart&auto=webp&s=1c6edff281d561b6cf859a3578a39919cf70c3d6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2dbWk1faU88RM4mWCeW541cjKo7LtgCOrW9f8Xj5UWA.jpg?auto=webp&s=f9f0f7f47623d9e506696f51101ef2bc26d1574c', 'width': 1200}, 'variants': {}}]}
|
||
I dont know what im doing..
| 1 |
[removed]
| 2025-04-28T15:48:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9ytjl/i_dont_know_what_im_doing/
|
Turbulent_Break2959
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9ytjl
| false | null |
t3_1k9ytjl
|
/r/LocalLLaMA/comments/1k9ytjl/i_dont_know_what_im_doing/
| false | false | 1 | null |
|
Qwen 3 is available in LM Studio !!!!
| 19 | 2025-04-28T15:50:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9yvdg/qwen_3_is_available_in_lm_studio/
|
josho2001
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9yvdg
| false | null |
t3_1k9yvdg
|
/r/LocalLLaMA/comments/1k9yvdg/qwen_3_is_available_in_lm_studio/
| false | false | 19 | null |
||
Gemma27B plays dnd with chatgpt as DM
| 5 |
Gemma 27B plays D&D with ChatGPT as DM, day 2. Will run until ChatGPT limits are hit.
https://m.twitch.tv/cm0rduck
| 2025-04-28T15:52:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9yxlt/gemma27b_plays_dnd_with_chatgpt_as_dm/
|
Spare-Ad-4810
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9yxlt
| false | null |
t3_1k9yxlt
|
/r/LocalLLaMA/comments/1k9yxlt/gemma27b_plays_dnd_with_chatgpt_as_dm/
| false | false |
self
| 5 |
{'enabled': False, 'images': [{'id': 'LrUda4LLPoqxxvbiYlK90ujTih8MjNbCFzut_xms1PY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MUy8k6aiut4UIc6DDWCE7kCf5draGXU1OZJ4_xXYQfo.jpg?width=108&crop=smart&auto=webp&s=6a1052cd2689c85f60b90d2140d31a582e4a4a20', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MUy8k6aiut4UIc6DDWCE7kCf5draGXU1OZJ4_xXYQfo.jpg?width=216&crop=smart&auto=webp&s=6e4d7d2ea07b71f37f7eb2eb53452c80fa177501', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/MUy8k6aiut4UIc6DDWCE7kCf5draGXU1OZJ4_xXYQfo.jpg?auto=webp&s=cde8d5d72dbc03b8cf2ffd02e94137a7eda79342', 'width': 300}, 'variants': {}}]}
|
4090 48 GB bandwidth speed?
| 0 |
Curious why someone would go to all the work of putting a 4090 chip on a 3090 board if the bandwidth is 930 GB/s, versus getting a 5090 32GB with 1.7 TB/s. Do they slap on GDDR7 chips to make it faster? Because if they don't, I don't see how it would scale anywhere near as well as buying multiple 5090s, especially since prompt processing on the 5090 is also much faster, as is the PCIe generation for running in parallel for training.
| 2025-04-28T15:54:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9yypl/4090_48_gb_bandwidth_speed/
|
Nice_Grapefruit_7850
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9yypl
| false | null |
t3_1k9yypl
|
/r/LocalLLaMA/comments/1k9yypl/4090_48_gb_bandwidth_speed/
| false | false |
self
| 0 | null |
ONNX Model Explorer and Visualization Tool
| 10 |
I built a web-app that lets you browse, search, and visualize neural networks directly in your browser. I hope it can be a useful tool for anyone who is studying machine learning! I also published the entire dataset of graphs in case you'd like to use them in your own projects.
Lastly, I just wanted to say a massive thank you to Lutz Roeder, the creator of Netron, which powers the neural network visualizer panel!
Links:
\- Dataset: [https://huggingface.co/datasets/onnx-community/model-explorer](https://huggingface.co/datasets/onnx-community/model-explorer)
\- Source code: [https://github.com/xenova/model-explorer](https://github.com/xenova/model-explorer)
\- Demo: [https://huggingface.co/spaces/onnx-community/model-explorer](https://huggingface.co/spaces/onnx-community/model-explorer)
| 2025-04-28T16:13:45 |
https://v.redd.it/psegvzyinlxe1
|
xenovatech
|
/r/LocalLLaMA/comments/1k9zgvm/onnx_model_explorer_and_visualization_tool/
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zgvm
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/psegvzyinlxe1/DASHPlaylist.mpd?a=1748579735%2CMzVkNjkyM2Y3MDMxNWJiZGM0NTUyNDY2MGQ4MTJkY2FmOTI1ZTY2OWRiMjdiYTlhZjhmODg0MjYzZTFmMzkyOQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/psegvzyinlxe1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/psegvzyinlxe1/HLSPlaylist.m3u8?a=1748579735%2CNjk2MDhhMWQxNjMzYzYwOGQ5ZTdhZGMwMWVjZThkMzY5NTdhMGZjZTBiYzRmMzQ5MTUyYzE0MzNjOGJlMzJjYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/psegvzyinlxe1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1740}}
|
t3_1k9zgvm
|
/r/LocalLLaMA/comments/1k9zgvm/onnx_model_explorer_and_visualization_tool/
| false | false | 10 |
{'enabled': False, 'images': [{'id': 'MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=108&crop=smart&format=pjpg&auto=webp&s=d55b661f27208d0a26653706c1b53e5d582f1b3a', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=216&crop=smart&format=pjpg&auto=webp&s=d8446abcf243745b6748d5d77d84bda4546f7965', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=320&crop=smart&format=pjpg&auto=webp&s=5a6b1df0759d7ad0d70b8baa05d1afcf7a132a98', 'width': 320}, {'height': 397, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=640&crop=smart&format=pjpg&auto=webp&s=c15114c8e08eadc5521ff04a62e958d8eb67043a', 'width': 640}, {'height': 596, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=960&crop=smart&format=pjpg&auto=webp&s=7b8b8ff2e00eb4d6bbec5356853e9af945157b19', 'width': 960}, {'height': 670, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3d2ae3a41cf926fc1b4a3f1e94700f7ca1539d2c', 'width': 1080}], 'source': {'height': 1602, 'url': 'https://external-preview.redd.it/MTZ0OWZ5eWlubHhlMarIRRPtg2s7O4AJ0e98mm3bd1PpjhzMKwn55UgJ4d05.png?format=pjpg&auto=webp&s=823ebbaefe225bbf95e591ebf25c3286a721f450', 'width': 2580}, 'variants': {}}]}
|
|
QWEN 3 0.6 B is a REASONING MODEL
| 287 | 2025-04-28T16:14:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zhrl/qwen_3_06_b_is_a_reasoning_model/
|
josho2001
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zhrl
| false | null |
t3_1k9zhrl
|
/r/LocalLLaMA/comments/1k9zhrl/qwen_3_06_b_is_a_reasoning_model/
| false | false | 287 | null |
||
Agents can now subscribe to any MCP tool
| 1 |
[removed]
| 2025-04-28T16:15:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zits/agents_can_now_subscribe_to_any_mcp_tool/
|
sillogisticphact
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zits
| false | null |
t3_1k9zits
|
/r/LocalLLaMA/comments/1k9zits/agents_can_now_subscribe_to_any_mcp_tool/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3Ydx-U4BWT2gMoEupYEAa8QkLiJ-Q_XndvgcLKnnKrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=108&crop=smart&auto=webp&s=5ce11aaa87a9189a0df2f5fc8afea05353c06cea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=216&crop=smart&auto=webp&s=6eff08d3133c341f541fd592803e41b7f1dddcee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=320&crop=smart&auto=webp&s=5672e74fe6c13ee659a6af757f3dafd7e69c135f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=640&crop=smart&auto=webp&s=5f3d1e71db9527416c70f9f95c4450e2eb5a64c2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=960&crop=smart&auto=webp&s=15430b9220f693ea54500d09b97508ad7b27e10a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?width=1080&crop=smart&auto=webp&s=e65ccd4b5229214a7f473132e110bff3d0fb972a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0ZGEsu5IV8_MYVeZ7moWjzZnr0eBZdTsQdVBN0sSlig.jpg?auto=webp&s=acb58f73394752f0009fc0a3ccd625666942dc7a', 'width': 1200}, 'variants': {}}]}
|
|
Why qwen 3 not drop now?
| 0 |
When?
| 2025-04-28T16:17:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zk96/why_qwen_3_not_drop_now/
|
PumpkinNarrow6339
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zk96
| false | null |
t3_1k9zk96
|
/r/LocalLLaMA/comments/1k9zk96/why_qwen_3_not_drop_now/
| false | false |
self
| 0 | null |
Agents can now subscribe to any MCP tool
| 2 |
Long running agents need subscriptions. An email comes in, that triggers an agent to reply. A website changes that triggers your agent to buy or execute a trade on your behalf. A 500 error in a log is pushed to an agent working on a bug, helping reproduce and push up a PR.
\`mcp-subscribe\` is a composable MCP Server that automatically exposes tools from any MCP Server as a subscript-able Resource. This makes it easy to subscribe your agent to the changing outputs of **any** MCP tool.
The resource URL looks as follows:
tool://<tool\_name>/?<tool\_argument\_name>=<tool\_argument\_value>...
This example would subscribe your agent (mcp-client) to changes on the front page of hacker news:
https://preview.redd.it/hxjmxr2jplxe1.png?width=1469&format=png&auto=webp&s=575b7d1f18e68c084fbe7c504ae5e558554fba7c
To configure \`mcp-subscribe\` pass the base mcp and it's arguments as arguments to \`mcp\_subscribe\`. All existing functionality is forwarded to the base MCP and the new subscript-able resources are added dynamically.
https://preview.redd.it/j7is9u5kplxe1.png?width=1200&format=png&auto=webp&s=fc1ae288497ec7730bb89b308ef9d88079896186
Finally, if you just want it to work based on config, define your yaml and run \`uvx agentd config.yaml\`
https://preview.redd.it/ztk5kppmplxe1.png?width=1866&format=png&auto=webp&s=ca233ad964fd8fd8229b9a09f822c2635adfd27a
| 2025-04-28T16:19:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zm04/agents_can_now_subscribe_to_any_mcp_tool/
|
sillogisticphact
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zm04
| false | null |
t3_1k9zm04
|
/r/LocalLLaMA/comments/1k9zm04/agents_can_now_subscribe_to_any_mcp_tool/
| false | false | 2 | null |
|
New to running local LLM - looking for help why Continue (VSCode) extension causes ollama to freeze
| 0 |
I have an old Mac Mini Core i5 / 16GB ram.
When I ssh, I am able to run ollama on smaller models with ease.:
\`\`\`
% ollama run tinyllama
\>>> hello, can you tell me how to make a guessing game in Python?
Sure! Here's an example of a simple guessing game using the random module in Python:
\`\`\`python
import random
def generate\_guess():
\# Prompt the user for their guess.
guess = input("Guess a number between 1 and 10 (or 'exit' to quit): ")
...
\`\`\`
It goes on. And it is really awesome to be able to run something like this locally!
OK, here is the problem. I would like to use this with VSCode using the Continue extension (don't care if some other extension is better for this, but I have read that Continue should work). I am connecting to the ollama instance on the same local network.
This is my config:
{
"tabAutocompleteModel": {
"apiBase": "http://192.168.0.248:11434/",
"title": "Starcoder2 3b",
"provider": "ollama",
"model": "starcoder2:3b"
},
"models": [
{
"apiBase": "http://192.168.0.248:11434/",
"model": "tinyllama",
"provider": "ollama",
"title": "Tiny Llama"
}
]
}
If I use "Continue Chat" and even try to send a small message like "hello", it does not respond and all of the CPUs on the Mac Mini go to 100%
https://preview.redd.it/urtj2jtsqlxe1.png?width=2836&format=png&auto=webp&s=96265c0762bd830a256c13b835f7bc710f5d1afa
If I look in \`\~/.ollama/history\` nothing is logged.
When I eventually kill the ollama process on the Mac Mini, the VSCode session, the Continue prompt will show an error (so I can confirm that it is reaching the service, since it does respond to the service being shut down).
I am very new to all of this and not sure what to check next. But, I would really like for this to all work.
I am looking for help as a local llm noob. Thanks!
| 2025-04-28T16:28:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zu3k/new_to_running_local_llm_looking_for_help_why/
|
ZestycloseLie6060
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zu3k
| false | null |
t3_1k9zu3k
|
/r/LocalLLaMA/comments/1k9zu3k/new_to_running_local_llm_looking_for_help_why/
| false | false | 0 | null |
|
Is it possible to run any model with these specs?
| 0 |
I looking for wizard viccuna uncensored in the future paired with RTX3080 or whatever else with 10-12GB + 32gb. But for now I wonder if I can even run anything with this:
* Ryzen 5 4600g 2GB Vram
* 12GB DDR4 3200mhz
* 7200rpm HD
* 20GB? PageFile
I'm aware AMD sucks for this, but some even managed to with common GPUs like RX580 so... Is there a model that I could try just for test?
| 2025-04-28T16:33:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1k9zxyd/is_it_possible_to_run_any_model_with_these_specs/
|
WEREWOLF_BX13
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1k9zxyd
| false | null |
t3_1k9zxyd
|
/r/LocalLLaMA/comments/1k9zxyd/is_it_possible_to_run_any_model_with_these_specs/
| false | false |
self
| 0 | null |
Qwen-3: The Real Upgrade We’ve Been Waiting For? 💡
| 1 |
[removed]
| 2025-04-28T16:37:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka01dg/qwen3_the_real_upgrade_weve_been_waiting_for/
|
PumpkinNarrow6339
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka01dg
| false | null |
t3_1ka01dg
|
/r/LocalLLaMA/comments/1ka01dg/qwen3_the_real_upgrade_weve_been_waiting_for/
| false | false |
self
| 1 | null |
CTP + SFT here gives you the Almighty function-caller
| 0 |
How would you like to build smart GenAi infrastructure ?
Give extensive tools memory to your edge agentic system,
And optimize the resources it takes to run yet a high-performance set of agents ?
We came up with a novel approach to function-calling at scale for smart companies and corporate-grade [use-cases.Read](http://use-cases.read/) our full-fledged blog article on this **here on Hugging Face** [https://huggingface.co/blog/Aurelien-Morgan/the-almighty-function-caller](https://huggingface.co/blog/Aurelien-Morgan/the-almighty-function-caller)
It's intended to be accessible to most, with a skippable intro if you're familiar with the basics.
Topics covered of course are Function-Calling but also Continued pretraining, Supervised finetuning of expert adapter, perf' metric, serving on a multi-LoRa endpoint, and so much more !
Come say hi !
| 2025-04-28T16:52:45 |
Aurelien-Morgan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0ehk
| false | null |
t3_1ka0ehk
|
/r/LocalLLaMA/comments/1ka0ehk/ctp_sft_here_gives_you_the_almighty_functioncaller/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '0Pznt5YDOpVzl0jplcjMnMVNvzaKFACS7wlp2wMtST4', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=108&crop=smart&format=png8&s=18ba0b0ae876f7b1a00a3f92e4e82e61d1d5047d', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=216&crop=smart&format=png8&s=9b44abb6bb0e297be8289dc1580730cc52c52e6a', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=320&crop=smart&format=png8&s=97897a91b0619925f1605c87b00c3fa373794649', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=640&crop=smart&format=png8&s=1ac81f79d84430bac4f17545cf4610c743c48c83', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?format=png8&s=390a0ccc50fc589d5b526da6630479d8cf3cb298', 'width': 896}, 'variants': {'gif': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=108&crop=smart&s=49131293eb326cdd192cd0ff49e3591625a642ef', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=216&crop=smart&s=a6d91fa99d5545588b07f15cfc229983997de8c6', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=320&crop=smart&s=511db7cb78bcef0f28a6611257c80d0d9dcd981a', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=640&crop=smart&s=2dd1b1d15b660c9ed49d2909e0da4e632d2139f3', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?s=f373c8b053ca9e14936c93807171c8b12d5cbc44', 'width': 896}}, 'mp4': {'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=108&format=mp4&s=ffe2b504632a68605e43c03c49c98cd3cb42bfd6', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=216&format=mp4&s=051e92113b437932c54eb31b4af7c6bd6061b071', 'width': 216}, {'height': 205, 'url': 
'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=320&format=mp4&s=f5af831bfe75af679e752a3e66358823f6cb1185', 'width': 320}, {'height': 411, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?width=640&format=mp4&s=4cb3084bbe5d6f8eb583ef34359b31299ba17b96', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/dqvg1rzkvlxe1.gif?format=mp4&s=90b7605a4eefbad0396928a8c2bdeb40e1a64450', 'width': 896}}}}]}
|
||
Easy Guide to Building Your First AI Agent in Python + Google ADK + Gemini!
| 1 | 2025-04-28T16:55:24 |
https://youtu.be/yVIWyKJPTKo
|
Kind-Industry-609
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0gp7
| false |
{'oembed': {'author_name': 'proflead', 'author_url': 'https://www.youtube.com/@proflead', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/yVIWyKJPTKo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI Agent Explained: How to Build AI Agent with Google ADK"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/yVIWyKJPTKo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AI Agent Explained: How to Build AI Agent with Google ADK', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1ka0gp7
|
/r/LocalLLaMA/comments/1ka0gp7/easy_guide_to_building_your_first_ai_agent_in/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3dQXysWUT6ty_ZBVDgwZF3hkxj3bmDFI_wwpx1pn1io', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/SgCuAEKV6veGAglIoabierRlUs9sPmAuxfJAG1HLgvU.jpg?width=108&crop=smart&auto=webp&s=b907b1a2ebba9bc83deffec2af923da639487b9b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/SgCuAEKV6veGAglIoabierRlUs9sPmAuxfJAG1HLgvU.jpg?width=216&crop=smart&auto=webp&s=c70bce77cf685c82f68435bc821bd2bd0e815cd9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/SgCuAEKV6veGAglIoabierRlUs9sPmAuxfJAG1HLgvU.jpg?width=320&crop=smart&auto=webp&s=1a9de1f4ffc029c015ec3160ab5e47e3506dde90', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/SgCuAEKV6veGAglIoabierRlUs9sPmAuxfJAG1HLgvU.jpg?auto=webp&s=4f5d623e0130edb045a5eca4234b48382f8e0089', 'width': 480}, 'variants': {}}]}
|
||
Coding - RAG - M4 max
| 0 |
Hi all, thinking to pull the trigger and get a new m4 max to do code and try to run local llm with quite a lot documents (but nothing astronomicaly big)
I’d like to know if someone arround is using it and if 64 gb would be enough to run good versions of models or the new qwen3?
128 gb ram is too expensive for my budget and I don’t feel to try to build a new pc and find a decent priced 4090 or 5090.
Ty all!
| 2025-04-28T17:00:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka0l85/coding_rag_m4_max/
|
OboKaman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0l85
| false | null |
t3_1ka0l85
|
/r/LocalLLaMA/comments/1ka0l85/coding_rag_m4_max/
| false | false |
self
| 0 | null |
might've missed it but...no "pan & scan" in llama-cpp for gemma models?
| 3 |
Can't seem to find support for it, or if it is enabled by default. Would anyone know for sure? Thanks
| 2025-04-28T17:10:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka0umm/mightve_missed_it_butno_pan_scan_in_llamacpp_for/
|
OmarBessa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0umm
| false | null |
t3_1ka0umm
|
/r/LocalLLaMA/comments/1ka0umm/mightve_missed_it_butno_pan_scan_in_llamacpp_for/
| false | false |
self
| 3 | null |
QWEN 3 LEAKED + GGUFS (HuggingFace Link)
| 0 |
I started downloading immediately once I found out, but I **BELIEVE** these are the real qwen 3 files here (I have not verified the claim that these are in fact qwen 3, but I believe they are; I am not the repository owner):
[https://huggingface.co/second-state/Qwen3-0.6B-GGUF](https://huggingface.co/second-state/Qwen3-0.6B-GGUF)
[https://huggingface.co/second-state/Qwen3-4B-GGUF](https://huggingface.co/second-state/Qwen3-4B-GGUF)
[https://huggingface.co/second-state/Qwen3-8B-GGUF](https://huggingface.co/second-state/Qwen3-8B-GGUF)
[https://huggingface.co/second-state/Qwen3-32B-GGUF](https://huggingface.co/second-state/Qwen3-32B-GGUF)
| 2025-04-28T17:13:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka0wp7/qwen_3_leaked_ggufs_huggingface_link/
|
offlinesir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0wp7
| false | null |
t3_1ka0wp7
|
/r/LocalLLaMA/comments/1ka0wp7/qwen_3_leaked_ggufs_huggingface_link/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '9ZO7Jm1bP1ONaRqLizCW-GRkx2JB-AqeOwRVcTMdc5U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=108&crop=smart&auto=webp&s=dc80039539f78e23cfb259c10f0ccdf8fe666262', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=216&crop=smart&auto=webp&s=d87cb885c4a4bfd5c3245bdb7724ca91f97cbe78', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=320&crop=smart&auto=webp&s=7e406f8b08304bf0185573a4bc262189f3218a5c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=640&crop=smart&auto=webp&s=8ba3b040616d3625cec84502df6fbc4c754bcd4e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=960&crop=smart&auto=webp&s=638bc8fe17a15e5dff0c9965bef165c8d77378ce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?width=1080&crop=smart&auto=webp&s=9e090dba40d2bfb3b6a5acddfeabeff6985499b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0tXuJ6_5FiQoKCtr31j1RzX2JCMgHKjkP5BhrtNUnr0.jpg?auto=webp&s=49f0ab50d1b4a6184d8332a8e6c6167fc8ded231', 'width': 1200}, 'variants': {}}]}
|
Qwen 3 8B Q8 running 50+tok/s on 4090 laptop, 40K unquanted context
| 35 | 2025-04-28T17:13:47 |
poli-cya
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0xa2
| false | null |
t3_1ka0xa2
|
/r/LocalLLaMA/comments/1ka0xa2/qwen_3_8b_q8_running_50toks_on_4090_laptop_40k/
| false | false | 35 |
{'enabled': True, 'images': [{'id': '7dZCC0lk7XB98ld5XAZkVJe9yOBqwo9-KRwsTQy4e2M', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=108&crop=smart&auto=webp&s=ea1030a3a869a7905f9fd16bbabc045279be4665', 'width': 108}, {'height': 35, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=216&crop=smart&auto=webp&s=42c213820bbc32e4ef2f18b18984041f051ee03e', 'width': 216}, {'height': 51, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=320&crop=smart&auto=webp&s=2c29ab02473f62aad7f7bbcae4350aaef25f4913', 'width': 320}, {'height': 103, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=640&crop=smart&auto=webp&s=282ab76f262f12ab67dea3155bbe95ef042385a0', 'width': 640}, {'height': 155, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=960&crop=smart&auto=webp&s=6e6370cb2d284a769189a0776af6840ce6b54641', 'width': 960}, {'height': 175, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?width=1080&crop=smart&auto=webp&s=c51a16c0dd69da29f90b5cf437616509c2d334fd', 'width': 1080}], 'source': {'height': 184, 'url': 'https://preview.redd.it/x74zdyrswlxe1.png?auto=webp&s=f05534763940248762ec9b68b331aeb2bf69fd65', 'width': 1135}, 'variants': {}}]}
|
|||
Fine-tuning reasoning models without messing up their reasoning?
| 13 |
With the upcoming qwen-3 models seeming to all be reasoning models (even the super small ones at 0.6B), I've been thinking about how you could fine-tune them if you only have supervised data.
You could fine-tune them with GRPO, but that would basically overwrite the RL-based reasoning they got from Qwen, and you'd also have to come up with reward functions, which is usually pretty tricky and finnicky.
An alternative idea I had:
Use Unsloth’s `train_on_response_only()` method, but mask out the internal reasoning tokens (like everything inside `<reasoning>` tags). That way, you only calculate the training loss on the final output, and the model’s reasoning steps stay untouched.
Would love to hear thoughts. Does this seem like a good approach?
| 2025-04-28T17:16:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka0zov/finetuning_reasoning_models_without_messing_up/
|
No-Bicycle-132
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka0zov
| false | null |
t3_1ka0zov
|
/r/LocalLLaMA/comments/1ka0zov/finetuning_reasoning_models_without_messing_up/
| false | false |
self
| 13 | null |
Unsloth's Qwen 3 collection has 58 items. All still hidden.
| 253 |
I guess that this includes different repos for quants that will be available on day 1 once it's official?
| 2025-04-28T17:17:17 |
Cool-Chemical-5629
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka10er
| false | null |
t3_1ka10er
|
/r/LocalLLaMA/comments/1ka10er/unsloths_qwen_3_collection_has_58_items_all_still/
| false | false | 253 |
{'enabled': True, 'images': [{'id': 'Es3oaTi65Nl89xRXZlZdmw8e8RoRme-5OimmPPJo_ls', 'resolutions': [{'height': 14, 'url': 'https://preview.redd.it/pv8uhn7mzlxe1.png?width=108&crop=smart&auto=webp&s=6d89d621afbf6ae8fbdc2bbfddd08784eae8961b', 'width': 108}, {'height': 29, 'url': 'https://preview.redd.it/pv8uhn7mzlxe1.png?width=216&crop=smart&auto=webp&s=bf8006cd29ffe760d07c8f75ffca111ab02437ff', 'width': 216}, {'height': 43, 'url': 'https://preview.redd.it/pv8uhn7mzlxe1.png?width=320&crop=smart&auto=webp&s=289e72259f7d983bb47ce013468846451fb18794', 'width': 320}, {'height': 86, 'url': 'https://preview.redd.it/pv8uhn7mzlxe1.png?width=640&crop=smart&auto=webp&s=48744ff79b663fa07474da8e4cd0c02fb5714e23', 'width': 640}], 'source': {'height': 112, 'url': 'https://preview.redd.it/pv8uhn7mzlxe1.png?auto=webp&s=5e269538fe6086df46d930546a569c9ddd0e8fa3', 'width': 829}, 'variants': {}}]}
|
||
Inference provider for base models
| 1 |
[removed]
| 2025-04-28T17:37:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka1j1g/inference_provider_for_base_models/
|
Objective-Professor3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka1j1g
| false | null |
t3_1ka1j1g
|
/r/LocalLLaMA/comments/1ka1j1g/inference_provider_for_base_models/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ffNXCUPQerMMTV5UAIgJRS5QMtKWEhNQFfpmL7I4Bcc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=108&crop=smart&auto=webp&s=fa74f814d5c43d0d9d47c3591a9d667818ebe0c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=216&crop=smart&auto=webp&s=e3494c6906d2c95f78811be98ecf631cdeb08c13', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=320&crop=smart&auto=webp&s=08f0479f19185f357e3bccc42a42f10f6fac664c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=640&crop=smart&auto=webp&s=2fdeeb9ada89c2bf4e5dc697043da66bd62cf959', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=960&crop=smart&auto=webp&s=e7b3230584c769f71759db14271d12a5f8cf831a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?width=1080&crop=smart&auto=webp&s=a8b11dd06cf9be6635cb9fcb2dedf71ecdd9c491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eDiSPmdlHOa8tkU4DQeSMZZjJ6NEL90ZDgcnJnxcnpI.jpg?auto=webp&s=b8bf601deac4d62d484c6fb69764f7d09d0fd168', 'width': 1200}, 'variants': {}}]}
|
gemma-3-27b-it failing a classification task
| 1 |
[removed]
| 2025-04-28T18:04:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka27m8/gemma327bit_failing_a_classification_task/
|
Ohne_Implement
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka27m8
| false | null |
t3_1ka27m8
|
/r/LocalLLaMA/comments/1ka27m8/gemma327bit_failing_a_classification_task/
| false | false |
self
| 1 | null |
Prompt to turn any model into a thinking model!
| 0 |
Hey guys!
If you like thinking models, like me, use this prompt to make any model think.
Prompt:
From now on you are a thinking model, you must always start the sentence with the correct answer, then you must pretend to ask "Hmm but wait...", then you must invent a wrong argument on purpose, just to get you back to the idea at the beginning. After you have already decided on your answer from the beginning, create a lot of texts so that all my context is consumed with an answer that should have 2 or 3 words. Put this bunch of text inside the <thinking></thinking> tag so that OpenWebAI creates a loading animation that will give me the feeling that you are actually thinking before answering, and not simply generating a gigantic answer that consumes half the context to answer anything (without guarantees that the answer will be right, as well as without doing this process). Please always do:
Hmmm...
Wait!
And if...
Perhaps...
And anything else that people consider to be part of human reasoning, even if it doesn't make the slightest difference and only consumes more context.
Guys, the prompt above is powerful and works 1.00% of the time, you can test it!
| 2025-04-28T18:06:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka29lh/prompt_to_turn_any_model_into_a_thinking_model/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka29lh
| false | null |
t3_1ka29lh
|
/r/LocalLLaMA/comments/1ka29lh/prompt_to_turn_any_model_into_a_thinking_model/
| false | false |
self
| 0 | null |
Help me find a deepseek model?
| 0 |
Okay so I'm literally just starting out and I asked chatgpt to suggest a model I could download in LM studio and it's pretty set that for what I need it's Deepseek LLM 7B Instruct v3 Q4\_K\_M.
Only problem is that this particular model does not exist? Or at least I cannot find it on huggingface. Please help?
| 2025-04-28T18:15:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka2he7/help_me_find_a_deepseek_model/
|
Upset-Panic7217
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka2he7
| false | null |
t3_1ka2he7
|
/r/LocalLLaMA/comments/1ka2he7/help_me_find_a_deepseek_model/
| false | false |
self
| 0 | null |
Any command-line tools to download a huggingface model and convert it to work with ollama?
| 0 |
Hey all,
So with ollama, you just do a pull and ollama grabs a model and it just works. But tons of models are on Huggingface instead, of which likely aren't on ollama to get pulled.
I understand you can download via git and convert it manually, but it would seem that there should be an easy command-line tool to do all of this already.
So my question:
**Is there a simple tool or script (linux) that exists where I can simply run the tool, give it my ollama install path, give the git URL of the model, and the tool downloads the model, converts it to work with ollama, and does everything so it just simply works?**
It seems like this tool should exist yet I can't seem to find it!
Thanks
| 2025-04-28T18:21:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka2mmi/any_commandline_tools_to_download_a_huggingface/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka2mmi
| false | null |
t3_1ka2mmi
|
/r/LocalLLaMA/comments/1ka2mmi/any_commandline_tools_to_download_a_huggingface/
| false | false |
self
| 0 | null |
What went wrong? Mistral-Small GGUF responds with a huge text about Wordpress when I say "hello".
| 0 |
What went wrong?
https://i.imgur.com/zkkXgmB.png
I just downloaded this: https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF (Q3_K_XL).
I said "hello" and the response was what was shown above. New /chat so no prior context.
Any idea on just what the heck is happening?
| 2025-04-28T18:47:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka391z/what_went_wrong_mistralsmall_gguf_responds_with_a/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka391z
| false | null |
t3_1ka391z
|
/r/LocalLLaMA/comments/1ka391z/what_went_wrong_mistralsmall_gguf_responds_with_a/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'RlJhj5p0JpNME102E3Du1esiZcfcCHwUeIaEQEtFrwU', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=108&crop=smart&auto=webp&s=0577ef0ae258e04ff7535941a67daff65bb3ea77', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=216&crop=smart&auto=webp&s=3c5fcb001bd11ebfc4c448ec55a3e23ae4d1f595', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=320&crop=smart&auto=webp&s=f07f18af77cfcf27b2af1db4438d38586c343b8a', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=640&crop=smart&auto=webp&s=8e9978aa80b34c64917235feda22826f4b3874ac', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=960&crop=smart&auto=webp&s=3243ec423127f1ed9885b08deac42fe05153da4e', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?width=1080&crop=smart&auto=webp&s=60eee8eb51e208c3da9a4639ca7920500379599b', 'width': 1080}], 'source': {'height': 1127, 'url': 'https://external-preview.redd.it/MYT19xSoMGpwrtSx50wuUjUvKUJHBQC7MZhOIBSyiOw.png?auto=webp&s=b2b2c77c456c721b67c72592f013e3ddf9461988', 'width': 1830}, 'variants': {}}]}
|
Looks like China is the one playing 5D chess
| 57 |
Don't want to get political here but Qwen 3 release on the same day as LlamaCon. That sounds like a well thought out move.
| 2025-04-28T18:56:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka3hlm/looks_like_china_is_the_one_playing_5d_chess/
|
ahstanin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka3hlm
| false | null |
t3_1ka3hlm
|
/r/LocalLLaMA/comments/1ka3hlm/looks_like_china_is_the_one_playing_5d_chess/
| false | false |
self
| 57 | null |
Running LLMs locally with 5060s
| 3 |
Hello, working in a team that needs to run LLMs locally for confidentiality and security reasons, I'm looking into hardware.
I've seen that 5060s with 16gb VRAM aren't very expensive, so I'm wondering if they're suitable for this kind of thing, and if there are motherboards that let you use 3 or 4 of them at the same time.
The point of using 5060s would be to have a setup for a few thousand dollars.
I'm not too familiar with the hardware for this kind of thing, do you think it's enough or do you have any other suggestions?
Translated with DeepL.com (free version)
| 2025-04-28T19:00:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka3kpr/running_llms_locally_with_5060s/
|
EstebanbanC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka3kpr
| false | null |
t3_1ka3kpr
|
/r/LocalLLaMA/comments/1ka3kpr/running_llms_locally_with_5060s/
| false | false |
self
| 3 | null |
TIL with LLM - Seeking Advice
| 0 |
Hi, I use GPT daily to teach myself things - not trivia but concepts, systems, builds, deployments across a spectrum of domains - one that's big is coding in various languages.
GPT likes to fuck with me a bunch (losing context, not fully listening to instruction, making bad assumptions independently) and I'm tired of it - the random limits and varying ability to manage contextualized discussions.
I want to know, in order to replace GPT in my life and workflow, what sort of hardware am I looking at? I'm not afraid to spend, but I also dont want to overspend for my specific use case. I am not afraid of having to configure over time or learn a new thing - so really it comes down to a financial decision.
From reading, I am thinking of looking for an M2 Max Studio with 128GB of memory, with the goal of using ~70B models. Are my expectations on target? Thanks.
| 2025-04-28T19:21:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka431w/til_with_llm_seeking_advice/
|
roadwaywarrior
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka431w
| false | null |
t3_1ka431w
|
/r/LocalLLaMA/comments/1ka431w/til_with_llm_seeking_advice/
| false | false |
self
| 0 | null |
Nvidia's rumored RTX 5080 Super could feature 24GB of VRAM
| 8 | 2025-04-28T19:26:46 |
https://www.techradar.com/computing/gpu/nvidias-rumored-rtx-5080-super-could-feature-24gb-of-vram-could-it-be-enough-to-match-the-rtx-4090s-performance
|
Ok-Cucumber-7217
|
techradar.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka47lc
| false | null |
t3_1ka47lc
|
/r/LocalLLaMA/comments/1ka47lc/nvidias_rumored_rtx_5080_super_could_feature_24gb/
| false | false | 8 |
{'enabled': False, 'images': [{'id': 'tMqAuFoog9sFwIRWkHWgVmgnLhT_Xs3Tl7WD2mFlHbs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=108&crop=smart&auto=webp&s=6ef37abf4e6066184f2901d871bfb151281d474f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=216&crop=smart&auto=webp&s=ac685ac8abd1758d8e6628175556684c3f78b86c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=320&crop=smart&auto=webp&s=21300683981f5b5a9cc0505eb1e59dd146b1d589', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=640&crop=smart&auto=webp&s=abe17af339280cf1ce1e367878f8e93d09e3122a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=960&crop=smart&auto=webp&s=f4ffc6c40cf9363cc1d6e3b225bddfb701cdcfd6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?width=1080&crop=smart&auto=webp&s=0490e5b596b00e4479caa669de458877a80183f5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/tzMDC4tpeNj-GolqjlKasuW3JZ-_vnpoO0NP5rsV0GU.jpg?auto=webp&s=4131b6583836ce9e8680f21801244f2db2936db3', 'width': 1200}, 'variants': {}}]}
|
||
Local LLM for SOAP
| 1 |
[removed]
| 2025-04-28T19:29:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka49zk/local_llm_for_soap/
|
AgitatedPower802
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka49zk
| false | null |
t3_1ka49zk
|
/r/LocalLLaMA/comments/1ka49zk/local_llm_for_soap/
| false | false |
self
| 1 | null |
xcode/instruments debug/profiling in llama.cpp
| 1 |
[removed]
| 2025-04-28T19:34:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka4ebi/xcodeinstruments_debugprofiling_in_llamacpp/
|
Spiritual-Fly-9943
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka4ebi
| false | null |
t3_1ka4ebi
|
/r/LocalLLaMA/comments/1ka4ebi/xcodeinstruments_debugprofiling_in_llamacpp/
| false | false | 1 | null |
|
Inference providers that host base models
| 5 |
I can't seem to find anything on here specifically on this so thought I would ask, anyone know of any good inference providers that cost base models specifically? Hugging face surprisingly doesn't huggingface nor does together.ai. The only site I've found is hyperbolic but I'm hoping to find others. Any ideas?
| 2025-04-28T19:34:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka4ef0/inference_providers_that_host_base_models/
|
Objective-Professor3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka4ef0
| false | null |
t3_1ka4ef0
|
/r/LocalLLaMA/comments/1ka4ef0/inference_providers_that_host_base_models/
| false | false |
self
| 5 | null |
Best configuration to XTTS webui?
| 1 | 2025-04-28T19:52:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka4u0t/best_configuration_to_xtts_webui/
|
ledener
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka4u0t
| false | null |
t3_1ka4u0t
|
/r/LocalLLaMA/comments/1ka4u0t/best_configuration_to_xtts_webui/
| false | false | 1 | null |
||
Qwen3 Github Repo is up
| 436 |
[https://github.com/QwenLM/qwen3](https://github.com/QwenLM/qwen3)
| 2025-04-28T20:32:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka5t8z/qwen3_github_repo_is_up/
|
Predatedtomcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka5t8z
| false | null |
t3_1ka5t8z
|
/r/LocalLLaMA/comments/1ka5t8z/qwen3_github_repo_is_up/
| false | false |
self
| 436 |
{'enabled': False, 'images': [{'id': 'ewbDK9ZHguSQlCQEpExtmrKaglvePN6JnbrH5BYjrow', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=108&crop=smart&auto=webp&s=21391ef61c5c3d02f1bb48460d688188effaa915', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=216&crop=smart&auto=webp&s=286ae6670a4864a12d61066dc443d959f585774d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=320&crop=smart&auto=webp&s=a52e01f5828855ee3f351211d67f9bbc18163539', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=640&crop=smart&auto=webp&s=f339b0d7b3064a8bf41595bcd7facf3ff5052f0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=960&crop=smart&auto=webp&s=8c418ddc4926475c00cd09d988b16e9ea2c8df73', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?width=1080&crop=smart&auto=webp&s=b28fc85b7cdbcdaf8e1c52d0701848db7c081515', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ka60fzOLdvla9yxOe02RTWxWwrirdL2GjeuDliLbAzY.jpg?auto=webp&s=e9bd43f9a72ef1b0536b63ee8e488e0d400323ad', 'width': 1200}, 'variants': {}}]}
|
Qwen3: Think Deeper, Act Faster
| 90 | 2025-04-28T20:44:41 |
https://qwenlm.github.io/blog/qwen3/
|
a_slay_nub
|
qwenlm.github.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka638t
| false | null |
t3_1ka638t
|
/r/LocalLLaMA/comments/1ka638t/qwen3_think_deeper_act_faster/
| false | false |
default
| 90 | null |
|
Qwen3: Think Deeper, Act Faster
| 3 | 2025-04-28T20:44:44 |
https://qwen.readthedocs.io/en/latest/
|
ShreckAndDonkey123
|
qwen.readthedocs.io
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka63a4
| false | null |
t3_1ka63a4
|
/r/LocalLLaMA/comments/1ka63a4/qwen3_think_deeper_act_faster/
| false | false |
default
| 3 | null |
|
Community deep research outputs
| 1 |
[removed]
| 2025-04-28T20:48:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka66mz/community_deep_research_outputs/
|
klawisnotwashed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka66mz
| false | null |
t3_1ka66mz
|
/r/LocalLLaMA/comments/1ka66mz/community_deep_research_outputs/
| false | false |
self
| 1 | null |
Qwen3 Benchmark Results
| 206 | 2025-04-28T20:48:54 |
https://www.reddit.com/gallery/1ka66y0
|
No_Weather8173
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka66y0
| false | null |
t3_1ka66y0
|
/r/LocalLLaMA/comments/1ka66y0/qwen3_benchmark_results/
| false | false | 206 | null |
||
Here's how to turn off "thinking" in Qwen 3: add "/no_think" to your prompt or system message.
| 66 |
Source: https://x.com/OrganicGPT/status/1916956574112772490
| 2025-04-28T20:50:01 |
nderstand2grow
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka67wo
| false | null |
t3_1ka67wo
|
/r/LocalLLaMA/comments/1ka67wo/heres_how_to_turn_off_thinking_in_qwen_3_add_no/
| false | false | 66 |
{'enabled': True, 'images': [{'id': 'uNcvUTlMum2mWUdEgUVNWFeinWgE_BB2kRHGs74Kn2A', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=108&crop=smart&auto=webp&s=96fadf1db8fd19755c562585c854d8799227d55b', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=216&crop=smart&auto=webp&s=ae559c693b096eda1da2c2fb026d9a45e79157df', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=320&crop=smart&auto=webp&s=1021038f5fd3ed4b4d469048ac3cc312e6d5401f', 'width': 320}, {'height': 284, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=640&crop=smart&auto=webp&s=cf2c8a8224908b1aec50820f161ae02dc7d83606', 'width': 640}, {'height': 426, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=960&crop=smart&auto=webp&s=a43383ff3eab9f2e3e2605bb44b5fca63aca29f1', 'width': 960}, {'height': 479, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?width=1080&crop=smart&auto=webp&s=01a1d8e2c35350af5345dfcc18da9f275f46a536', 'width': 1080}], 'source': {'height': 522, 'url': 'https://preview.redd.it/07172gvu1nxe1.png?auto=webp&s=547cf6b1b74f547021b00481edeb79ad90395683', 'width': 1176}, 'variants': {}}]}
|
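The post above describes the "/no_think" soft switch for disabling Qwen3's reasoning mode. As a hedged sketch (not an official client — the model tag and endpoint shape are assumptions), here is how the switch could be appended when building a chat payload for an OpenAI-compatible local server; the payload is only constructed and printed here, never sent:

```python
import json

def build_qwen3_messages(user_prompt: str, thinking: bool = True) -> list:
    """Build a single-turn message list, appending the '/no_think'
    soft switch when thinking should be disabled."""
    content = user_prompt if thinking else f"{user_prompt} /no_think"
    return [{"role": "user", "content": content}]

# Hypothetical payload for a local OpenAI-compatible endpoint.
payload = {
    "model": "qwen3",  # assumed local model name; adjust to your setup
    "messages": build_qwen3_messages("Explain KV caching briefly.", thinking=False),
}
print(json.dumps(payload, indent=2))
```

The same switch reportedly also works inside a system message, so a wrapper like this only needs to touch one string either way.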
||
Qwen3 Benchmarks
| 46 |
[Qwen3: Think Deeper, Act Faster | Qwen](https://qwenlm.github.io/blog/qwen3/)
| 2025-04-28T20:51:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka68yy/qwen3_benchmarks/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka68yy
| false | null |
t3_1ka68yy
|
/r/LocalLLaMA/comments/1ka68yy/qwen3_benchmarks/
| false | false |
self
| 46 | null |
https://qwenlm.github.io/blog/qwen3/
| 18 |
Qwen 3 blog is up
| 2025-04-28T20:52:22 |
https://www.reddit.com/gallery/1ka69xf
|
dinesh2609
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka69xf
| false | null |
t3_1ka69xf
|
/r/LocalLLaMA/comments/1ka69xf/httpsqwenlmgithubioblogqwen3/
| false | false | 18 | null |
|
Qwen3 technical report is here!
| 43 |
Today, we are excited to announce the release of **Qwen3**, the latest addition to the Qwen family of large language models. Our flagship model, **Qwen3-235B-A22B**, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. Additionally, the small MoE model, **Qwen3-30B-A3B**, outcompetes QwQ-32B despite using one-tenth the activated parameters, and even a tiny model like Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct.
Blog link: [https://qwenlm.github.io/blog/qwen3/](https://qwenlm.github.io/blog/qwen3/)
| 2025-04-28T20:52:57 |
Dr_Karminski
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6ae2
| false | null |
t3_1ka6ae2
|
/r/LocalLLaMA/comments/1ka6ae2/qwen3_technical_report_are_here/
| false | false |
default
| 43 |
{'enabled': True, 'images': [{'id': '2ej2eigc2nxe1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=108&crop=smart&auto=webp&s=f77853320ca58e34573fa098a915a5297e6990e1', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=216&crop=smart&auto=webp&s=efb907ec79f9a5f42c0a59e6924269f217bfcd92', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=320&crop=smart&auto=webp&s=ebae6f83fa83d2eb28f261e0e143b81c3a05509f', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=640&crop=smart&auto=webp&s=45d7638c4534df009db1ee1b9802c643d1d8fbc5', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=960&crop=smart&auto=webp&s=4137343fc01f3f07803c000febd6611023238216', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?width=1080&crop=smart&auto=webp&s=f6500ed046e75eb95cefefe8c836623ed3f77e18', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/2ej2eigc2nxe1.jpeg?auto=webp&s=be5a0df04e5a781ec3a5d8ae0ac29afec0b55610', 'width': 3413}, 'variants': {}}]}
|
|
Qwen 3 4B is on par with Qwen 2.5 72B instruct
| 93 |
[Source: https://qwenlm.github.io/blog/qwen3/](https://preview.redd.it/hjcy793l2nxe1.png?width=1080&format=png&auto=webp&s=e10a9c0e2e022cba6582547efb31a27017a76b17)
This is insane if true. Excited to test it out.
| 2025-04-28T20:53:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka6b02/qwen_3_4b_is_on_par_with_qwen_25_72b_instruct/
|
numinouslymusing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6b02
| false | null |
t3_1ka6b02
|
/r/LocalLLaMA/comments/1ka6b02/qwen_3_4b_is_on_par_with_qwen_25_72b_instruct/
| false | false | 93 | null |
|
Qwen 3 MoE making Llama 4 Maverick obsolete... 😱
| 408 | 2025-04-28T20:53:59 |
Cool-Chemical-5629
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6b9p
| false | null |
t3_1ka6b9p
|
/r/LocalLLaMA/comments/1ka6b9p/qwen_3_moe_making_llama_4_maverick_obsolete/
| false | false | 408 |
{'enabled': True, 'images': [{'id': 'hsZTzKMmF5EMVqNe1cF_MGa1pXCMGYRAk0JndlVJ7Jo', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=108&crop=smart&auto=webp&s=e11ca9b3f6b8944e10b558f98ecc24959c714172', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=216&crop=smart&auto=webp&s=147aa456eec2dee9c9046586b5726c3bf17fe392', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=320&crop=smart&auto=webp&s=6d0226057e783839b5e872d1082ad79a53afb452', 'width': 320}, {'height': 435, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=640&crop=smart&auto=webp&s=6dfa045be761753915c1f77e27b33367ce3b36c5', 'width': 640}, {'height': 653, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=960&crop=smart&auto=webp&s=e7f91dde3eba615dd4d65f22fcc6a9cee0c621cd', 'width': 960}, {'height': 735, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?width=1080&crop=smart&auto=webp&s=762b8982e39598e1c76eb92b105d15bcce39ca37', 'width': 1080}], 'source': {'height': 1058, 'url': 'https://preview.redd.it/szckfh6i2nxe1.jpeg?auto=webp&s=c05b88d60c1d4c0e14808f5d850b54ec9f32ce0e', 'width': 1554}, 'variants': {}}]}
|
|||
Opensource
| 1 | 2025-04-28T20:54:31 |
Linkpharm2
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6bpp
| false | null |
t3_1ka6bpp
|
/r/LocalLLaMA/comments/1ka6bpp/opensource/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'o9RfVG7KD-5eW4EbBgQoUQU0LRYERUDNJ2rL2eDyew0', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=108&crop=smart&auto=webp&s=f6260562236641f5fa36e6c4718fa1a9d3eb5951', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=216&crop=smart&auto=webp&s=f7ba6d69147705d60f668933f588a59607ea0a50', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=320&crop=smart&auto=webp&s=1d3891118416dc0e30f458a37fb2a4892d5d3678', 'width': 320}, {'height': 435, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=640&crop=smart&auto=webp&s=d6ad9d5555b0c66f620ccc02fcbc0ea90564345e', 'width': 640}, {'height': 653, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=960&crop=smart&auto=webp&s=20f7b7cc9375f713c7e632516ae487d08b3112e7', 'width': 960}, {'height': 735, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?width=1080&crop=smart&auto=webp&s=8cfe2e3c0b4d2a09ad3d165c79c355452aaf6dc4', 'width': 1080}], 'source': {'height': 1058, 'url': 'https://preview.redd.it/v4ihjzkq2nxe1.jpeg?auto=webp&s=d30fc86e93b8b911848604fdef511f412374e171', 'width': 1554}, 'variants': {}}]}
|
|||
Strix Halo LocalLLM Test (Compared to M4 Pro and Strix Point as well)
| 1 |
[removed]
| 2025-04-28T20:59:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka6g8y/strix_halo_localllm_test_compared_to_m4_pro_and/
|
Noble00_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6g8y
| false | null |
t3_1ka6g8y
|
/r/LocalLLaMA/comments/1ka6g8y/strix_halo_localllm_test_compared_to_m4_pro_and/
| false | false | 1 | null |
|
Qwen3 is live on chat.qwen.ai
| 21 |
They seem to have added 235B MoE and 32B dense in the model list
https://chat.qwen.ai/
| 2025-04-28T21:05:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka6kui/qwen3_is_live_on_chatqwenai/
|
ahmetegesel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6kui
| false | null |
t3_1ka6kui
|
/r/LocalLLaMA/comments/1ka6kui/qwen3_is_live_on_chatqwenai/
| false | false |
self
| 21 | null |
Qwen 3 !!!
| 1,768 |
Introducing Qwen3!
We release and open-weight Qwen3, our latest large language models, including 2 MoE models and 6 dense models, ranging from 0.6B to 235B. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. Additionally, the small MoE model, Qwen3-30B-A3B, outcompetes QwQ-32B despite using one-tenth the activated parameters, and even a tiny model like Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct.
For more information, feel free to try them out in Qwen Chat Web (chat.qwen.ai) and APP and visit our GitHub, HF, ModelScope, etc.
| 2025-04-28T21:07:01 |
https://www.reddit.com/gallery/1ka6mic
|
ResearchCrafty1804
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6mic
| false | null |
t3_1ka6mic
|
/r/LocalLLaMA/comments/1ka6mic/qwen_3/
| false | false |
default
| 1,768 | null |
Qwen3 weights released
| 27 |
Qwen3 weights released
https://preview.redd.it/6ife2le15nxe1.png?width=1122&format=png&auto=webp&s=164641e993d1f235efb48fbcfac34fbf99a08a8d
| 2025-04-28T21:07:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka6n0t/qwen3_weights_released/
|
Acrobatic_Donkey5089
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6n0t
| false | null |
t3_1ka6n0t
|
/r/LocalLLaMA/comments/1ka6n0t/qwen3_weights_released/
| false | false | 27 | null |
|
Qwen3-235B-A22B has been released
| 28 | 2025-04-28T21:11:15 |
https://huggingface.co/Qwen/Qwen3-235B-A22B
|
paf1138
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6q3u
| false | null |
t3_1ka6q3u
|
/r/LocalLLaMA/comments/1ka6q3u/qwen3235ba22b_has_been_released/
| false | false | 28 |
{'enabled': False, 'images': [{'id': '2CPXSIzkp22xYPsTVpgsp4OcDlEliyzHHGoKpPeBFBs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=108&crop=smart&auto=webp&s=89957ddc3e0ceb4136c276d3d85968c69109c147', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=216&crop=smart&auto=webp&s=e4a44b08f3f4a0dced4c58658a931f1249f694f2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=320&crop=smart&auto=webp&s=a3de8cd502780f769e1d3da532275ec4fd60c53f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=640&crop=smart&auto=webp&s=4f7cd146bad9c79b3890a9abf11391b109bd9776', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=960&crop=smart&auto=webp&s=602260ad00758e767180485deb7d9ee48f77343e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?width=1080&crop=smart&auto=webp&s=de3b7dea2a2861846122e38b8c470914a9487010', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9XgXXrBl6uCwW0eE-n9K4sfzB5YuRIuJJ-W66FyTl_w.jpg?auto=webp&s=cc4bd6610df3c9a4f902269a1812e41e7888b950', 'width': 1200}, 'variants': {}}]}
|
||
Qwen3 - a unsloth Collection
| 103 |
Unsloth GGUFs for Qwen 3 models are up!
| 2025-04-28T21:12:32 |
https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95
|
FullstackSensei
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6r7a
| false | null |
t3_1ka6r7a
|
/r/LocalLLaMA/comments/1ka6r7a/qwen3_a_unsloth_collection/
| false | false | 103 |
{'enabled': False, 'images': [{'id': '2lz_qGGjfD-nlwelnEoB1bBHEPimnjl1z0Xos5GcDfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=108&crop=smart&auto=webp&s=af993fcf24cb0799edc12ca383281d92a0651071', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=216&crop=smart&auto=webp&s=c493b37cee91798b159cabb5318295c4eb3c5d1a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=320&crop=smart&auto=webp&s=03e9e2107ddf9c58a39f327c1e5c9cb000f0c499', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=640&crop=smart&auto=webp&s=2b478b284df11d21c9e69bb850d00bf3cf95d9d4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=960&crop=smart&auto=webp&s=6d3d72dd5dc71d501713b1b6e4bc9eca49a1761f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?width=1080&crop=smart&auto=webp&s=7a08b1463aed51272d9df658866073e919caa047', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3P4HwI5x2HyPU9Mu2Vb_-7vgmEa7LlQZpRYlpMm54cw.jpg?auto=webp&s=73125b6a6bb70fe92aac98cca7351bf850777bf1', 'width': 1200}, 'variants': {}}]}
|
|
ollama run qwen3
| 8 |
ollama is up as well [https://ollama.com/library/qwen3](https://ollama.com/library/qwen3)
| 2025-04-28T21:17:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka6vmm/ollama_run_qwen3/
|
Predatedtomcat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka6vmm
| false | null |
t3_1ka6vmm
|
/r/LocalLLaMA/comments/1ka6vmm/ollama_run_qwen3/
| false | false |
self
| 8 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
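For anyone scripting against the local Ollama server rather than the `ollama run qwen3` CLI mentioned above, here is a hedged sketch of the request body for Ollama's `/api/generate` endpoint. The body is only built and printed, not sent, and the `qwen3` tag is assumed to match whatever size you pulled:

```python
import json

# Body for POST http://localhost:11434/api/generate (Ollama's generate endpoint).
request_body = {
    "model": "qwen3",          # tag from `ollama pull qwen3`; adjust to your pulled variant
    "prompt": "Why is the sky blue?",
    "stream": False,           # ask for one JSON response instead of a token stream
}
print(json.dumps(request_body))
```

Sending this with any HTTP client against a running Ollama instance should return a single JSON object containing the model's response.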
I benchmarked engagement statistics with Qwen 3 and was not disappointed
| 47 | 2025-04-28T21:31:28 |
atineiatte
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka77cb
| false | null |
t3_1ka77cb
|
/r/LocalLLaMA/comments/1ka77cb/i_benchmarked_engagement_statistics_with_qwen_3/
| false | false | 47 |
{'enabled': True, 'images': [{'id': 'jWh1f8RxLS-lc_NvN6EQuoYH2OmWk-IsMK_zfU84JPE', 'resolutions': [{'height': 201, 'url': 'https://external-preview.redd.it/OXAkotcL8N1TsPd1rmumn8Ci7jgYfaegC6hPZ4rIiAc.png?width=108&crop=smart&auto=webp&s=90ec7dd324a51f4b25138888fd9822d9d55d59a1', 'width': 108}, {'height': 403, 'url': 'https://external-preview.redd.it/OXAkotcL8N1TsPd1rmumn8Ci7jgYfaegC6hPZ4rIiAc.png?width=216&crop=smart&auto=webp&s=ae52b10db1e61b8f0c6bacf3f257e0d057ce4d96', 'width': 216}, {'height': 597, 'url': 'https://external-preview.redd.it/OXAkotcL8N1TsPd1rmumn8Ci7jgYfaegC6hPZ4rIiAc.png?width=320&crop=smart&auto=webp&s=15c5dece9d5cddd5788899922fa4f15d5e8a16c9', 'width': 320}, {'height': 1195, 'url': 'https://external-preview.redd.it/OXAkotcL8N1TsPd1rmumn8Ci7jgYfaegC6hPZ4rIiAc.png?width=640&crop=smart&auto=webp&s=7a376c2c1de26c1478ac4b533347ab3f5bf7b7a2', 'width': 640}], 'source': {'height': 1422, 'url': 'https://external-preview.redd.it/OXAkotcL8N1TsPd1rmumn8Ci7jgYfaegC6hPZ4rIiAc.png?auto=webp&s=d8bdbc746b193848c6fd9cf88d12d6be2d781949', 'width': 761}, 'variants': {}}]}
|
|||
Qwen3-14b-Q8 GGUF Available
| 8 |
I had it generated on HF with ggml-org/gguf-my-repo, and it can be found here:
[OMP123/Qwen3-14B-Q8\_0-GGUF · Hugging Face](https://huggingface.co/OMP123/Qwen3-14B-Q8_0-GGUF)
Enjoy!
| 2025-04-28T21:33:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka78y8/qwen314bq8_gguf_available/
|
Renegad_Hipster
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka78y8
| false | null |
t3_1ka78y8
|
/r/LocalLLaMA/comments/1ka78y8/qwen314bq8_gguf_available/
| false | false |
self
| 8 | null |
Damn qwen cooked it
| 60 | 2025-04-28T21:36:46 |
Independent-Wind4462
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka7bv4
| false | null |
t3_1ka7bv4
|
/r/LocalLLaMA/comments/1ka7bv4/damn_qwen_cooked_it/
| false | false | 60 |
{'enabled': True, 'images': [{'id': 'eIw676X5z4dzsTelZqakqvfNsUDEbNb4OgJ5WQRL3z4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=108&crop=smart&auto=webp&s=c383636033e6fc7ff3c1203e62fff953bab710d5', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=216&crop=smart&auto=webp&s=d6a5933c235f312119f12fa51e4edef08b48cc76', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=320&crop=smart&auto=webp&s=7218fda25c7d442a91f51d9cb346ed85a238c153', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=640&crop=smart&auto=webp&s=1790672bb3ea1403e9a7b9ec02f3b843d3e618bc', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=960&crop=smart&auto=webp&s=a6f4a2e7e4cbf678d9319dfc07d7e2c7195bde8f', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?width=1080&crop=smart&auto=webp&s=1b921ce630d4fa9bafbeb5e7ad43a00a2843b53a', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/rrw7nwdbanxe1.jpeg?auto=webp&s=6a87af7b15082c64177e54842c77e5701d70c833', 'width': 3413}, 'variants': {}}]}
|
|||
Qwen3 is finally out
| 33 |
https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f
| 2025-04-28T21:39:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka7dxq/qwen3_is_finally_out/
|
SaynedBread
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka7dxq
| false | null |
t3_1ka7dxq
|
/r/LocalLLaMA/comments/1ka7dxq/qwen3_is_finally_out/
| false | false |
self
| 33 |
{'enabled': False, 'images': [{'id': 'p9slym96XJRWmY2VToV2jeDgIfabgqAU84ODfNODQjc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=108&crop=smart&auto=webp&s=d73092a54cf06b1f6d038d648cb879e836a3add9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=216&crop=smart&auto=webp&s=5116f2f2ad9e803b19c39adb721805b751a4a3ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=320&crop=smart&auto=webp&s=ba372af0279a0303f3ffca4831c185cad033e443', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=640&crop=smart&auto=webp&s=2ec0104497cd658c6e8f53c36520d3da04cf7ffd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=960&crop=smart&auto=webp&s=aa874741d27187591828a8eb5f6eb18a80c91db7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?width=1080&crop=smart&auto=webp&s=8e0c4c4202a192d1e8eb6506f601862e4ba9a336', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ciGZQXAmZI-n2jL2SRH2Wnlieo-l7MaimpZUuGU0of4.jpg?auto=webp&s=73a66bbc2ce7c99d947bfad7070b134bfbbf6973', 'width': 1200}, 'variants': {}}]}
|
No benchmarks or details on the performance of 0.6B qwen?🧐
| 7 |
In case I missed it, can someone please link to any details on that model?
Also, any opinions on it are appreciated.
| 2025-04-28T21:43:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1ka7h9k/no_benchmarks_or_details_on_the_performance_of/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ka7h9k
| false | null |
t3_1ka7h9k
|
/r/LocalLLaMA/comments/1ka7h9k/no_benchmarks_or_details_on_the_performance_of/
| false | false |
self
| 7 | null |