Dataset schema (one record per Reddit post):
- title: string (length 1–300)
- score: int64 (0–8.54k)
- selftext: string (length 0–40k)
- created: timestamp[ns] (2023-04-01 04:30:41 – 2025-06-30 03:16:29; some null)
- url: string (length 0–878)
- author: string (length 3–20)
- domain: string (length 0–82)
- edited: timestamp[ns] (1970-01-01 00:00:00 – 2025-06-26 17:30:18)
- gilded: int64 (0–2)
- gildings: string (7 distinct values)
- id: string (length 7)
- locked: bool (2 classes)
- media: string (length 646–1.8k; some null)
- name: string (length 10)
- permalink: string (length 33–82)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (length 4–213)
- ups: int64 (0–8.54k)
- preview: string (length 301–5.01k; some null)
Coming Soon: VLLM-Swap (Host multiple models through one OpenAI endpoint!)
| 1 |
[removed]
| 2025-06-12T05:31:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9eino/coming_soon_vllmswap_host_multiple_models_through/
|
maxwell321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9eino
| false | null |
t3_1l9eino
|
/r/LocalLLaMA/comments/1l9eino/coming_soon_vllmswap_host_multiple_models_through/
| false | false | 1 | null |
|
Can I run DeepSeek R1 and DeepSeek V3 simultaneously on the same server?
| 1 |
[removed]
| 2025-06-12T05:39:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9enik/can_i_run_deepseek_r1_and_deepseek_v3/
|
cdani2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9enik
| false | null |
t3_1l9enik
|
/r/LocalLLaMA/comments/1l9enik/can_i_run_deepseek_r1_and_deepseek_v3/
| false | false |
self
| 1 | null |
OpenAI delays their open source model claiming to add "something amazing" to it
| 390 | 2025-06-12T06:26:23 |
https://techcrunch.com/2025/06/10/openais-open-model-is-delayed
|
umarmnaq
|
techcrunch.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9fec7
| false | null |
t3_1l9fec7
|
/r/LocalLLaMA/comments/1l9fec7/openai_delays_their_open_source_model_claiming_to/
| false | false | 390 |
{'enabled': False, 'images': [{'id': 'jMPcLFMxZ5DT_xqC0adhGOivsZE10inqbTpX_tOyzrU', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=108&crop=smart&auto=webp&s=77583f6f2972372f4a626c2024cb822dd5c846d9', 'width': 108}, {'height': 165, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=216&crop=smart&auto=webp&s=67e257e9a0b20007df56d95c6d4726fbe6af4f3f', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=320&crop=smart&auto=webp&s=c0f40cf5b3a61edb05f3fc34a32c2c3c408b3cb1', 'width': 320}, {'height': 489, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=640&crop=smart&auto=webp&s=35f17ffcf6a5b86edfcd5643785889b2dd77fb23', 'width': 640}, {'height': 733, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=960&crop=smart&auto=webp&s=4d2eb29345390f0d486f4606c681d81abf8b6837', 'width': 960}, {'height': 825, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?width=1080&crop=smart&auto=webp&s=2e64f5e112b998deb08d5786a0bb146ee3d40861', 'width': 1080}], 'source': {'height': 917, 'url': 'https://external-preview.redd.it/R_3_vozrpyNLk-RuPK719qfww-pDxCPb4GbyGYYEwIQ.jpg?auto=webp&s=9c52fe2024731378319aaffdce359f835eaf27a6', 'width': 1200}, 'variants': {}}]}
|
||
RAG for code: best current solutions?
| 16 |
Hi. Given a code repository, I want to generate embeddings I can use for RAG. What are the best solutions for this nowadays? I'd consider both open-source options I can run locally (if the accuracy is good) and APIs if the costs are reasonable.
Any help would be appreciated. I am very new to all of this and not sure where to look for resources.
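One piece of the answer that can be sketched without committing to any particular embedding model: code RAG usually works better when the repository is chunked along syntactic boundaries (one function or class per chunk) rather than fixed-size windows. A minimal stdlib-only sketch of that chunking step, assuming Python sources; the example file name and source are illustrative, and the resulting chunks would be passed to whatever embedding model or API you pick:

```python
# Hedged sketch: function-level chunking of Python source for RAG embedding.
# Splitting on AST nodes keeps each chunk a coherent unit of code; the
# chunks are what you would then embed. Paths/source here are made up.
import ast

def chunk_python_source(source: str, path: str = "<memory>"):
    """Yield (identifier, code) pairs, one per top-level function/class."""
    tree = ast.parse(source)
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            yield f"{path}::{node.name}", ast.get_source_segment(source, node)

src = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"
'''
chunks = list(chunk_python_source(src, "example.py"))
```

Each identifier (`example.py::add`) doubles as the retrieval key, so a matched chunk can be cited back to its file and symbol.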
| 2025-06-12T06:37:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9fki4/rag_for_code_best_current_solutions/
|
vlatkosh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9fki4
| false | null |
t3_1l9fki4
|
/r/LocalLLaMA/comments/1l9fki4/rag_for_code_best_current_solutions/
| false | false |
self
| 16 | null |
Rope and temp scaling along the current used context size?
| 1 |
[removed]
| 2025-06-12T06:56:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9fuhs/rope_and_temp_scaling_along_the_current_used/
|
SiEgE-F1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9fuhs
| false | null |
t3_1l9fuhs
|
/r/LocalLLaMA/comments/1l9fuhs/rope_and_temp_scaling_along_the_current_used/
| false | false |
self
| 1 | null |
Real-time voicechat bot on Discord Channels
| 1 |
[removed]
| 2025-06-12T07:20:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9g7vc/realtime_voicechat_bot_on_discord_channels/
|
Dry-Entrepreneur179
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9g7vc
| false | null |
t3_1l9g7vc
|
/r/LocalLLaMA/comments/1l9g7vc/realtime_voicechat_bot_on_discord_channels/
| false | false |
self
| 1 | null |
Parameter-free loading UI for LLAMA.CPP models for novice users
| 1 |
[removed]
| 2025-06-12T07:24:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9g9yq/parameterfree_loading_ui_for_llamacpp_models_for/
|
Big-Employer9324
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9g9yq
| false | null |
t3_1l9g9yq
|
/r/LocalLLaMA/comments/1l9g9yq/parameterfree_loading_ui_for_llamacpp_models_for/
| false | false | 1 | null |
|
What do i need to run a big deepseek r1 model locally on gpus?
| 1 |
[removed]
| 2025-06-12T07:28:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9gbyc/what_do_i_need_to_run_a_big_deepseek_r1_model/
|
Drasek666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9gbyc
| false | null |
t3_1l9gbyc
|
/r/LocalLLaMA/comments/1l9gbyc/what_do_i_need_to_run_a_big_deepseek_r1_model/
| false | false |
self
| 1 | null |
Are we going the wrong way with LLM development?
| 1 |
[removed]
| 2025-06-12T07:34:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9gfe4/are_we_going_the_wrong_way_with_llm_development/
|
Acrobatic_Plate9537
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9gfe4
| false | null |
t3_1l9gfe4
|
/r/LocalLLaMA/comments/1l9gfe4/are_we_going_the_wrong_way_with_llm_development/
| false | false |
self
| 1 | null |
What happened to Yi?
| 113 |
[Yi](https://huggingface.co/01-ai) had some of the best local models in the past, but this year there haven't been any news about them. Does anyone know what happened?
| 2025-06-12T07:38:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9ghjc/what_happened_to_yi/
|
undefdev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ghjc
| false | null |
t3_1l9ghjc
|
/r/LocalLLaMA/comments/1l9ghjc/what_happened_to_yi/
| false | false |
self
| 113 |
{'enabled': False, 'images': [{'id': 'hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=108&crop=smart&auto=webp&s=09976e12870c91a13b5ab4ea6f395f7f8a573b8b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=216&crop=smart&auto=webp&s=f5d894a805070f929845d4271cd3f6ade996acbc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=320&crop=smart&auto=webp&s=34a3083387b9fcb73c1a9b7eaafe1e558ef45a45', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=640&crop=smart&auto=webp&s=3910a1a1c3bdd40841214cda661c98a8bb59681a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=960&crop=smart&auto=webp&s=50c6302720c079676447a25a4569c3f44eb2368f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?width=1080&crop=smart&auto=webp&s=ff59c1d1658aafa82d2114b52282c84b287fc542', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hK1D7x-srfRjWqUX3jBnBT5mOIRIBBx74XIU3cvjaCI.png?auto=webp&s=66f4c527046d3d203cd866bf4a5ef48db0346d6b', 'width': 1200}, 'variants': {}}]}
|
Are we going the wrong way with LLM development?
| 1 |
[removed]
| 2025-06-12T07:39:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9ghox/are_we_going_the_wrong_way_with_llm_development/
|
Acrobatic_Plate9537
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ghox
| false | null |
t3_1l9ghox
|
/r/LocalLLaMA/comments/1l9ghox/are_we_going_the_wrong_way_with_llm_development/
| false | false |
self
| 1 | null |
Are we going the wrong way with LLM development?
| 1 |
[removed]
| 2025-06-12T07:40:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9gidm/are_we_going_the_wrong_way_with_llm_development/
|
Acrobatic_Plate9537
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9gidm
| false | null |
t3_1l9gidm
|
/r/LocalLLaMA/comments/1l9gidm/are_we_going_the_wrong_way_with_llm_development/
| false | false | 1 | null |
|
Are we going the wrong way with LLM development?
| 1 |
[removed]
| 2025-06-12T07:47:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9gm8b/are_we_going_the_wrong_way_with_llm_development/
|
Acrobatic_Plate9537
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9gm8b
| false | null |
t3_1l9gm8b
|
/r/LocalLLaMA/comments/1l9gm8b/are_we_going_the_wrong_way_with_llm_development/
| false | false |
self
| 1 | null |
Guide: Install llama.cpp with rocm support on opensuse tumbleweed
| 1 |
[removed]
| 2025-06-12T09:13:00 |
https://dev.to/rohan-sircar/unlocking-the-power-of-llms-on-opensuse-with-amd-a-step-by-step-guide-to-installing-rocm-and-1doe
|
rohan-sircar
|
dev.to
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9huh8
| false | null |
t3_1l9huh8
|
/r/LocalLLaMA/comments/1l9huh8/guide_install_llamacpp_with_rocm_support_on/
| false | false |
default
| 1 | null |
Google and Microsoft vs OpenAI and Anthropic, a fun visualization of their open releases on Hugging Face in the past year (Julien Chaumond on LinkedIn)
| 560 | 2025-06-12T09:21:44 |
Nunki08
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9hzb5
| false | null |
t3_1l9hzb5
|
/r/LocalLLaMA/comments/1l9hzb5/google_and_microsoft_vs_openai_and_anthropic_a/
| false | false | 560 |
{'enabled': True, 'images': [{'id': '-x-YTBzQZWflgljvfYv30yAbEg1nc9bfcBK-y3rEmSQ', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=108&crop=smart&auto=webp&s=650928a703743319320792a48a96201cbfdd01fe', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=216&crop=smart&auto=webp&s=c1926637bd01b2a03e95d6bb684e4b0af9ed08d2', 'width': 216}, {'height': 159, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=320&crop=smart&auto=webp&s=1a9aec844751585bb45d6056396be6a32cb63889', 'width': 320}, {'height': 319, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=640&crop=smart&auto=webp&s=8cc62e350d3b3d31d64be11a8e4372ca1fc5f0e7', 'width': 640}, {'height': 479, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=960&crop=smart&auto=webp&s=880b6e3ac886301260f715bbee745a462af985bb', 'width': 960}, {'height': 539, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?width=1080&crop=smart&auto=webp&s=10fb7ce809afae243745372a3dc4a18ed91c8201', 'width': 1080}], 'source': {'height': 959, 'url': 'https://preview.redd.it/2vdfa3f5sg6f1.jpeg?auto=webp&s=32b63164467343a97ec62da191548aa0cd83011f', 'width': 1919}, 'variants': {}}]}
|
|||
How far are we from running a Veo 3-class model locally?
| 1 |
[removed]
| 2025-06-12T10:16:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9iujw/how_far_are_wefrom_running_a_veo_3_range_model_on/
|
maneesh_sandra
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9iujw
| false | null |
t3_1l9iujw
|
/r/LocalLLaMA/comments/1l9iujw/how_far_are_wefrom_running_a_veo_3_range_model_on/
| false | false |
self
| 1 | null |
A new swarm-style distributed pretraining architecture has just launched, working on a 15B model
| 49 |
Macrocosmos has released IOTA, a collaborative distributed pretraining network. Participants contribute compute to collectively pretrain a 15B model. It’s a model- and data-parallel setup, meaning people can work on disjoint parts of the model at the same time.
It’s also been designed with a lower barrier to entry, as nobody needs to have a full local copy of the model saved, making it more cost effective to people with smaller setups. The goal is to see if people can pretrain a model in a decentralized setting, producing SOTA-level benchmarks. It’s a practical investigation into how decentralized and open-source methods can rival centralized LLMs, either now or in the future.
It’s early days (the project came out about 10 days ago) but they’ve already got a decent number of participants. Plus, there’s been a nice drop in loss recently.
They’ve got a [real-time 3D dashboard of the model](https://iota.macrocosmos.ai/dashboard), showing active participants.
They also published their [technical paper about the architecture](https://iota.macrocosmos.ai/research/research.pdf).
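The "disjoint parts, no full local copy" idea above can be illustrated with a toy partitioning step: each participant gets a contiguous slice of the layer stack, so no single peer holds the whole model. This is only a conceptual sketch of model-parallel assignment, not IOTA's actual orchestration logic; peer names and the layer count are made up:

```python
# Hedged illustration of model-parallel layer assignment in a swarm:
# split layer indices into near-equal contiguous slices, one per peer.
def assign_layer_slices(n_layers, participants):
    """Return {peer: range_of_layer_indices}, contiguous and disjoint."""
    base, extra = divmod(n_layers, len(participants))
    out, start = {}, 0
    for i, peer in enumerate(participants):
        size = base + (1 if i < extra else 0)  # spread any remainder
        out[peer] = range(start, start + size)
        start += size
    return out

slices = assign_layer_slices(48, ["peer-a", "peer-b", "peer-c"])
```

With 48 layers and 3 peers, each peer stores and trains only its 16-layer slice, which is why the barrier to entry is lower for small setups.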
| 2025-06-12T11:02:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9jm52/a_new_swarmstyle_distributed_pretraining/
|
emission-control
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9jm52
| false | null |
t3_1l9jm52
|
/r/LocalLLaMA/comments/1l9jm52/a_new_swarmstyle_distributed_pretraining/
| false | false |
self
| 49 | null |
Evaluate and monitor your Hybrid Search RAG | LangGraph, Qdrant miniCOIL, Opik, and DeepSeek-R1
| 1 |
[removed]
| 2025-06-12T11:09:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9jq07/evaluate_and_monitor_your_hybrid_search_rag/
|
External_Ad_11
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9jq07
| false | null |
t3_1l9jq07
|
/r/LocalLLaMA/comments/1l9jq07/evaluate_and_monitor_your_hybrid_search_rag/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8eW8y8XLusqJ8pw7erbgy1SslG9WzT_AqCjymDnthgM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=108&crop=smart&auto=webp&s=c8ef9434e35d345e59f17e841f10433604c86882', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=216&crop=smart&auto=webp&s=aabd3b9b4ef857eb1b5d7c8c45d0134e13a87da9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=320&crop=smart&auto=webp&s=b54b2589103b8a2a5ca100ca878820df97d3764e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=640&crop=smart&auto=webp&s=9f9093a4d2c2a0e4a12c6ec57dfc5db31e8e28a2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=960&crop=smart&auto=webp&s=a68144807e052c1d9799197d4fcc8a22b3534826', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?width=1080&crop=smart&auto=webp&s=b97322ff0b4faa27aafa5f55dd1d0ada1e718925', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/VvA0IQ70_iSMgT9TOk4U3i9Qxi6UD20KAAX2xaB6VoI.jpg?auto=webp&s=9d4d8bef6f3d5a5f781169883b9b12a272992b24', 'width': 1200}, 'variants': {}}]}
|
Dive into Minara's insights - Help me write MCP client code in TypeScript (用typescript帮我写一下mcp client的代码)
| 1 | 2025-06-12T11:13:16 |
https://xneuro-app.dev.nftgo.dev/share/chat/684ab5e28441bfd401f1e964
|
LowEntrepreneur7276
|
xneuro-app.dev.nftgo.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9jsgx
| false | null |
t3_1l9jsgx
|
/r/LocalLLaMA/comments/1l9jsgx/dive_into_minaras_insights_用typescript帮我写一下mcp/
| false | false |
default
| 1 | null |
|
Dive into Minara's insights - Help me write MCP client code in TypeScript (用typescript帮我写一下mcp client的代码)
| 1 | 2025-06-12T11:18:46 |
https://xneuro-app.dev.nftgo.dev/share/chat/684ab7858441bfd401f1e965
|
LowEntrepreneur7276
|
xneuro-app.dev.nftgo.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9jvxr
| false | null |
t3_1l9jvxr
|
/r/LocalLLaMA/comments/1l9jvxr/dive_into_minaras_insights_用typescript帮我写一下mcp/
| false | false |
default
| 1 | null |
|
does llama.cpp have parallel requests
| 1 |
[removed]
| 2025-06-12T11:26:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9k0zw/does_llamacpp_have_parallel_requests/
|
rithwik3112
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9k0zw
| false | null |
t3_1l9k0zw
|
/r/LocalLLaMA/comments/1l9k0zw/does_llamacpp_have_parallel_requests/
| false | false |
self
| 1 | null |
does llama.cpp have parallel requests
| 1 |
[removed]
| 2025-06-12T11:29:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9k2xv/does_llamacpp_have_parallel_requests/
|
rithwik3112
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9k2xv
| false | null |
t3_1l9k2xv
|
/r/LocalLLaMA/comments/1l9k2xv/does_llamacpp_have_parallel_requests/
| false | false |
self
| 1 | null |
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:34:45 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9k664
| false | null |
t3_1l9k664
|
/r/LocalLLaMA/comments/1l9k664/swerebench_major_update_tool_usage_claude_sonnet/
| false | false |
default
| 1 | null |
||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:38:18 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9k8hu
| false | null |
t3_1l9k8hu
|
/r/LocalLLaMA/comments/1l9k8hu/swerebench_major_update_tool_usage_claude_sonnet/
| false | false |
default
| 1 | null |
||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:43:57 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kc7u
| false | null |
t3_1l9kc7u
|
/r/LocalLLaMA/comments/1l9kc7u/swerebench_major_update_tool_usage_claude_sonnet/
| false | false |
default
| 1 | null |
||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:51:03 |
Fabulous_Pollution10
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kgth
| false | null |
t3_1l9kgth
|
/r/LocalLLaMA/comments/1l9kgth/swerebench_major_update_tool_usage_claude_sonnet/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'cjGIKk2H4FF_dwiLw0oxDvn6vT8MW4ZVAHsk8RLJJdw', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=108&crop=smart&auto=webp&s=a20a33c3ae67b993ea4a5420a63d28ccbd12772c', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=216&crop=smart&auto=webp&s=a558eb9ab760c39b5852d8943fc3365b319780c4', 'width': 216}, {'height': 220, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=320&crop=smart&auto=webp&s=c575cafcdaf9e47785bc18b5fa42238185080001', 'width': 320}, {'height': 441, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=640&crop=smart&auto=webp&s=5e41a52f4f3ab9f7724920f442267386426a675f', 'width': 640}, {'height': 662, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=960&crop=smart&auto=webp&s=850395cb641e810179056597a4dbc61dd806eda5', 'width': 960}, {'height': 745, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?width=1080&crop=smart&auto=webp&s=e195ceb9f03c006dcec7b13898cd003027592126', 'width': 1080}], 'source': {'height': 1390, 'url': 'https://preview.redd.it/3fn3bkufih6f1.png?auto=webp&s=0d50ea5a19d2751c34130f36e9db7dec423ee88a', 'width': 2014}, 'variants': {}}]}
|
||
We updated our benchmark for SWE agents
| 1 |
[removed]
| 2025-06-12T11:54:59 |
Fabulous_Pollution10
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kjb5
| false | null |
t3_1l9kjb5
|
/r/LocalLLaMA/comments/1l9kjb5/we_updated_our_benchmark_for_swe_agents/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'XtNZDYvO_YecbmR1BXXdymNNz2lRGihq_JzW2Sl-6W4', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=108&crop=smart&auto=webp&s=8dd406f9b1d7e7da3f429ee56817fbd7fa9cd0cf', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=216&crop=smart&auto=webp&s=38096ab5ee97a42c943926723873efc73a1a7446', 'width': 216}, {'height': 222, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=320&crop=smart&auto=webp&s=f6f5014ea2068e52036c8d35703464278986a6fe', 'width': 320}, {'height': 444, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=640&crop=smart&auto=webp&s=ceec74e8cd9fb13aa64b5609315d714501ff4650', 'width': 640}, {'height': 666, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=960&crop=smart&auto=webp&s=6bc526a97eba49ff8a0aa01a682301ad37ad3f89', 'width': 960}, {'height': 749, 'url': 'https://preview.redd.it/9z130azajh6f1.png?width=1080&crop=smart&auto=webp&s=58487b7048150a61aa7091c258f1b37f544e5268', 'width': 1080}], 'source': {'height': 1392, 'url': 'https://preview.redd.it/9z130azajh6f1.png?auto=webp&s=0c7b37cc91914a0eb636f5fc7e0aeb332c5737ac', 'width': 2006}, 'variants': {}}]}
|
||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:56:18 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kk6p
| false | null |
t3_1l9kk6p
|
/r/LocalLLaMA/comments/1l9kk6p/swerebench_major_update_tool_usage_claude_sonnet/
| false | false |
default
| 1 | null |
||
A major update for [SWE-rebench]
| 1 |
[removed]
| 2025-06-12T11:57:08 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kkqg
| false | null |
t3_1l9kkqg
|
/r/LocalLLaMA/comments/1l9kkqg/a_major_update_for_swerebench/
| false | false |
default
| 1 | null |
||
SWE-rebench Major Update: Tool Usage, Claude Sonnet 3.5/4, OpenAI o3 and May Data
| 1 |
[removed]
| 2025-06-12T11:57:47 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kl6c
| false | null |
t3_1l9kl6c
|
/r/LocalLLaMA/comments/1l9kl6c/swerebench_major_update_tool_usage_claude_sonnet/
| false | false |
default
| 1 | null |
||
New SWE-LLMs results
| 1 |
[deleted]
| 2025-06-12T12:12:10 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9kvgx
| false | null |
t3_1l9kvgx
|
/r/LocalLLaMA/comments/1l9kvgx/new_swellms_results/
| false | false |
default
| 1 | null |
||
Youtube transcript summarizer ?
| 1 |
[removed]
| 2025-06-12T12:22:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9l310/youtube_transcript_summarizer/
|
_throawayplop_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9l310
| false | null |
t3_1l9l310
|
/r/LocalLLaMA/comments/1l9l310/youtube_transcript_summarizer/
| false | false |
self
| 1 | null |
Petition: Ban 'announcement of announcement' posts
| 803 |
There's no reason to have five posts a week about OpenAI announcing that they will release a model, then delaying the release date, then announcing it's gonna be *amazing***™**, then announcing they will announce a new update in a month, ad infinitum. Fuck those grifters.
| 2025-06-12T12:36:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9lddr/petition_ban_announcement_of_announcement_posts/
|
RangaRea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9lddr
| false | null |
t3_1l9lddr
|
/r/LocalLLaMA/comments/1l9lddr/petition_ban_announcement_of_announcement_posts/
| false | false |
self
| 803 | null |
How to Use Intel AI Playground Effectively and Run LLMs Locally (Even Offline)
| 0 | 2025-06-12T12:39:03 |
https://www.digit.in/features/laptops/how-to-use-intel-ai-playground-effectively-and-run-llms-locally-even-offline.html
|
reps_up
|
digit.in
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9lf3t
| false | null |
t3_1l9lf3t
|
/r/LocalLLaMA/comments/1l9lf3t/how_to_use_intel_ai_playground_effectively_and/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'ogL23zxjMMUwgdU0Uv-HEKuk_9SWwWKc6pbNxlVSVR0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=108&crop=smart&auto=webp&s=6f9a3a9b6b6939543251f4e0a3b10b9a71aa87eb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=216&crop=smart&auto=webp&s=95fc308a3a50bf7d8819fbd88379569f12dcd56a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=320&crop=smart&auto=webp&s=99a0d85a4209de229066ca969d38a161882e0296', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=640&crop=smart&auto=webp&s=29815821b73bf192e3f9a2f598893773d30bc428', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=960&crop=smart&auto=webp&s=8de6d3e897e8e2d53cffe6a1eefd172520936300', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?width=1080&crop=smart&auto=webp&s=03b40c9c93ef6476f4fd8bad7656cfd221379620', 'width': 1080}], 'source': {'height': 844, 'url': 'https://external-preview.redd.it/6uoN3INtNDJi2_rugNt6qBdUd0zUpuy9DiQM9z8qpDU.jpg?auto=webp&s=e857f82f259d4ddc68d3df91714836daf8ee7158', 'width': 1500}, 'variants': {}}]}
|
||
Spy search: Open source that faster than perplexity
| 12 |
I am really happy!!! My open source project is somehow faster than Perplexity, yeahhhh, so happy. Really, really happy and I want to share it with you guys!! ( :( someone said it's copy-paste, but they just never used Mistral + a 5090 :)))) and of course they didn't even look at my open source hahahah )
https://reddit.com/link/1l9m32y/video/bf99fvbmwh6f1/player
url: [https://github.com/JasonHonKL/spy-search](https://github.com/JasonHonKL/spy-search)
| 2025-06-12T13:10:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9m32y/spy_search_open_source_that_faster_than_perplexity/
|
jasonhon2013
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9m32y
| false | null |
t3_1l9m32y
|
/r/LocalLLaMA/comments/1l9m32y/spy_search_open_source_that_faster_than_perplexity/
| false | false | 12 |
{'enabled': False, 'images': [{'id': 'mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=108&crop=smart&auto=webp&s=6fe4b13253cbd4c0952b3138599399866fbd3245', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=216&crop=smart&auto=webp&s=f58290570311b191b7e247dec680a2bc64d661d9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=320&crop=smart&auto=webp&s=589f4bdfc9e3cb82f81c80b52a5d2d57fbdfe3ca', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=640&crop=smart&auto=webp&s=0b8fbdb36c3e6484af6fc5fab2652c43cce55f42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=960&crop=smart&auto=webp&s=85cee70b219a69679659834d4d3dbc7ca6595496', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?width=1080&crop=smart&auto=webp&s=4d7cfb7563a38fdf56c205d68986bc375adda7d1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mF7cGFuCHvTPCF2PosrefDlSXoaQ7svn_kltQIq6Dac.png?auto=webp&s=8d48dfb03e85db34a4757e3b21966c9957f950b5', 'width': 1200}, 'variants': {}}]}
|
|
[Update] Emotionally-Aware VN Dialogue Dataset – Deep Context Tagging, ShareGPT-Style Structure
| 27 |
Hey again everyone,
Following up on my earlier posts about converting a visual novel script into a fine-tuning dataset, I’ve gone back and improved the format significantly thanks to feedback here.
The goal is the same: create expressive, roleplay-friendly dialogue data that captures emotion, tone, character personality, and nuance, especially for dere-type characters and NSFW/SFW variation.
Vol 0 is SFW only.
• What’s New:
- Improved JSON structure, closer to ShareGPT format
- More consistent tone/emotion tagging
- Added deeper context awareness (4 lines before/after)
- Preserved expressive elements (onomatopoeia, stutters, laughs)
- Categorized dere-type and added voice/personality cues
• Why?
Because tagging a line as just “laughing” misses everything. Was it sarcasm? Pain? Joy? I want models to understand motivation and emotional flow — not just parrot words.
Example (same as before to show improvement):
Flat version:
{
  "instruction": "What does Maple say?",
  "output": "Oopsie! I accidentally splashed some hot water on you! Sorry about that~ Ahahah-- Owwww!!",
  "metadata": {
    "character": "Maple",
    "emotion": "laughing",
    "tone": "apologetic"
  }
}
• Updated version with context:
{
"from": "char_metadata",
"value": {
"character_name": "Azuki",
"persona": "Azuki is a fiery, tomboyish...",
"dere_type": "tsundere",
"current_emotion": "mocking, amused, pain",
"tone": "taunting, surprised"
}
},
{
"from": "char",
"value": "You're a NEET catgirl who can only eat, sleep, and play! Huehuehueh, whooaaa!! Aagh, that's hotttt!!!"
},
{
"from": "char_metadata",
"value": {
"character_name": "Maple",
"persona": "Maple is a prideful, sophisticated catgirl...",
"dere_type": "himidere",
"current_emotion": "malicious glee, feigned innocence, pain",
"tone": "sarcastic, surprised"
}
},
{
"from": "char",
"value": "Oopsie! I accidentally splashed some hot water on you! Sorry about that~ Ahahah-- Owwww!!"
},
{
"from": "char_metadata",
"value": {
"character_name": "Azuki",
"persona": "Azuki is a fiery, tomboyish...",
"dere_type": "tsundere",
"current_emotion": "retaliatory, gleeful",
"tone": "sarcastic"
}
},
{
"from": "char",
"value": "Heh, my bad! My paw just flew right at'cha! Hahaha!"
}
• Outcome
This dataset now lets a model:
- Match dere-type voices with appropriate phrasing
- Preserve emotional realism in both SFW and NSFW contexts
- Move beyond basic emotion labels to expressive patterns (tsundere teasing, onomatopoeia, flustered laughter, etc.)
It’s still a work in progress (currently ~3MB and growing; dialogue only, without the JSON formatting yet), and more feedback is welcome. Just wanted to share the next step now that the format is finally usable and consistent.
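The "4 lines before/after" context mentioned in the changelog can be sketched as a simple sliding-window pass over the dialogue: for each line, collect up to four preceding and four following lines. A minimal illustration, assuming a flat list of dialogue strings; the field names are illustrative, not the dataset's actual schema:

```python
# Hedged sketch of per-line context windows (4 lines before/after each
# dialogue line), as described in the post. Field names are made up.
def context_windows(lines, k=4):
    for i, line in enumerate(lines):
        yield {
            "before": lines[max(0, i - k):i],  # up to k preceding lines
            "line": line,
            "after": lines[i + 1:i + 1 + k],   # up to k following lines
        }

windows = list(context_windows([f"line{n}" for n in range(10)]))
```

Edge lines simply get shorter windows, so the first line has an empty `before` list and the last an empty `after` list.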
| 2025-06-12T13:14:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9m6dc/update_emotionallyaware_vn_dialogue_dataset_deep/
|
Akowmako
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9m6dc
| false | null |
t3_1l9m6dc
|
/r/LocalLLaMA/comments/1l9m6dc/update_emotionallyaware_vn_dialogue_dataset_deep/
| false | false |
self
| 27 | null |
Locally running, scriptable process to extract Form Data from scanned (!) PDF?
| 1 |
[removed]
| 2025-06-12T13:25:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9meqh/locally_running_scriptable_process_to_extract/
|
cts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9meqh
| false | null |
t3_1l9meqh
|
/r/LocalLLaMA/comments/1l9meqh/locally_running_scriptable_process_to_extract/
| false | false | 1 | null |
|
Open WebUI Bug Reports Immediately Closed by Maintainer
| 0 |
[removed]
| 2025-06-12T13:41:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9msio/open_webui_bug_reports_immediately_closed_by/
|
liquidki
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9msio
| false | null |
t3_1l9msio
|
/r/LocalLLaMA/comments/1l9msio/open_webui_bug_reports_immediately_closed_by/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=108&crop=smart&auto=webp&s=55da3d89a6b0fc42b49a5c73bad04c4c87297391', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=216&crop=smart&auto=webp&s=8a23cbaa10c6329d18d6d188f28ea1f55978c0ef', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=320&crop=smart&auto=webp&s=aee4de7a214c59847bfae069118f5683b46865f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=640&crop=smart&auto=webp&s=f661867b4e86aa7371825dd3e10db8a89fb965fd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=960&crop=smart&auto=webp&s=f31dddf70d0d16072394d07d7aee3e38b36c397d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?width=1080&crop=smart&auto=webp&s=19a6da73d6b47f6b509a046faee708f06548ff93', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nkqDyTpDorAnmtsr2tluKY_TZm_AfXF403J6OR3EXX8.png?auto=webp&s=c9664eba30969756fd73436fe1c3e219327764d9', 'width': 1200}, 'variants': {}}]}
|
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models
| 39 |
We introduce ABBA, a new architecture for Parameter-Efficient Fine-Tuning (PEFT) that significantly outperforms LoRA and all its major variants across a broad range of benchmarks, all under the same parameter budget.
Most PEFT methods, including LoRA, represent weight updates using a low-rank decomposition added to the frozen model weights. While effective, this structure can limit the expressivity of the update, especially at low rank.
ABBA takes a fundamentally different approach:
[ABBA Architecture](https://preview.redd.it/nta9e7md3i6f1.png?width=446&format=png&auto=webp&s=54e090db99fe4694c4b2e9a80778576b0f705169)
* Reparameterizes the update as a Hadamard product of two independently learned low-rank matrices
* Decouples the two components of the update from the base model, allowing them to be optimized freely
* Enables significantly higher expressivity and improved performance under the same parameter budget
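A minimal numerical sketch of the update rule (shapes and rank below are illustrative choices, not taken from the paper):

```python
import numpy as np

# ABBA reparameterizes the weight update as a Hadamard (element-wise)
# product of two independently learned low-rank factorizations:
#   delta_W = (B1 @ A1) * (B2 @ A2)
d_out, d_in, r = 64, 32, 4  # illustrative dimensions

rng = np.random.default_rng(0)
B1, A1 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))
B2, A2 = rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))

delta_W = (B1 @ A1) * (B2 @ A2)  # element-wise product of two rank-r updates

# The Hadamard product of two rank-r matrices can reach rank up to r*r,
# which is where the extra expressivity over a single low-rank term comes from.
print(delta_W.shape, np.linalg.matrix_rank(delta_W))
```

With generic random factors, the effective rank of `delta_W` exceeds `r`, unlike a single LoRA-style low-rank term at the same parameter count.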
📈 Empirical Results
ABBA consistently beats state-of-the-art LoRA-based methods like HiRA, DoRA, and LoRA-Pro across four open-source LLMs: Mistral-7B, Gemma-2 9B, LLaMA-3.2 1B, and LLaMA-3.2 3B, on a suite of commonsense and arithmetic reasoning benchmarks. In several cases, ABBA even outperforms full fine-tuning.
📄 Paper: [https://arxiv.org/abs/2505.14238](https://arxiv.org/abs/2505.14238)
💻 Code: [https://github.com/CERT-Lab/abba](https://github.com/CERT-Lab/abba)
We’d love to hear your thoughts, whether you're working on PEFT methods, fine-tuning, or anything related to making LLMs more adaptable and efficient. We're happy to answer questions, discuss implementation details, or just hear how this fits into your work.
| 2025-06-12T14:01:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9n911/abba_highly_expressive_hadamard_product/
|
AccomplishedCode4689
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9n911
| false | null |
t3_1l9n911
|
/r/LocalLLaMA/comments/1l9n911/abba_highly_expressive_hadamard_product/
| false | false |
self
| 39 | null |
Using LLM's with Home Assistant + Voice Integration
| 8 |
Looking to set up home assistant at home with a LLM connected to make the assistant more conversational. It doesn't need to have superior depth of knowledge, but I am looking for something that can respond creatively, conversationally, dynamically to a variety of requests centered around IoT tasks. In my head this is something like Qwen3 8B or 14B.
Are there any NUCs/mini PCs that would fit the bill here? Is it generally recommended that the LLM be hosted on separate hardware from the Home Assistant server?
In the long term I'd like to explore a larger system to accommodate something more comprehensive for general use, but in the near term I'd like to start playing with this project.
| 2025-06-12T14:07:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9ndp2/using_llms_with_home_assistant_voice_integration/
|
nat2r
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ndp2
| false | null |
t3_1l9ndp2
|
/r/LocalLLaMA/comments/1l9ndp2/using_llms_with_home_assistant_voice_integration/
| false | false |
self
| 8 | null |
Tired of losing great ChatGPT messages and having to scroll back all the way? I built SnapIt, a Chrome extension to instantly save, organize & export them!
| 0 |
I got tired of endlessly scrolling to find great ChatGPT messages I'd forgotten to save. It drove me crazy, so I built something to fix it.
Honestly, I'm surprised how much I ended up using it.
It's actually super useful when you're building a project, doing research, or coming up with a plan, because you can save all the different parts ChatGPT sends you and always have instant access to them.
**SnapIt** is a Chrome extension designed specifically for ChatGPT. You can:
* Instantly save any ChatGPT message in one click.
* Jump directly back to the original message in your chat.
* Copy the message quickly in plain text format.
* Export messages to professional-looking PDFs instantly.
* Organize your saved messages neatly into folders and pinned favorites.
Perfect if you're using ChatGPT for work, school, research, or creative brainstorming.
Would love your feedback or any suggestions you have!
Link to the extension: [https://chromewebstore.google.com/detail/snapit-chatgpt-message-sa/mlfbmcmkefmdhnnkecdoegomcikmbaac](https://chromewebstore.google.com/detail/snapit-chatgpt-message-sa/mlfbmcmkefmdhnnkecdoegomcikmbaac)
| 2025-06-12T14:07:23 |
https://www.reddit.com/gallery/1l9ndww
|
cedparadis
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ndww
| false | null |
t3_1l9ndww
|
/r/LocalLLaMA/comments/1l9ndww/tired_of_losing_great_chatgpt_messages_and_having/
| false | false | 0 | null |
|
Tired of losing great ChatGPT messages and having to scroll back all the way?
| 0 |
I got tired of endlessly scrolling to find great ChatGPT messages I'd forgotten to save. It drove me crazy, so I built something to fix it.
Honestly, I'm surprised how much I ended up using it.
It's actually super useful when you're building a project, doing research, or coming up with a plan, because you can save all the different parts ChatGPT sends you and always have instant access to them.
**SnapIt** is a Chrome extension designed specifically for ChatGPT. You can:
* Instantly save any ChatGPT message in one click.
* Jump directly back to the original message in your chat.
* Copy the message quickly in plain text format.
* Export messages to professional-looking PDFs instantly.
* Organize your saved messages neatly into folders and pinned favorites.
Perfect if you're using ChatGPT for work, school, research, or creative brainstorming.
Would love your feedback or any suggestions you have!
Link to the extension: [https://chromewebstore.google.com/detail/snapit-chatgpt-message-sa/mlfbmcmkefmdhnnkecdoegomcikmbaac](https://chromewebstore.google.com/detail/snapit-chatgpt-message-sa/mlfbmcmkefmdhnnkecdoegomcikmbaac)
| 2025-06-12T14:33:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9o0ct/tired_of_losing_great_chatgpt_messages_and_having/
|
cedparadis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9o0ct
| false | null |
t3_1l9o0ct
|
/r/LocalLLaMA/comments/1l9o0ct/tired_of_losing_great_chatgpt_messages_and_having/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '5HcRO6e7F1edK4P46xwComq2h8cvm5EOFg0431l1Crc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IKNO4ysc9NLlDVA-_8hcNCcyOvqj3ZgUDXMmMRh8Re4.jpg?width=108&crop=smart&auto=webp&s=c42b1b400f46ef4ceaf07a8f05523dcb3ed9a8a2', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/IKNO4ysc9NLlDVA-_8hcNCcyOvqj3ZgUDXMmMRh8Re4.jpg?auto=webp&s=267d71865eae41c585bad8cbe0515a03dce9ff57', 'width': 128}, 'variants': {}}]}
|
[P] Solving Tower of Hanoi for N ≥ 15 with LLMs: It’s Not About Model Size, It’s About Prompt Engineering
| 1 |
[removed]
| 2025-06-12T14:38:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9o54l/p_solving_tower_of_hanoi_for_n_15_with_llms_its/
|
Pale-Entertainer-386
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9o54l
| false | null |
t3_1l9o54l
|
/r/LocalLLaMA/comments/1l9o54l/p_solving_tower_of_hanoi_for_n_15_with_llms_its/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=108&crop=smart&auto=webp&s=71c7c023a9ec57f87927f898daaedbe1dca2b02a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=216&crop=smart&auto=webp&s=2b5c99d6c8d43569dc1c96a48cd18694e12f76e8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=320&crop=smart&auto=webp&s=1d05a837b94633f36ec3e29612a97dde03ccb698', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=640&crop=smart&auto=webp&s=bfc6ba6d645111497729241672f6575f4961c54f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=960&crop=smart&auto=webp&s=2b98de73fb7c45a3fbe662418c38461608a9e55d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=1080&crop=smart&auto=webp&s=a5686a6365e31b98cc645cd29199d28afc1c6ddf', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?auto=webp&s=02fa19c42806060fda1b75c4e4ccf6c9b5fed941', 'width': 1200}, 'variants': {}}]}
|
Transformer Lab Now Supports Diffusion Model Training in Addition to LLM Training
| 82 |
In addition to LLM training and inference, we're excited to have just launched Diffusion Model inference and training. It's all open source! We'd love your feedback and to see what you build.
In the platform we support most major open Diffusion models (including SDXL & Flux). The platform supports inpainting, img2img, and of course LoRA training.
Link to documentation and details here [https://transformerlab.ai/blog/diffusion-support](https://transformerlab.ai/blog/diffusion-support)
| 2025-06-12T15:09:03 |
aliasaria
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ovq7
| false | null |
t3_1l9ovq7
|
/r/LocalLLaMA/comments/1l9ovq7/transformer_lab_now_supports_diffusion_model/
| false | false | 82 |
{'enabled': True, 'images': [{'id': '_xYpsuq7aXBrygYusZ4m8czLrPVdDaO2V3iXEpTmeLw', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=108&crop=smart&auto=webp&s=e07788a20d1b72cb5985f6ce1b8d6a999ca37d15', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=216&crop=smart&auto=webp&s=a6e2bbc28f4a550cd45a8c99a9aa46612ec969fb', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=320&crop=smart&auto=webp&s=972d408a273f0d28dac3e99ce0b17f538e5644a6', 'width': 320}, {'height': 436, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=640&crop=smart&auto=webp&s=4b03743bbc2b1a143d93935a612a5c4512444879', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=960&crop=smart&auto=webp&s=abdacb52cfe18f2a3f92dc9b7fe8e7e5dc979554', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?width=1080&crop=smart&auto=webp&s=ba37ea626d0feb8d3004fa41d56df933e4db5291', 'width': 1080}], 'source': {'height': 1896, 'url': 'https://preview.redd.it/usk33qqlgi6f1.png?auto=webp&s=e1cd6f42b50365203a5cb9e79a5146b34a1b9b1f', 'width': 2778}, 'variants': {}}]}
|
||
[update] Restructured repo under rvn-tools — modular CLI for LLM formats
| 11 |
Quick update.
Yesterday I posted about `rvn-convert`, a Rust tool for converting safetensors to GGUF.
While fixing bugs today, I also restructured the project under `rvn-tools` — a modular, CLI-oriented, Rust-native toolkit for LLM model formats, inference workflows, and data pipelines.
🔧 What's in so far:
- safetensors -> GGUF converter (initial implementation)
- CLI layout with `clap`, shard parsing, typed metadata handling
- Makefile-based workflow (fmt, clippy, release, test, etc.)
🎯 Focus:
- Fully open, minimal, and performant
- Memory-mapped operations, zero copy, zero move
- Built for **local inference**, not cloud bloat
- Python bindings planned via `pyo3` (coming soon)
Next steps:
- tokenizer tooling
- QKV and other debugging tooling
- tensor validator / preprocessor
- some other ideas as I go along
Open to feedback, bug reports, or ideas.
Repo: [https://github.com/rvnllm/rvn-tools](https://github.com/rvnllm/rvn-tools)
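For context on what such a converter reads: a `.safetensors` file begins with an 8-byte little-endian length followed by a JSON header (per the safetensors spec). A minimal sketch of parsing that header, shown in Python for brevity on a synthetic in-memory blob:

```python
import json
import struct

def read_safetensors_header(buf: bytes) -> dict:
    """Parse the JSON header of a .safetensors blob.

    Per the safetensors spec: the first 8 bytes are a little-endian u64
    giving the JSON header length, followed by the header itself.
    """
    (n,) = struct.unpack("<Q", buf[:8])
    return json.loads(buf[8 : 8 + n])

# Build a tiny in-memory example: one fp32 tensor of 2 elements (8 bytes).
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
hj = json.dumps(header).encode()
blob = struct.pack("<Q", len(hj)) + hj + b"\x00" * 8  # header + tensor data

print(read_safetensors_header(blob)["w"])
```

A memory-mapped implementation only needs to touch those first `8 + n` bytes to enumerate tensors, which is what makes zero-copy conversion feasible.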
| 2025-06-12T15:12:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9oyt7/update_restructured_repo_under_rvntools_modular/
|
rvnllm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9oyt7
| false | null |
t3_1l9oyt7
|
/r/LocalLLaMA/comments/1l9oyt7/update_restructured_repo_under_rvntools_modular/
| false | false |
self
| 11 |
{'enabled': False, 'images': [{'id': 'yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=108&crop=smart&auto=webp&s=dc92a3863a13234a2265d490aa7cf6ee54fd6566', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=216&crop=smart&auto=webp&s=6e3345de09dc01f42a8f0f784a2a7102148a0686', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=320&crop=smart&auto=webp&s=b63b549f1b4373685e737b08bd47d503817ac8e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=640&crop=smart&auto=webp&s=2b98ad9ea3664b0dd39fe34a8c2bba536a5215de', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=960&crop=smart&auto=webp&s=5dd155d5e48fe343a9ca59fd5c84c03d9e29912d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?width=1080&crop=smart&auto=webp&s=fde843eb9363e213b0d523e193a5b048035b98ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yMp2iBPH4K3QakdvNk8EWXidfVsCExG77jPoa1jcn7s.png?auto=webp&s=573b697a5ef0d3a34688b4ffdb3dd47a89c36eaa', 'width': 1200}, 'variants': {}}]}
|
Nanonets-OCR-s: An Open-Source Image-to-Markdown Model with LaTeX, Tables, Signatures, checkboxes & More
| 325 |
We're excited to share **Nanonets-OCR-s**, a powerful and lightweight (3B) VLM model that converts documents into clean, structured **Markdown**. This model is trained to understand document structure and content context (like tables, equations, images, plots, watermarks, checkboxes, etc.).
🔍 **Key Features:**
* **LaTeX Equation Recognition** Converts inline and block-level math into properly formatted LaTeX, distinguishing between `$...$` and `$$...$$`.
* **Image Descriptions for LLMs** Describes embedded images using structured `<img>` tags. Handles logos, charts, plots, and so on.
* **Signature Detection & Isolation** Finds and tags signatures in scanned documents, outputting them in `<signature>` blocks.
* **Watermark Extraction** Extracts watermark text and stores it within `<watermark>` tag for traceability.
* **Smart Checkbox & Radio Button Handling** Converts checkboxes to Unicode symbols like ☑, ☒, and ☐ for reliable parsing in downstream apps.
* **Complex Table Extraction** Handles multi-row/column tables, preserving structure and outputting both **Markdown** and **HTML** formats.
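Because the model emits these structured tags inline in the Markdown, downstream extraction can be plain string processing. A small sketch on a made-up output string (the tag names follow the announcement; the content is invented):

```python
import re

# Hypothetical model output illustrating the structured tags described above.
ocr_markdown = """# Invoice
<watermark>CONFIDENTIAL</watermark>
Total due: $1,200
☑ Paid   ☐ Pending
<signature>J. Doe</signature>
"""

watermarks = re.findall(r"<watermark>(.*?)</watermark>", ocr_markdown, re.DOTALL)
signatures = re.findall(r"<signature>(.*?)</signature>", ocr_markdown, re.DOTALL)
checked = ocr_markdown.count("☑")  # Unicode checkbox symbols survive as plain text

print(watermarks, signatures, checked)
```

This is one reason tag-based output is convenient: no layout model is needed on the consuming side.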
**Huggingface / GitHub / Try it out**:
[Huggingface Model Card](https://huggingface.co/nanonets/Nanonets-OCR-s)
[Read the full announcement](https://nanonets.com/research/nanonets-ocr-s/)
[Try it with Docext in Colab](https://github.com/NanoNets/docext/blob/main/PDF2MD_README.md#quickstart)
[Document with checkbox and radio buttons](https://preview.redd.it/9r53s8oxii6f1.png?width=1762&format=png&auto=webp&s=5d401151504b45c6c7b7aa49342a2a40bff19a3d)
[Document with image](https://preview.redd.it/ky28mxc1ji6f1.jpg?width=3938&format=pjpg&auto=webp&s=0f0ba053a366c0fd0885aea5785b5c040ff590fd)
[Document with equations](https://preview.redd.it/yfrazoi3ji6f1.png?width=3640&format=png&auto=webp&s=4215d426d90f153c5a477f140aa47312e153aab8)
[Document with watermark](https://preview.redd.it/am74wtm5ji6f1.jpg?width=1533&format=pjpg&auto=webp&s=380788a942ee270cf73ddc7e968773948b4f74f0)
[Document with tables](https://preview.redd.it/6g80yoj9ji6f1.png?width=3482&format=png&auto=webp&s=8d6008f9e56e03e7589f38a16ba1917e4e0c419b)
Feel free to try it out and share your feedback.
| 2025-06-12T15:19:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9p54x/nanonetsocrs_an_opensource_imagetomarkdown_model/
|
SouvikMandal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9p54x
| false | null |
t3_1l9p54x
|
/r/LocalLLaMA/comments/1l9p54x/nanonetsocrs_an_opensource_imagetomarkdown_model/
| false | false | 325 |
{'enabled': False, 'images': [{'id': '_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=108&crop=smart&auto=webp&s=cb7942f143c257c4fa7a42d2b63993c819dab9b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=216&crop=smart&auto=webp&s=d585b90b25ab39502f026f190a1538f52ba6bdaa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=320&crop=smart&auto=webp&s=e9bf1d451158e391452460eff9bc0d4895f1fa21', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=640&crop=smart&auto=webp&s=a45bdc43de827d3d2f41c39e7eb1c65d82e73b20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=960&crop=smart&auto=webp&s=2deefd1e5b96cf570789204710587c289e29f7eb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?width=1080&crop=smart&auto=webp&s=6d5cf2095d29b927be01e0375b5b0a1912436009', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_xcgPKgts5yF5jBIuB89LBBM6G1OD9-qimuRMUyq8jY.png?auto=webp&s=f8badf5066057d80f6a5a630949982111543011f', 'width': 1200}, 'variants': {}}]}
|
|
🧙♂️ I Built a Local AI Dungeon Master – Meet Dungeo_ai (Open Source & Powered by your local LLM )
| 50 |
https://reddit.com/link/1l9pwk1/video/u4614vthpi6f1/player
Hey folks!
I’ve been building something I'm super excited to finally share:
🎲 [**Dungeo_ai**](https://github.com/Laszlobeer/Dungeo_ai/tree/main) – a fully local, AI-powered Dungeon Master designed for immersive solo RPGs, worldbuilding, and roleplay.
This project is free, and for now it connects to Ollama (LLM) and AllTalk (TTS).
🛠️ What it can do:
* 💻 Runs entirely **locally** (with support for Ollama)
* 🧠 Persists memory, character state, and custom personalities
* 📜 Simulates D&D-like dialogue and encounters dynamically
* 🗺️ Expands lore over time with each interaction
* 🧙 Great for solo campaigns, worldbuilding, or even prototyping NPCs
It’s still early days, but it’s usable and growing. I’d love feedback, collab ideas, or even just to know what kind of characters you’d throw into it.
Here’s the link again:
👉 [https://github.com/Laszlobeer/Dungeo\_ai/tree/main](https://github.com/Laszlobeer/Dungeo_ai/tree/main)
Thanks for checking it out—and if you give it a spin, let me know how your first AI encounter goes. 😄
| 2025-06-12T15:50:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9pwk1/i_built_a_local_ai_dungeon_master_meet_dungeo_ai/
|
Reasonable_Brief578
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9pwk1
| false | null |
t3_1l9pwk1
|
/r/LocalLLaMA/comments/1l9pwk1/i_built_a_local_ai_dungeon_master_meet_dungeo_ai/
| false | false | 50 |
{'enabled': False, 'images': [{'id': '1nXVLMHSKoC4uc_XliGvQaC1UWwnoEqL3u0Xe2oRCf4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=108&crop=smart&auto=webp&s=6f5b026dfa350f996d44056b731cfee8edd67977', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=216&crop=smart&auto=webp&s=f91f07434a0a11e8a6e1f7d2951aa638cd053e5a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=320&crop=smart&auto=webp&s=761389f83a77fe8e2cb137fb952ef0b575984fa0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=640&crop=smart&auto=webp&s=78f390b5728c07a17be669aafa3632a5c3408b7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=960&crop=smart&auto=webp&s=b9dd0316a233d426fc5165f46a407f10ea7ea65b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?width=1080&crop=smart&auto=webp&s=1de6bb21b461a3cc135f68808ae7373d2f325117', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VKXieTbOFzGU2aZe9EDyviI58NBmBHkcxoJcwzIpt0A.jpg?auto=webp&s=aa47aecc495fa78495eedd47bfa35d9a6c4b81f9', 'width': 1200}, 'variants': {}}]}
|
|
I made a very cool (free) iOS app. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your Mac.
| 1 |
[removed]
| 2025-06-12T15:51:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9px3z/i_made_a_very_cool_free_ios_app_its_a_chatbot/
|
Valuable-Run2129
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9px3z
| false | null |
t3_1l9px3z
|
/r/LocalLLaMA/comments/1l9px3z/i_made_a_very_cool_free_ios_app_its_a_chatbot/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=216&crop=smart&auto=webp&s=e2e7e24a974081f4dd971cadad2c8e920e2dfea9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=320&crop=smart&auto=webp&s=06f8bc70517305d0fedecbdbecf094ec88d2c043', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=640&crop=smart&auto=webp&s=af1f408e55328e95eb4d4eded7df2056e55f0c96', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=960&crop=smart&auto=webp&s=6fea42225e35e2f92e14b0bbda042100f700f112', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=1080&crop=smart&auto=webp&s=c56a70dcc06af16fb5fb9a2aa7c1a950003872e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?auto=webp&s=925a97e87100011d000d3cb8bc7058513f5e2337', 'width': 1200}, 'variants': {}}]}
|
Open Source, free iOS Chatbot that you can use away from home to interact with an LLM that runs locally on your Mac at home.
| 1 |
[removed]
| 2025-06-12T16:00:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9q5dc/open_source_free_ios_chatbot_that_you_can_use/
|
Valuable-Run2129
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9q5dc
| false | null |
t3_1l9q5dc
|
/r/LocalLLaMA/comments/1l9q5dc/open_source_free_ios_chatbot_that_you_can_use/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=216&crop=smart&auto=webp&s=e2e7e24a974081f4dd971cadad2c8e920e2dfea9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=320&crop=smart&auto=webp&s=06f8bc70517305d0fedecbdbecf094ec88d2c043', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=640&crop=smart&auto=webp&s=af1f408e55328e95eb4d4eded7df2056e55f0c96', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=960&crop=smart&auto=webp&s=6fea42225e35e2f92e14b0bbda042100f700f112', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=1080&crop=smart&auto=webp&s=c56a70dcc06af16fb5fb9a2aa7c1a950003872e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?auto=webp&s=925a97e87100011d000d3cb8bc7058513f5e2337', 'width': 1200}, 'variants': {}}]}
|
I made a free, open source iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your desktop Mac.
| 1 |
[removed]
| 2025-06-12T16:03:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9q87m/i_made_a_free_open_source_ios_app_for_this/
|
Valuable-Run2129
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9q87m
| false | null |
t3_1l9q87m
|
/r/LocalLLaMA/comments/1l9q87m/i_made_a_free_open_source_ios_app_for_this/
| false | false |
self
| 1 | null |
Optimal server for inference with large models
| 1 |
[removed]
| 2025-06-12T16:06:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9qayp/optimal_server_for_inference_with_large_models/
|
slavik-f
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9qayp
| false | null |
t3_1l9qayp
|
/r/LocalLLaMA/comments/1l9qayp/optimal_server_for_inference_with_large_models/
| false | false |
self
| 1 | null |
I made an open source, free iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your Mac.
| 1 |
[removed]
| 2025-06-12T16:13:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9qhsf/i_made_an_open_source_free_ios_app_for_this/
|
matteoianni
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9qhsf
| false | null |
t3_1l9qhsf
|
/r/LocalLLaMA/comments/1l9qhsf/i_made_an_open_source_free_ios_app_for_this/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=108&crop=smart&auto=webp&s=590963253e1d7889f33d6e5bea945d1cb0f3a4be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=216&crop=smart&auto=webp&s=a43f786fb00d02bfba139de029d15a5377575f3e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=320&crop=smart&auto=webp&s=5fe5d17f0fb08aff249f76cb24294ee63db95a50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=640&crop=smart&auto=webp&s=079042269001a3d0e59a82bb48664eaa4c89d049', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=960&crop=smart&auto=webp&s=51591a4245852d2891f48bf5504e0eb86c2682d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?width=1080&crop=smart&auto=webp&s=c7116d8eaf6fc13c370569f120c6831e0cd6d5dc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU.png?auto=webp&s=5863e3d0ba5266774e53c205c36ef44459467636', 'width': 1200}, 'variants': {}}]}
|
I made an open source (free) iOS app for this community. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your desktop Mac.
| 1 |
[removed]
| 2025-06-12T16:24:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9qrwb/i_made_an_open_source_free_ios_app_for_this/
|
Valuable-Run2129
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9qrwb
| false | null |
t3_1l9qrwb
|
/r/LocalLLaMA/comments/1l9qrwb/i_made_an_open_source_free_ios_app_for_this/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '_P7aq5FxwSERQmFp1wBuNrGdpYo8M0NcjHGzxEJkCSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=108&crop=smart&auto=webp&s=01032f9f1c83429c54cd66c3de219a1dacb3bbb0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=216&crop=smart&auto=webp&s=e2e7e24a974081f4dd971cadad2c8e920e2dfea9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=320&crop=smart&auto=webp&s=06f8bc70517305d0fedecbdbecf094ec88d2c043', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=640&crop=smart&auto=webp&s=af1f408e55328e95eb4d4eded7df2056e55f0c96', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=960&crop=smart&auto=webp&s=6fea42225e35e2f92e14b0bbda042100f700f112', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?width=1080&crop=smart&auto=webp&s=c56a70dcc06af16fb5fb9a2aa7c1a950003872e7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N6fBtvNZvzprN51ltQmbLyZKOr7u22nka_5L17e-60c.jpg?auto=webp&s=925a97e87100011d000d3cb8bc7058513f5e2337', 'width': 1200}, 'variants': {}}]}
|
Seedance 1.0
| 8 | 2025-06-12T16:47:04 |
https://seed.bytedance.com/en/seedance
|
kamikazechaser
|
seed.bytedance.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9rcoj
| false | null |
t3_1l9rcoj
|
/r/LocalLLaMA/comments/1l9rcoj/seedance_10/
| false | false |
default
| 8 | null |
|
Qwen3-72B-Embiggened
| 175 | 2025-06-12T16:49:08 |
https://huggingface.co/cognitivecomputations/Qwen3-72B-Embiggened
|
TKGaming_11
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9rejn
| false | null |
t3_1l9rejn
|
/r/LocalLLaMA/comments/1l9rejn/qwen372bembiggened/
| false | false | 175 |
{'enabled': False, 'images': [{'id': '3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=108&crop=smart&auto=webp&s=7e15934d9ca0b81ee373ab4d5a0a90ea09a30c12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=216&crop=smart&auto=webp&s=49d01bd57b9a04ebf9846ac496a608e26edea504', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=320&crop=smart&auto=webp&s=dcebb00602361f6a77e9332b97ea4531411382fe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=640&crop=smart&auto=webp&s=caeaed61b9102d58a91296d431d16a9370486b24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=960&crop=smart&auto=webp&s=d3d1f803f8e0dcbda246d59faef894a5da33c11e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?width=1080&crop=smart&auto=webp&s=37f7443df0f645cf0c802d74060ba75a77a5b55d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3jemtoTl3dbvGWwls0qD8rxMoJ2jFMtej9rCleQmntc.png?auto=webp&s=7aa83dd77b5eff5bb967bcb27141cf8b8f6a8b98', 'width': 1200}, 'variants': {}}]}
|
||
The guide to building MCP agents using OpenAI Agents SDK
| 0 |
Building MCP agents felt a little complex to me, so I took some time to learn about it and created a [free guide](https://levelup.gitconnected.com/the-complete-guide-to-building-mcp-agents-ec877f30136d?source=friends_link&sk=f97341c5b0f7cfb735cc49749fa88f32). Covered the following topics in detail.
1. Brief overview of MCP (with core components)
2. The architecture of MCP Agents
3. Created a list of all the frameworks & SDKs available to build MCP Agents (such as OpenAI Agents SDK, MCP Agent, Google ADK, CopilotKit, LangChain MCP Adapters, PraisonAI, Semantic Kernel, Vercel SDK, ....)
4. A step-by-step guide on how to build your first MCP Agent using [OpenAI Agents SDK](https://openai.github.io/openai-agents-python/). Integrated with GitHub to create an issue on the repo from the terminal (source code + complete flow)
5. Two more practical examples in the last section:
\- first one uses the MCP Agent framework (by lastmile ai) that looks up a file, reads a blog and writes a tweet
\- second one uses the OpenAI Agents SDK which is integrated with Gmail to send an email based on the task instructions
Would appreciate your feedback, especially if there’s anything important I have missed or misunderstood.
| 2025-06-12T16:58:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9rnep/the_guide_to_building_mcp_agents_using_openai/
|
anmolbaranwal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9rnep
| false | null |
t3_1l9rnep
|
/r/LocalLLaMA/comments/1l9rnep/the_guide_to_building_mcp_agents_using_openai/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'R9A5HK-2j0v7yNVydxgAeECVZ8i4g3qfNDvZfNb-aDM', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=108&crop=smart&auto=webp&s=9e00f69182dd7b561b6baf4fdada6dd716d8a3d5', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=216&crop=smart&auto=webp&s=799fb0068c97686f6eeeb0e583d8f1899fa1b0da', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=320&crop=smart&auto=webp&s=fd2dd05c91d1a4deb0db648e9db5ae26905c81d1', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=640&crop=smart&auto=webp&s=4bbd731c1242bd6bc6bf1f05498e7ec7923501a1', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=960&crop=smart&auto=webp&s=f4cc0056de8703fc2ada9eb835aeb0d3760b771e', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?width=1080&crop=smart&auto=webp&s=050963d9c9b00f681584e1170d84681170c7f0a0', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/5ajYo5dYXk8vYcgCyzN0m0_1xW85yuW2HvFPuWd2_s8.jpg?auto=webp&s=ed9cc769e55177447337008949b7c2bf4d5106c6', 'width': 1200}, 'variants': {}}]}
|
devstral does not code in c++
| 1 |
[removed]
| 2025-06-12T17:53:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9t2ap/devstral_does_not_code_in_c/
|
akierum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9t2ap
| false | null |
t3_1l9t2ap
|
/r/LocalLLaMA/comments/1l9t2ap/devstral_does_not_code_in_c/
| false | false |
self
| 1 | null |
Why Search Sucks! (But First, A Brief History)
| 1 | 2025-06-12T18:13:54 |
https://youtu.be/vZVcBUnre-c
|
kushalgoenka
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9tl7m
| false |
{'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZVcBUnre-c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Why Search Sucks! (But First, A Brief History)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/vZVcBUnre-c/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Why Search Sucks! (But First, A Brief History)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1l9tl7m
|
/r/LocalLLaMA/comments/1l9tl7m/why_search_sucks_but_first_a_brief_history/
| false | false |
default
| 1 | null |
|
Media Request - USPTO RFI for AI tools
| 1 |
[removed]
| 2025-06-12T18:37:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9u767/media_request_uspto_rfi_for_ai_tools/
|
IPreporter999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9u767
| false | null |
t3_1l9u767
|
/r/LocalLLaMA/comments/1l9u767/media_request_uspto_rfi_for_ai_tools/
| false | false |
self
| 1 | null |
Mixed GPU inference
| 16 |
Decided to hop on the RTX 6000 PRO bandwagon. Now my question is: can I run inference across three different cards, say the 6000, a 4090, and a 3090 (144GB VRAM total), using Ollama? Are there any issues or downsides to doing this?
Bonus question: a big-parameter model with a low-precision quant, or a full-precision model with a lower parameter count, which wins out?
| 2025-06-12T18:38:38 |
https://www.reddit.com/gallery/1l9u8fv
|
cruzanstx
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9u8fv
| false | null |
t3_1l9u8fv
|
/r/LocalLLaMA/comments/1l9u8fv/mixed_gpu_inference/
| false | false | 16 |
{'enabled': True, 'images': [{'id': 'BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=108&crop=smart&auto=webp&s=e5d6937ac09c025d4936cb530f4ba964e537d0b0', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=216&crop=smart&auto=webp&s=d8c6bf7b341f8dade8c59d096a159b158fa0c072', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=320&crop=smart&auto=webp&s=15a1a1d10cd699a30b54beda4f2c8bc3e4d340e9', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=640&crop=smart&auto=webp&s=eecd299a30b66ed47b1f46535171f8a223910e96', 'width': 640}, {'height': 1280, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=960&crop=smart&auto=webp&s=0c313e2c5c5021bdf0a661129771bd9c45c36486', 'width': 960}, {'height': 1440, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?width=1080&crop=smart&auto=webp&s=ecbf616691fc9b951600558f376195ca4baa1fe5', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://external-preview.redd.it/BoDQClSCUKFzanPYb-kVp5_IXmAiaMDddF12wT1MC94.jpeg?auto=webp&s=605710d4c31dd579bb63823d6372f4e43d58d4ad', 'width': 3000}, 'variants': {}}]}
|
|
Drummer's Agatha 111B v1 - Command A tune with less positivity and better creativity!
| 1 |
[removed]
| 2025-06-12T18:41:41 |
https://huggingface.co/TheDrummer/Agatha-111B-v1
|
TheLocalDrummer
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ubdy
| false | null |
t3_1l9ubdy
|
/r/LocalLLaMA/comments/1l9ubdy/drummers_agatha_111b_v1_command_a_tune_with_less/
| false | false |
default
| 1 | null |
Drummer's Agatha 111B v1 - Command A tune with less positivity and better creativity!
| 48 |
PSA! My testers at BeaverAI are pooped!
Cydonia needs your help! We're looking to release a v3.1 but came up with several candidates with their own strengths and weaknesses. They've all got tons of potential but we can only have ONE v3.1.
Help me pick the winner from these:
* [https://huggingface.co/BeaverAI/Cydonia-24B-v3j-GGUF](https://huggingface.co/BeaverAI/Cydonia-24B-v3j-GGUF)
* [https://huggingface.co/BeaverAI/Cydonia-24B-v3i-GGUF](https://huggingface.co/BeaverAI/Cydonia-24B-v3i-GGUF)
* [https://huggingface.co/BeaverAI/Cydonia-24B-v3h-GGUF](https://huggingface.co/BeaverAI/Cydonia-24B-v3h-GGUF) (May ignore?)
* [https://huggingface.co/BeaverAI/Cydonia-24B-v3g-GGUF](https://huggingface.co/BeaverAI/Cydonia-24B-v3g-GGUF)
| 2025-06-12T18:43:11 |
https://huggingface.co/TheDrummer/Agatha-111B-v1
|
TheLocalDrummer
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ucsv
| false | null |
t3_1l9ucsv
|
/r/LocalLLaMA/comments/1l9ucsv/drummers_agatha_111b_v1_command_a_tune_with_less/
| false | false |
default
| 48 |
{'enabled': False, 'images': [{'id': '8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=108&crop=smart&auto=webp&s=dda555f95854492ea9240e78b5828951fa764ca9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=216&crop=smart&auto=webp&s=86ec0adb464b303fc20fb51d87440201af7914b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=320&crop=smart&auto=webp&s=218502cd9db037a37205978828dbf5e97e8745a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=640&crop=smart&auto=webp&s=a9aa0d10c56f1afbb644a1a90d88ca6c1bbc9317', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=960&crop=smart&auto=webp&s=8d04d28c9694e9feffabc80e5565c1e9b9ac6733', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?width=1080&crop=smart&auto=webp&s=d4959ca0a10be818cc0cb2f43be07d017989f4d0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8SUvc_SntqJPYJMpYlLHwIvKojeS37Q9MlW_-GMIUcs.png?auto=webp&s=7f2e61b7767af590a49af389de79afc59112b906', 'width': 1200}, 'variants': {}}]}
|
What enterprise LLM platforms or AI tools are best for internal use cases like compliance automation, wholesaler enablement, and document intelligence?
| 1 |
[removed]
| 2025-06-12T18:44:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9uegm/what_enterprise_llm_platforms_or_ai_tools_are/
|
InvestedThinkers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9uegm
| false | null |
t3_1l9uegm
|
/r/LocalLLaMA/comments/1l9uegm/what_enterprise_llm_platforms_or_ai_tools_are/
| false | false |
self
| 1 | null |
media request - USPTO/AI
| 1 |
[removed]
| 2025-06-12T18:45:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9ueyt/media_request_usptoai/
|
IPreporter999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ueyt
| false | null |
t3_1l9ueyt
|
/r/LocalLLaMA/comments/1l9ueyt/media_request_usptoai/
| false | false |
self
| 1 | null |
🚀 Hooshyar AI — Building a Fully Local, Privacy-First AI Personal Assistant (Looking for Support & Collaborators!)
| 1 |
[removed]
| 2025-06-12T18:47:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9ugkk/hooshyar_ai_building_a_fully_local_privacyfirst/
|
CookieFar26
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9ugkk
| false | null |
t3_1l9ugkk
|
/r/LocalLLaMA/comments/1l9ugkk/hooshyar_ai_building_a_fully_local_privacyfirst/
| false | false |
self
| 1 | null |
inclusionAI/Ming-Lite-Omni · Hugging Face
| 35 | 2025-06-12T18:54:32 |
https://huggingface.co/inclusionAI/Ming-Lite-Omni
|
ninjasaid13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9uncm
| false | null |
t3_1l9uncm
|
/r/LocalLLaMA/comments/1l9uncm/inclusionaimingliteomni_hugging_face/
| false | false |
default
| 35 |
{'enabled': False, 'images': [{'id': 'hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=108&crop=smart&auto=webp&s=cc1210fcb70a213cc463f1e58b0ef0fc196a1fe9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=216&crop=smart&auto=webp&s=ff7f66c844d410c8f1e96d8c4071bfda8355afa3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=320&crop=smart&auto=webp&s=3303f5cbe6a64d7b77779ad09669147aa8445bf6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=640&crop=smart&auto=webp&s=ea5ec9f606dd6d8a37d1d66a5b63c2288104ccd2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=960&crop=smart&auto=webp&s=33c84f7fd55d59214134225f2734ea3f670e0eb7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?width=1080&crop=smart&auto=webp&s=c12f16b2ff89a9e8d0799c4ada2662fe2e0850d5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hqIT61mJS1jkOJ-LQTx7HDscDPEojRnEkuy3JGbC7-8.png?auto=webp&s=2f20dde1a74192d39b3f42709a96439c444569c4', 'width': 1200}, 'variants': {}}]}
|
|
Apple be like...
| 0 | 2025-06-12T18:59:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9urju/apple_be_like/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9urju
| false | null |
t3_1l9urju
|
/r/LocalLLaMA/comments/1l9urju/apple_be_like/
| false | false | 0 | null |
||
Well...
| 0 | 2025-06-12T19:01:44 |
Mr_Moonsilver
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9uu58
| false | null |
t3_1l9uu58
|
/r/LocalLLaMA/comments/1l9uu58/well/
| false | false |
default
| 0 |
{'enabled': True, 'images': [{'id': 'fc0apk3mnj6f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=108&crop=smart&auto=webp&s=e3d14836d55d07cdf407ea46b382bc6f81dfa045', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=216&crop=smart&auto=webp&s=34d8157ec79a8ddab7a210211717f92f715995f7', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=320&crop=smart&auto=webp&s=bd38e827990815af5711a7c9044b1b201913d366', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=640&crop=smart&auto=webp&s=95b3e2efd517764e0ab5eda32ad649d8e300a263', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?width=960&crop=smart&auto=webp&s=d1a8a1d47f43f9ac3e19005c6874f962567b39d9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/fc0apk3mnj6f1.png?auto=webp&s=4524392884685c4195b2b31a7642011da1c7bf6e', 'width': 1024}, 'variants': {}}]}
|
||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
| 1 |
[removed]
| 2025-06-12T19:16:15 |
https://www.reddit.com/gallery/1l9v7j0
|
Heralax_Tekran
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9v7j0
| false | null |
t3_1l9v7j0
|
/r/LocalLLaMA/comments/1l9v7j0/augmentoolkit_30_7_months_of_work_mit_license/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM', 'resolutions': [{'height': 132, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=108&crop=smart&auto=webp&s=747bf6ef5a57c27dba3bb46dd4baa09d8ef755a2', 'width': 108}, {'height': 264, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=216&crop=smart&auto=webp&s=c08628418e6b1c8dee09bd5b0f39470b3c079684', 'width': 216}, {'height': 392, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=320&crop=smart&auto=webp&s=4d997b61f48250983ea42e883265380ab8f60656', 'width': 320}, {'height': 784, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=640&crop=smart&auto=webp&s=6424267f56054163796ca94f288c25c1e79d29d7', 'width': 640}, {'height': 1177, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=960&crop=smart&auto=webp&s=53b8f178c8c7186d0e06b80d166d3f2217a79fb0', 'width': 960}, {'height': 1324, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?width=1080&crop=smart&auto=webp&s=a1ec25ae5a4e62bbec06509bcc419c1fba996638', 'width': 1080}], 'source': {'height': 2394, 'url': 'https://external-preview.redd.it/BfvIL-pt7XSLVz72I36FgAs_zcgpOxo8QHhEqNhdedM.jpeg?auto=webp&s=7fb252823ced711cd33b29ee040429059d8be222', 'width': 1952}, 'variants': {}}]}
|
|
Guide: Install llama.cpp with rocm support on opensuse tumbleweed
| 1 |
[removed]
| 2025-06-12T19:34:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9vnvt/guide_install_llamacpp_with_rocm_support_on/
|
rohan-sircar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9vnvt
| false | null |
t3_1l9vnvt
|
/r/LocalLLaMA/comments/1l9vnvt/guide_install_llamacpp_with_rocm_support_on/
| false | false |
self
| 1 | null |
I love SillyTavern, but my friends hate me for recommending it
| 1 |
[removed]
| 2025-06-12T19:35:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9vovt/i_love_sillytavern_but_my_friends_hate_me_for/
|
RIPT1D3_Z
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9vovt
| false | null |
t3_1l9vovt
|
/r/LocalLLaMA/comments/1l9vovt/i_love_sillytavern_but_my_friends_hate_me_for/
| false | false |
self
| 1 | null |
Best Model/Hardware for coding locally - $2-$3k budget
| 5 |
Looking to use Roo Code with a locally hosted LLM.
Would like to get thoughts on what hardware and model to look at with a budget of about $2k - $3k.
I understand that this is somewhat of a heated topic at the moment, so I'm looking to ideally hear from folks who are doing local coding with this type of setup in this price range.
| 2025-06-12T19:36:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9vpzj/best_modelhardware_for_coding_locally_23k_budget/
|
G3rmanaviator
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9vpzj
| false | null |
t3_1l9vpzj
|
/r/LocalLLaMA/comments/1l9vpzj/best_modelhardware_for_coding_locally_23k_budget/
| false | false |
self
| 5 | null |
Meta Is Offering Nine Figure Salaries to Build Superintelligent AI. Mark going All In.
| 287 |
https://www.entrepreneur.com/business-news/meta-is-offering-nine-figure-pay-for-superintelligence-team/493040
| 2025-06-12T20:00:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9wbaw/meta_is_offering_nine_figure_salaries_to_build/
|
Neon_Nomad45
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9wbaw
| false | null |
t3_1l9wbaw
|
/r/LocalLLaMA/comments/1l9wbaw/meta_is_offering_nine_figure_salaries_to_build/
| false | false |
self
| 287 |
{'enabled': False, 'images': [{'id': 'Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=108&crop=smart&auto=webp&s=9c96066f5b6b9c6e530c5980ef18454fc769203f', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=216&crop=smart&auto=webp&s=57f1e6401ad336dc47bdd58286a13cb6a24ee454', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=320&crop=smart&auto=webp&s=b619140bd0219b3ece2ee860fdede14ebd6af4db', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=640&crop=smart&auto=webp&s=084b3d06ffb491f3e3be7eb0a43e9e2f6e135759', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=960&crop=smart&auto=webp&s=77a2cb0adb5bc27faba9a8293ee0037029f58e41', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?width=1080&crop=smart&auto=webp&s=ed45cdf0d9dde368cb526c68c0029a2d30a2f099', 'width': 1080}], 'source': {'height': 1333, 'url': 'https://external-preview.redd.it/Z9igJMl8_hdPOw6sIPbW75yCHdd4CNTLpS3YTLiM7Go.jpeg?auto=webp&s=6a6a603b200e55590cc9e32a19d6aec877a252f2', 'width': 2000}, 'variants': {}}]}
|
I wanted to ask what you mainly use locally served models for?
| 1 |
[removed]
| 2025-06-12T20:01:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9wc3b/i_wanted_to_ask_what_you_mainly_use_locally/
|
Repsol_Honda_PL
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9wc3b
| false | null |
t3_1l9wc3b
|
/r/LocalLLaMA/comments/1l9wc3b/i_wanted_to_ask_what_you_mainly_use_locally/
| false | false |
self
| 1 | null |
Do mini PCs provide a superb LLM inference chance ?
| 1 |
[removed]
| 2025-06-12T20:12:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9wlxq/do_mini_pcs_provide_a_superb_llm_inference_chance/
|
Highwaytothebeach
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9wlxq
| false | null |
t3_1l9wlxq
|
/r/LocalLLaMA/comments/1l9wlxq/do_mini_pcs_provide_a_superb_llm_inference_chance/
| false | false |
self
| 1 | null |
Why No One Is Using Mamba Anymore
| 1 |
[removed]
| 2025-06-12T20:35:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9x6gt/why_no_one_is_using_mamba_anymore/
|
paranoidray
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9x6gt
| false | null |
t3_1l9x6gt
|
/r/LocalLLaMA/comments/1l9x6gt/why_no_one_is_using_mamba_anymore/
| false | false |
self
| 1 | null |
First PC Build for AI & Gaming - Advice Needed
| 1 |
[removed]
| 2025-06-12T20:45:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9xfh7/first_pc_build_for_ai_gaming_advice_needed/
|
Lufi_parrot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9xfh7
| false | null |
t3_1l9xfh7
|
/r/LocalLLaMA/comments/1l9xfh7/first_pc_build_for_ai_gaming_advice_needed/
| false | false |
self
| 1 | null |
Will this LLM setup work on both Linux and Windows?
| 1 |
[removed]
| 2025-06-12T20:54:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9xn41/will_this_llm_setup_work_on_both_linux_and_windows/
|
Highwaytothebeach
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9xn41
| false | null |
t3_1l9xn41
|
/r/LocalLLaMA/comments/1l9xn41/will_this_llm_setup_work_on_both_linux_and_windows/
| false | false |
self
| 1 | null |
Cheapest way to run 32B model?
| 35 |
I'd like to build a home server for my family to use LLMs that we can actually control. I know how to set up a local server and make it run, etc., but I'm having trouble keeping up with all the new hardware coming out.
What's the best bang for the buck for a 32B model right now? I'd rather have a low-power-consumption solution. The way I'd do it is with RTX 3090s, but with all the new NPUs and unified memory and all that, I'm wondering if that's still the best option.
| 2025-06-12T20:55:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9xnt7/cheapest_way_to_run_32b_model/
|
GreenTreeAndBlueSky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9xnt7
| false | null |
t3_1l9xnt7
|
/r/LocalLLaMA/comments/1l9xnt7/cheapest_way_to_run_32b_model/
| false | false |
self
| 35 | null |
Ready Player Own: Building The Box Before Big Tech Does
| 1 |
[removed]
| 2025-06-12T21:09:56 |
https://medium.com/@vanuan/ready-player-own-building-the-box-before-big-tech-does-537e2b879de7
|
Single-Blackberry866
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9y0ua
| false | null |
t3_1l9y0ua
|
/r/LocalLLaMA/comments/1l9y0ua/ready_player_own_building_the_box_before_big_tech/
| false | false |
default
| 1 | null |
Ready Player Own: Building The Box Before Big Tech Does
| 1 | 2025-06-12T21:14:37 |
https://medium.com/@vanuan/ready-player-own-building-the-box-before-big-tech-does-537e2b879de7
|
Single-Blackberry866
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9y4wt
| false | null |
t3_1l9y4wt
|
/r/LocalLLaMA/comments/1l9y4wt/ready_player_own_building_the_box_before_big_tech/
| false | false |
default
| 1 | null |
|
Any known VPS with AMD gpus at "reasonable" prices?
| 1 |
[removed]
| 2025-06-12T21:29:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9yi6m/any_known_vps_with_amd_gpus_at_reasonable_prices/
|
daddyodevil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9yi6m
| false | null |
t3_1l9yi6m
|
/r/LocalLLaMA/comments/1l9yi6m/any_known_vps_with_amd_gpus_at_reasonable_prices/
| false | false |
self
| 1 | null |
Is AMD Ryzen AI Max+ 395 really the only consumer option for running Llama 70B locally?
| 44 |
Researching hardware for Llama 70B and keep hitting the same conclusion. AMD Ryzen AI Max+ 395 in Framework Desktop with 128GB unified memory seems like the only consumer device that can actually run 70B locally.
RTX 4090 maxes at 24GB, Jetson AGX Orin hits 64GB, everything else needs rack servers with cooling and noise. The Framework setup should handle 70B in a quiet desktop form factor for around $3,000.
Is there something I'm missing? Other consumer hardware with enough memory? Anyone running 70B on less memory with extreme tricks? Or is 70B overkill vs 13B/30B for local use?
Reports say it should output 4-8 tokens per second, which seems slow for this price tag.
Are my expectations too high? Any catch with this AMD solution?
| 2025-06-12T21:32:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9yk8v/is_amd_ryzen_ai_max_395_really_the_only_consumer/
|
Single-Blackberry866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9yk8v
| false | null |
t3_1l9yk8v
|
/r/LocalLLaMA/comments/1l9yk8v/is_amd_ryzen_ai_max_395_really_the_only_consumer/
| false | false |
self
| 44 | null |
Moving on from Ollama
| 22 |
I'm on a Mac with 128GB RAM and have been enjoying Ollama. I'm technical and comfortable in the CLI. What is the next step (not closed-source like LM Studio) in order to have more freedom with LLMs?
Should I move to using llama.cpp directly, or what are people using?
Also, what are your fav models atm?
| 2025-06-12T21:51:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9z0su/moving_on_from_ollama/
|
john_alan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9z0su
| false | null |
t3_1l9z0su
|
/r/LocalLLaMA/comments/1l9z0su/moving_on_from_ollama/
| false | false |
self
| 22 | null |
KwaiCoder-AutoThink-preview is a Good Model for Creative Writing! Any Idea about Coding and Math? Your Thoughts?
| 4 |
[https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview](https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview)
Guys, you should try KwaiCoder-AutoThink-preview.
It's an awesome model. I played with it and tested its reasoning and creativity, and I am impressed.
It feels like a system of two models, where one reads the prompt (the Judge) and decides whether to spend tokens on thinking or not. The second model (the Thinker), which could be a fine-tune of QwQ-32B, thinks and outputs the text.
I love its generation in creative writing. Could someone use it for code and tell me how it fares against other 30-40B models?
I am using the Q4\_0 of [https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF](https://huggingface.co/mradermacher/KwaiCoder-AutoThink-preview-GGUF) with RTX3090
https://preview.redd.it/pwlal86khk6f1.png?width=891&format=png&auto=webp&s=355689b9c3d1aa928cd20301b51344cd5f3acc2e
For some reason, it uses Llama-2 chat format. So, if you are using LM Studio, make sure to use it.
https://preview.redd.it/xy0zzl9lhk6f1.png?width=448&format=png&auto=webp&s=c9adc917f5e647466677d593ac737ca6c801169e
| 2025-06-12T21:53:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9z1ts/kwaicoderautothinkpreview_is_a_good_model_for/
|
Iory1998
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9z1ts
| false | null |
t3_1l9z1ts
|
/r/LocalLLaMA/comments/1l9z1ts/kwaicoderautothinkpreview_is_a_good_model_for/
| false | false | 4 |
{'enabled': False, 'images': [{'id': 'eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=108&crop=smart&auto=webp&s=8675a2ccd80f59585366e3c76f089100d181f4cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=216&crop=smart&auto=webp&s=61faa5e6cf9222ce60023fb2c43edc49a4e332fc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=320&crop=smart&auto=webp&s=65d06827a36ccd11044dd529fcc0da4a9716f9f4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=640&crop=smart&auto=webp&s=a71a8affc3690f4e2332b1c4a539f6b466d73b43', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=960&crop=smart&auto=webp&s=c41f28f5c27772609efbebdcf101c7d3a7af8070', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=1080&crop=smart&auto=webp&s=e8c1a18061db391e249976f5b1406ab92b479c4b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?auto=webp&s=f6fe09a92c2cb4fa8e803ba6b5d829714289c9c9', 'width': 1200}, 'variants': {}}]}
|
|
New Agent Creator Tutorial!! with Observer AI 🚀
| 1 |
Hey guys! First of all, I wanted to thank you all for the amazing support this community has given me. I added a lot of features to Observer AI:
\* AI Agent Builder
\* Template Agent Builder
\* SMS message notifications
\* Camera input
\* Microphone input (still needs work)
\* WhatsApp message notification (rolled back but coming soon! Still needs work; got my Meta account flagged for spam hahaha)
\* Computer audio transcription (beta, coming soon!)
Please check it out at [app.observer-ai.com](http://app.observer-ai.com). The project is 100% open source, and you can run it locally! (inference with Ollama and the web app) [github.com/Roy3838/Observer](http://github.com/Roy3838/Observer)
Thanks so much for the support you've given me. I'm really proud to show you this version!
If you have any feedback or questions please reach out!
| 2025-06-12T21:53:49 |
https://v.redd.it/x7fwtqxehk6f1
|
Roy3838
|
/r/LocalLLaMA/comments/1l9z2i6/new_agent_creator_tutorial_with_observer_ai/
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9z2i6
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x7fwtqxehk6f1/DASHPlaylist.mpd?a=1752486835%2CMTE1MWU5NjU4NGQxMjA2ZmQ0YjM2ZjA5YjU0MjQ0NzI1NTMyNTNiNDA3NjhmNzhlZjBmZDI1OWI1Njg1YTI5NQ%3D%3D&v=1&f=sd', 'duration': 151, 'fallback_url': 'https://v.redd.it/x7fwtqxehk6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/x7fwtqxehk6f1/HLSPlaylist.m3u8?a=1752486835%2CZjkxY2U0ZWI0NGM4YmZkYjRkZmEyOWM0MjJhYWNkYWQwYzVlMjU0ZGZiMzE3MjhlMjdhZDU3NjE1Y2U5NzU2YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x7fwtqxehk6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1l9z2i6
|
/r/LocalLLaMA/comments/1l9z2i6/new_agent_creator_tutorial_with_observer_ai/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=108&crop=smart&format=pjpg&auto=webp&s=07e1b9e7c467ee5655f1f0b5325b7398863e83f6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=216&crop=smart&format=pjpg&auto=webp&s=7b421aced91bfce3e2bc683ebdc7812db9efd1c1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=320&crop=smart&format=pjpg&auto=webp&s=f37cc3c602f77c5932ab8cbe03f69e9dfdad4b24', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=640&crop=smart&format=pjpg&auto=webp&s=0e2c858e77d69eb87adf9430fe8bbc73c7e7a5bb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=960&crop=smart&format=pjpg&auto=webp&s=7fac5f73710d8ad0477ae366b83f01add88e05cc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4eaf4dd2899387e482f6162eaed9da1cb5d53b25', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?format=pjpg&auto=webp&s=d7f1130178ea019438a9f56310dff043c9bed4a8', 'width': 1920}, 'variants': {}}]}
|
|
Run Perchance style RPG locally?
| 3 |
I like the clean UI and ease of use of Perchance's RPG story. It's also pretty good at creativity. Is it reasonably feasible to run something similar locally?
| 2025-06-12T22:06:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9zddh/run_perchance_style_rpg_locally/
|
BenefitOfTheDoubt_01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9zddh
| false | null |
t3_1l9zddh
|
/r/LocalLLaMA/comments/1l9zddh/run_perchance_style_rpg_locally/
| false | false |
self
| 3 | null |
Conversational Avatars
| 1 |
Hello all,
Does anybody know a tool or a workflow that could help me build a video avatar for a conversation bot? I figure some combination of existing tools makes this possible— I have the workflow built except for the video. Any recos? Thanks 🙏🏼
| 2025-06-12T22:27:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9zuzs/conversational_avatars/
|
JoshuaLandy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9zuzs
| false | null |
t3_1l9zuzs
|
/r/LocalLLaMA/comments/1l9zuzs/conversational_avatars/
| false | false |
self
| 1 | null |
What's your Local Vision Model Rankings and local Benchmarks for them?
| 3 |
It's obvious where the text2text models stand in terms of ranking. We all know for example that deepseek-r1-0528 > deepseek-v3-0324 \~ Qwen3-235B > llama3.3-70b \~ gemma-3-27b > mistral-small-24b
We also have all the home grown "evals" that we throw at these models: bouncing ball in a heptagon, move the ball in a cup, cross the river, flappybird, etc.
However, the ranking of the image+text 2 text models isn't as clear, and there are no "standard home grown benchmarks" for them.
So for those playing with these, how do you rank them, and if you have prompts you use to benchmark, care to share? You don't need to share the image, but you can describe it.
| 2025-06-12T22:32:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1l9zym8/whats_your_local_vision_model_rankings_and_local/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1l9zym8
| false | null |
t3_1l9zym8
|
/r/LocalLLaMA/comments/1l9zym8/whats_your_local_vision_model_rankings_and_local/
| false | false |
self
| 3 | null |
queen 3 30b a3b experience and questions
| 1 |
[removed]
| 2025-06-12T22:52:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1la0f80/queen_3_30b_a3b_experience_and_questions/
|
Axotic69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la0f80
| false | null |
t3_1la0f80
|
/r/LocalLLaMA/comments/1la0f80/queen_3_30b_a3b_experience_and_questions/
| false | false |
self
| 1 | null |
What are peoples experience with old dual Xeon servers?
| 3 |
I recently found a used system for sale for a bit under 1000 bucks:
Dell Server R540 Xeon Dual 4110 256GB RAM 20TB
2x Intel Xeon 4110
256GB Ram
5x 4TB HDD
RAID Controller
1x 10GBE SFP+
2x 1GBE RJ45
IDRAC
2 PSUs for redundancy
100W idle / 170W under load
Here are my theoretical performance calculations:
DDR4-2400 = 19.2 GB/s per channel
→ 6 channels × 19.2 GB/s = 115.2 GB/s per CPU
→ 2 CPUs = 230.4 GB/s total (theoretical maximum bandwidth)
At least in theory you could put q8 qwen 235b on it with 22b active parameters. Though q6 would make more sense for larger context.
22B at q8 ≈ 22 GB → 230/22 ≈ 10.4 tokens/s
22B at q6 ≈ 22B × 0.75 bytes = 16.5 GB → 230/16.5 ≈ 14 tokens/s
I know those numbers are unrealistic and honestly expect around 2/3 of that performance in real life, but I would like to know if someone has firsthand experience they could share.
In addition Qwen seems to work quite well with speculative decoding and I generally get a 10-25% performance increase depending on the prompts when using the 32b model with a 0.5b draft model. Does anyone have experience using speculative decoding on these much larger moe models?
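The bandwidth arithmetic above can be sketched in a few lines (a rough, hypothetical back-of-envelope model; real dual-socket systems lose throughput to NUMA effects and rarely reach the theoretical ceiling):

```python
# Back-of-envelope decode speed for a memory-bandwidth-bound MoE model
# on a dual-socket DDR4-2400 system (assumptions mirror the post above).

def bandwidth_gbs(channels_per_cpu=6, sockets=2, mts=2400, bus_bytes=8):
    """Peak theoretical DRAM bandwidth in GB/s (decimal)."""
    per_channel = mts * 1e6 * bus_bytes / 1e9  # 19.2 GB/s per DDR4-2400 channel
    return per_channel * channels_per_cpu * sockets

def tokens_per_s(active_params_billion, bytes_per_param, bw_gbs):
    """Each decoded token streams all active expert weights once."""
    weights_gb = active_params_billion * bytes_per_param
    return bw_gbs / weights_gb

bw = bandwidth_gbs()                         # 230.4 GB/s aggregate
print(round(tokens_per_s(22, 1.0, bw), 1))   # q8: ~10.5 tokens/s
print(round(tokens_per_s(22, 0.75, bw), 1))  # q6: ~14.0 tokens/s
```

Under these assumptions the script reproduces the post's figures: roughly 230 GB/s aggregate bandwidth and 10-14 tokens/s depending on quantization, before any real-world losses.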
| 2025-06-12T23:13:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1la0vz8/what_are_peoples_experience_with_old_dual_xeon/
|
Eden1506
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la0vz8
| false | null |
t3_1la0vz8
|
/r/LocalLLaMA/comments/1la0vz8/what_are_peoples_experience_with_old_dual_xeon/
| false | false |
self
| 3 | null |
llama.cpp adds support to two new quantization format, tq1_0 and tq2_0
| 96 |
which can be found in tools/convert\_hf\_to\_gguf.py on GitHub.
tq means ternary quantization. What is this exactly? Is it meant for consumer devices?
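For context, ternary quantization in the BitNet b1.58 style restricts each weight to {-1, 0, +1} times a per-block scale, which is what makes extreme compression possible. A toy sketch of absmean ternary quantization (illustrative only, not llama.cpp's actual tq1\_0/tq2\_0 packing):

```python
import numpy as np

def ternary_quantize(w, block=32):
    """Absmean ternary quantizer (BitNet b1.58 style, toy version):
    each block keeps one float scale plus weights in {-1, 0, +1}."""
    w = w.reshape(-1, block)
    scale = np.abs(w).mean(axis=1, keepdims=True) + 1e-8  # per-block absmean
    q = np.clip(np.round(w / scale), -1, 1)               # ternary codes
    return q, scale

def ternary_dequantize(q, scale):
    return q * scale

w = np.random.randn(128).astype(np.float32)
q, s = ternary_quantize(w)
print(sorted(np.unique(q)))  # values drawn from {-1.0, 0.0, 1.0}
```

The real formats then bit-pack those ternary codes far more densely than one byte each (on the order of two bits per weight or less), which is why they target very small memory footprints.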
| 2025-06-12T23:59:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1la1v4d/llamacpp_adds_support_to_two_new_quantization/
|
Remarkable-Pea645
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la1v4d
| false | null |
t3_1la1v4d
|
/r/LocalLLaMA/comments/1la1v4d/llamacpp_adds_support_to_two_new_quantization/
| false | false |
self
| 96 | null |
Best local LLM with strong instruction following for custom scripting language
| 3 |
I have a scripting language that I use that is “C-like”, but definitely not C. I've prompted 4o to successfully write code and now I want to run it locally.
What’s the best local LLM that would be close to 4o with instruction following that I could run on 96GB of GPU RAM (2xA6000 Ada).
Thanks!
| 2025-06-13T00:26:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1la2g1r/best_local_llm_with_strong_instruction_following/
|
Ill_Recipe7620
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la2g1r
| false | null |
t3_1la2g1r
|
/r/LocalLLaMA/comments/1la2g1r/best_local_llm_with_strong_instruction_following/
| false | false |
self
| 3 | null |
Claude Code but using local Gemma3
| 1 |
[removed]
| 2025-06-13T00:55:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1la31hu/claude_code_but_using_local_gemma3/
|
Willing-Policy6027
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la31hu
| false | null |
t3_1la31hu
|
/r/LocalLLaMA/comments/1la31hu/claude_code_but_using_local_gemma3/
| false | false |
self
| 1 | null |
Does unsuppervised generative AI model exist?
| 1 |
[removed]
| 2025-06-13T01:13:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1la3ede/does_unsuppervised_generative_ai_model_exist/
|
Exotic-Media5762
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3ede
| false | null |
t3_1la3ede
|
/r/LocalLLaMA/comments/1la3ede/does_unsuppervised_generative_ai_model_exist/
| false | false |
self
| 1 | null |
3.53bit Deepseek R1 0528 scores 68% which is in-between Sonnet 3.7 and Opus 4
| 1 |
[removed]
| 2025-06-13T01:29:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1la3q5q/353bit_deepseek_r1_0528_scores_68_which_is/
|
BumblebeeOk3281
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3q5q
| false | null |
t3_1la3q5q
|
/r/LocalLLaMA/comments/1la3q5q/353bit_deepseek_r1_0528_scores_68_which_is/
| false | false |
self
| 1 | null |
3.53bit Deepseek R1 0528 scores 68% on Aider Polygot
| 1 |
[removed]
| 2025-06-13T01:33:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1la3t1l/353bit_deepseek_r1_0528_scores_68_on_aider_polygot/
|
BumblebeeOk3281
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3t1l
| false | null |
t3_1la3t1l
|
/r/LocalLLaMA/comments/1la3t1l/353bit_deepseek_r1_0528_scores_68_on_aider_polygot/
| false | false |
self
| 1 | null |
3.53bit R1 0528 scores 68% on the Aider Polygot
| 67 |
3.53bit R1 0528 scores 68% on the Aider Polyglot benchmark
ram/vram required: 300GB
context size used: 40960 with flash attention
─────────────────────────────
dirname: 2025-06-11-04-03-18--unsloth-DeepSeek-R1-0528-GGUF-UD-Q3\_K\_XL
test\_cases: 225
model: openai/unsloth/DeepSeek-R1-0528-GGUF/UD-Q3\_K\_XL
edit\_format: diff
commit\_hash: 4c161f9-dirty
pass\_rate\_1: 32.9
pass\_rate\_2: 68.0
pass\_num\_1: 74
pass\_num\_2: 153
percent\_cases\_well\_formed: 96.4
error\_outputs: 15
num\_malformed\_responses: 15
num\_with\_malformed\_responses: 8
user\_asks: 72
lazy\_comments: 0
syntax\_errors: 0
indentation\_errors: 0
exhausted\_context\_windows: 0
prompt\_tokens: 2596907
completion\_tokens: 2297409
test\_timeouts: 2
total\_tests: 225
command: aider --model openai/unsloth/DeepSeek-R1-0528-GGUF/UD-Q3\_K\_XL
date: 2025-06-11
versions: [0.84.1.dev](http://0.84.1.dev)
seconds\_per\_case: 485.7
total\_cost: 0.0000
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
| 2025-06-13T01:36:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1la3uvz/353bit_r1_0528_scores_68_on_the_aider_polygot/
|
BumblebeeOk3281
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3uvz
| false | null |
t3_1la3uvz
|
/r/LocalLLaMA/comments/1la3uvz/353bit_r1_0528_scores_68_on_the_aider_polygot/
| true | false |
spoiler
| 67 | null |
Anyone tried using Pytorch/Huggingface models on new AMD mini pcs?
| 1 |
[removed]
| 2025-06-13T01:37:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1la3v98/anyone_tried_using_pytorchhuggingface_models_on/
|
daddyodevil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3v98
| false | null |
t3_1la3v98
|
/r/LocalLLaMA/comments/1la3v98/anyone_tried_using_pytorchhuggingface_models_on/
| false | false |
self
| 1 | null |
Happy Birthday Transformers!
| 63 | 2025-06-13T01:40:38 |
https://x.com/sksq96/status/1933335774100857090?s=46
|
sksq9
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1la3xni
| false | null |
t3_1la3xni
|
/r/LocalLLaMA/comments/1la3xni/happy_birthday_transformers/
| false | false |
default
| 63 |
{'enabled': False, 'images': [{'id': '5Cf5k8QXUtETtGaXKCuEtMDf2YvEV3aM9UbCLsKrRB4', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?width=108&crop=smart&auto=webp&s=edaec840bfecebd0cb214231a4edf870dae79563', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?width=216&crop=smart&auto=webp&s=38b6a70d40672a420b51281c190a6ab3a5d3a3b8', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?width=320&crop=smart&auto=webp&s=c56268b4479bffdf3f386699c3e3d89aec52a8cb', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?width=640&crop=smart&auto=webp&s=b9dc544bc2b53c9634b6322ef6ef5b0018f8e039', 'width': 640}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?auto=webp&s=637046b5e04657fc85152992d7c4834c77a2f187', 'width': 945}, 'variants': {}}]}
|