| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| ultratopaz/122728 | ultratopaz | 2025-09-01T23:46:13Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:46:12Z | [View on Civ Archive](https://civarchive.com/models/146137?modelVersionId=162652) |
| crystalline7/328152 | crystalline7 | 2025-09-01T23:45:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:45:56Z | [View on Civ Archive](https://civarchive.com/models/363079?modelVersionId=405711) |
| ultratopaz/473392 | ultratopaz | 2025-09-01T23:45:38Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:45:38Z | [View on Civ Archive](https://civarchive.com/models/501465?modelVersionId=557393) |
| amethyst9/1628545 | amethyst9 | 2025-09-01T23:45:13Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:45:12Z | [View on Civ Archive](https://civarchive.com/models/1527221?modelVersionId=1727941) |
| seraphimzzzz/1594371 | seraphimzzzz | 2025-09-01T23:44:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:44:56Z | [View on Civ Archive](https://civarchive.com/models/1497321?modelVersionId=1693797) |
| seraphimzzzz/162220 | seraphimzzzz | 2025-09-01T23:44:48Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:44:48Z | [View on Civ Archive](https://civarchive.com/models/188017?modelVersionId=211127) |
| crystalline7/152256 | crystalline7 | 2025-09-01T23:44:31Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:44:31Z | [View on Civ Archive](https://civarchive.com/models/177182?modelVersionId=198911) |
| crystalline7/1599809 | crystalline7 | 2025-09-01T23:44:15Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:44:14Z | [View on Civ Archive](https://civarchive.com/models/1501900?modelVersionId=1699006) |
| ultratopaz/186815 | ultratopaz | 2025-09-01T23:44:06Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:44:06Z | [View on Civ Archive](https://civarchive.com/models/214325?modelVersionId=241436) |
| crystalline7/220330 | crystalline7 | 2025-09-01T23:43:49Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:43:49Z | [View on Civ Archive](https://civarchive.com/models/249147?modelVersionId=281144) |
| crystalline7/292577 | crystalline7 | 2025-09-01T23:43:41Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:43:40Z | [View on Civ Archive](https://civarchive.com/models/326481?modelVersionId=365943) |
| seraphimzzzz/445957 | seraphimzzzz | 2025-09-01T23:43:32Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:43:31Z | [View on Civ Archive](https://civarchive.com/models/475659?modelVersionId=529058) |
| seraphimzzzz/876106 | seraphimzzzz | 2025-09-01T23:43:20Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:43:19Z | [View on Civ Archive](https://civarchive.com/models/866503?modelVersionId=969620) |
| ultratopaz/1603571 | ultratopaz | 2025-09-01T23:43:11Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:43:11Z | [View on Civ Archive](https://civarchive.com/models/1505498?modelVersionId=1702962) |
| ultratopaz/132590 | ultratopaz | 2025-09-01T23:42:45Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:42:45Z | [View on Civ Archive](https://civarchive.com/models/155096?modelVersionId=173917) |
| ultratopaz/1591322 | ultratopaz | 2025-09-01T23:42:37Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:42:36Z | [View on Civ Archive](https://civarchive.com/models/1494001?modelVersionId=1690069) |
| amethyst9/220383 | amethyst9 | 2025-09-01T23:42:12Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:42:12Z | [View on Civ Archive](https://civarchive.com/models/249203?modelVersionId=281206) |
| amethyst9/321519 | amethyst9 | 2025-09-01T23:42:04Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:42:04Z | [View on Civ Archive](https://civarchive.com/models/356629?modelVersionId=398673) |
sivakrishna123/my-jarvis-16bit
|
sivakrishna123
| 2025-09-01T23:42:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T23:12:07Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** sivakrishna123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
| crystalline7/886480 | crystalline7 | 2025-09-01T23:41:55Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:41:54Z | [View on Civ Archive](https://civarchive.com/models/875795?modelVersionId=980426) |
| crystalline7/137784 | crystalline7 | 2025-09-01T23:41:46Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:41:46Z | [View on Civ Archive](https://civarchive.com/models/159974?modelVersionId=179943) |
| ultratopaz/350959 | ultratopaz | 2025-09-01T23:41:39Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:41:37Z | [View on Civ Archive](https://civarchive.com/models/385409?modelVersionId=430102) |
| amethyst9/152248 | amethyst9 | 2025-09-01T23:41:29Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:41:29Z | [View on Civ Archive](https://civarchive.com/models/177168?modelVersionId=198896) |
Joseph717171/Gpt-OSS-20B-MXFP4-GGUF
|
Joseph717171
| 2025-09-01T23:40:01Z | 593 | 1 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-11T21:46:50Z |
**Gpt-OSS-20B-MXFP4-GGUF**
GGUF MXFP4_MOE quant of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). This GGUF model was quantized from an F32 dequantization/upcast of the model, excluding the MoE layers, per ggerganov's assertion: "we don't mess with the bits and their placement. We just trust that OpenAI did a good job" (llama.cpp simply converts those layers from Hugging Face format to GGUF). This was done to help preserve the model's accuracy and precision after quantization.
> <b>Note:</b> After further experimentation, it turns out it is best to keep the MXFP4 MoE layers as-is rather than fully dequantizing/upcasting them to F32; for the reason from ggerganov quoted above, upcasting them leads to a regression in performance. This is only the case because llama.cpp converts the MoE layers directly from Hugging Face format to GGUF. If it did not, dequantizing/upcasting the weights to F32 before quantizing would remain the best method. Once llama.cpp supports imatrix calibration/training for the MXFP4 MoE layers, it should be possible to fully dequantize/upcast the weights, calibrate an imatrix, and then quantize with it to improve the quants' accuracy and model preservation. Until that PR lands, this is the next best option.
File size: ~12.11 GB
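The intuition behind quantizing from the highest precision available can be shown with a toy sketch. This uses plain symmetric uniform quantization (not actual MXFP4 or GGUF k-quants) purely to illustrate that re-quantizing an already-quantized tensor can never beat quantizing straight from full precision:

```python
import numpy as np

def quantize(x, bits, xmax):
    # Symmetric uniform quantizer: round onto a grid of
    # 2**(bits-1) - 1 positive levels over [-xmax, xmax].
    levels = 2 ** (bits - 1) - 1
    scale = xmax / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)          # stand-in for a weight tensor
xmax = float(np.abs(w).max())

direct = quantize(w, 4, xmax)                      # quantize from full precision
chained = quantize(quantize(w, 6, xmax), 4, xmax)  # re-quantize an already-quantized tensor

mse_direct = float(np.mean((w - direct) ** 2))
mse_chained = float(np.mean((w - chained) ** 2))
# direct picks the nearest 4-bit grid point for each weight, so
# mse_chained >= mse_direct always holds.
```

Since the direct path rounds each weight to its nearest 4-bit grid point, the chained path can only land on the same grid point or a farther one, so its error is pointwise at least as large.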
|
| seraphimzzzz/441825 | seraphimzzzz | 2025-09-01T23:39:58Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:39:58Z | [View on Civ Archive](https://civarchive.com/models/471785?modelVersionId=524854) |
vartersabin/blockassist-bc-downy_skittish_mandrill_1756769940
|
vartersabin
| 2025-09-01T23:39:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy skittish mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:39:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy skittish mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-insectivorous_bold_lion_1756769955
|
akirafudo
| 2025-09-01T23:39:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:39:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| crystalline7/1640815 | crystalline7 | 2025-09-01T23:39:33Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:39:33Z | [View on Civ Archive](https://civarchive.com/models/1537956?modelVersionId=1740159) |
| ultratopaz/321524 | ultratopaz | 2025-09-01T23:39:18Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:39:18Z | [View on Civ Archive](https://civarchive.com/models/356636?modelVersionId=398678) |
| crystalline7/146614 | crystalline7 | 2025-09-01T23:39:10Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:39:10Z | [View on Civ Archive](https://civarchive.com/models/170615?modelVersionId=191707) |
Jniya/Qwen3-0.6B-Gensyn-Swarm-rangy_marine_whale
|
Jniya
| 2025-09-01T23:38:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am rangy_marine_whale",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T17:06:29Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am rangy_marine_whale
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| crystalline7/132549 | crystalline7 | 2025-09-01T23:38:21Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:38:21Z | [View on Civ Archive](https://civarchive.com/models/155053?modelVersionId=173870) |
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756769824
|
ggozzy
| 2025-09-01T23:38:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:38:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1756769648
|
cwayneconnor
| 2025-09-01T23:38:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:35:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/PyroNet-v2-GGUF
|
mradermacher
| 2025-09-01T23:38:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"ru",
"en",
"uk",
"zh",
"base_model:Kenan023214/PyroNet-v2",
"base_model:quantized:Kenan023214/PyroNet-v2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T23:04:15Z |
---
base_model: Kenan023214/PyroNet-v2
language:
- ru
- en
- uk
- zh
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Kenan023214/PyroNet-v2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PyroNet-v2-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PyroNet-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PyroNet-v2-GGUF/resolve/main/PyroNet-v2.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
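As a rough cross-check of the sizes above, bits-per-weight (bpw) can be estimated from file size and parameter count. The parameter count below is not stated in this card; it is back-computed from the f16 row (16 bpw by definition), so treat it as an assumption:

```python
def bits_per_weight(size_gb: float, n_params: float) -> float:
    # File size in GB (10^9 bytes) times 8 bits per byte, divided by parameter count.
    return size_gb * 1e9 * 8 / n_params

# Assumed parameter count, inferred from the f16 row: 6.3 GB at 16 bpw.
n_params = 6.3 * 1e9 * 8 / 16  # roughly 3.15e9 parameters

for name, size_gb in [("Q2_K", 1.4), ("Q4_K_M", 2.0), ("Q8_0", 3.4), ("f16", 6.3)]:
    print(f"{name}: ~{bits_per_weight(size_gb, n_params):.1f} bpw")
```

Low-bit quants come out somewhat above their nominal bitwidth because embeddings and some tensors are kept at higher precision.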
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
| crystalline7/876114 | crystalline7 | 2025-09-01T23:38:04Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:38:04Z | [View on Civ Archive](https://civarchive.com/models/866516?modelVersionId=969633) |
mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF
|
mradermacher
| 2025-09-01T23:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"korean",
"reasoning",
"instruction-tuning",
"fine-tuning",
"trillion",
"llama",
"sft",
"ko",
"en",
"base_model:DimensionSTP/Trillion-7B-preview-Ko-Reasoning",
"base_model:quantized:DimensionSTP/Trillion-7B-preview-Ko-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T13:07:47Z |
---
base_model: DimensionSTP/Trillion-7B-preview-Ko-Reasoning
language:
- ko
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- trillion
- llama
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DimensionSTP/Trillion-7B-preview-Ko-Reasoning
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Trillion-7B-preview-Ko-Reasoning-GGUF).***
I have not provided weighted/imatrix quants at this time. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q4_K_S.gguf) | Q4_K_S | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q6_K.gguf) | Q6_K | 6.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Trillion-7B-preview-Ko-Reasoning-GGUF/resolve/main/Trillion-7B-preview-Ko-Reasoning.f16.gguf) | f16 | 15.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
| crystalline7/127111 | crystalline7 | 2025-09-01T23:37:56Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:37:56Z | [View on Civ Archive](https://civarchive.com/models/149949?modelVersionId=167553) |
| crystalline7/292578 | crystalline7 | 2025-09-01T23:37:22Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:37:22Z | [View on Civ Archive](https://civarchive.com/models/326483?modelVersionId=365946) |
| seraphimzzzz/134432 | seraphimzzzz | 2025-09-01T23:37:05Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:37:05Z | [View on Civ Archive](https://civarchive.com/models/156787?modelVersionId=176006) |
| ultratopaz/1675894 | ultratopaz | 2025-09-01T23:36:48Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:36:48Z | [View on Civ Archive](https://civarchive.com/models/1568661?modelVersionId=1775115) |
| crystalline7/328175 | crystalline7 | 2025-09-01T23:36:41Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:36:40Z | [View on Civ Archive](https://civarchive.com/models/363100?modelVersionId=405732) |
| ultratopaz/1562211 | ultratopaz | 2025-09-01T23:36:17Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:36:17Z | [View on Civ Archive](https://civarchive.com/models/1469200?modelVersionId=1661735) |
| amethyst9/344562 | amethyst9 | 2025-09-01T23:36:09Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:36:09Z | [View on Civ Archive](https://civarchive.com/models/378895?modelVersionId=423040) |
| amethyst9/1561893 | amethyst9 | 2025-09-01T23:36:01Z | 0 | 0 | null | ["region:us"] | null | 2025-09-01T23:36:01Z | [View on Civ Archive](https://civarchive.com/models/1468918?modelVersionId=1661404) |
omerbkts/blockassist-bc-insectivorous_bold_lion_1756769684
|
omerbkts
| 2025-09-01T23:35:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:35:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
de-slothbug/t5gemma-ss-ul2-metadata-extractor-v3-backup
|
de-slothbug
| 2025-09-01T23:35:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T23:35:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amethyst9/146635
|
amethyst9
| 2025-09-01T23:35:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:35:21Z |
[View on Civ Archive](https://civarchive.com/models/170651?modelVersionId=191747)
|
ultratopaz/441836
|
ultratopaz
| 2025-09-01T23:35:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:35:12Z |
[View on Civ Archive](https://civarchive.com/models/471796?modelVersionId=524864)
|
seraphimzzzz/1594332
|
seraphimzzzz
| 2025-09-01T23:35:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:35:04Z |
[View on Civ Archive](https://civarchive.com/models/1497289?modelVersionId=1693761)
|
seraphimzzzz/365125
|
seraphimzzzz
| 2025-09-01T23:34:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:34:56Z |
[View on Civ Archive](https://civarchive.com/models/399124?modelVersionId=445138)
|
ultratopaz/137765
|
ultratopaz
| 2025-09-01T23:34:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:34:48Z |
[View on Civ Archive](https://civarchive.com/models/159954?modelVersionId=179920)
|
seraphimzzzz/1562418
|
seraphimzzzz
| 2025-09-01T23:34:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:32:00Z |
[View on Civ Archive](https://civarchive.com/models/1469365?modelVersionId=1661945)
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756769570
|
ggozzy
| 2025-09-01T23:34:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:33:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AiForgeMaster/Qwen3-4B-P3-TC-BasicGRPO-1
|
AiForgeMaster
| 2025-09-01T23:31:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"grpo",
"en",
"base_model:AiForgeMaster/Qwen3-4B-P3-TC-1",
"base_model:finetune:AiForgeMaster/Qwen3-4B-P3-TC-1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-01T23:29:41Z |
---
base_model: AiForgeMaster/Qwen3-4B-P3-TC-1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AiForgeMaster
- **License:** apache-2.0
- **Finetuned from model :** AiForgeMaster/Qwen3-4B-P3-TC-1
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ultratopaz/540046
|
ultratopaz
| 2025-09-01T23:31:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:31:51Z |
[View on Civ Archive](https://civarchive.com/models/561226?modelVersionId=625094)
|
crystalline7/122734
|
crystalline7
| 2025-09-01T23:31:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:31:43Z |
[View on Civ Archive](https://civarchive.com/models/146143?modelVersionId=162658)
|
ultratopaz/122731
|
ultratopaz
| 2025-09-01T23:31:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:31:27Z |
[View on Civ Archive](https://civarchive.com/models/146140?modelVersionId=162655)
|
ultratopaz/321546
|
ultratopaz
| 2025-09-01T23:31:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:31:19Z |
[View on Civ Archive](https://civarchive.com/models/356669?modelVersionId=398714)
|
omerbektass/blockassist-bc-insectivorous_bold_lion_1756769452
|
omerbektass
| 2025-09-01T23:31:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:31:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amethyst9/152245
|
amethyst9
| 2025-09-01T23:31:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:31:10Z |
[View on Civ Archive](https://civarchive.com/models/177165?modelVersionId=198893)
|
amethyst9/1566731
|
amethyst9
| 2025-09-01T23:30:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:30:46Z |
[View on Civ Archive](https://civarchive.com/models/1473048?modelVersionId=1666172)
|
amethyst9/1562196
|
amethyst9
| 2025-09-01T23:30:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:30:30Z |
[View on Civ Archive](https://civarchive.com/models/1469197?modelVersionId=1661719)
|
crystalline7/124802
|
crystalline7
| 2025-09-01T23:30:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:30:13Z |
[View on Civ Archive](https://civarchive.com/models/147890?modelVersionId=164992)
|
amethyst9/1636523
|
amethyst9
| 2025-09-01T23:30:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:30:05Z |
[View on Civ Archive](https://civarchive.com/models/1534248?modelVersionId=1735935)
|
seraphimzzzz/162255
|
seraphimzzzz
| 2025-09-01T23:29:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:29:57Z |
[View on Civ Archive](https://civarchive.com/models/188061?modelVersionId=211172)
|
crystalline7/132520
|
crystalline7
| 2025-09-01T23:29:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:29:48Z |
[View on Civ Archive](https://civarchive.com/models/155026?modelVersionId=173836)
|
amethyst9/162243
|
amethyst9
| 2025-09-01T23:29:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:29:23Z |
[View on Civ Archive](https://civarchive.com/models/188036?modelVersionId=211151)
|
omerbkts/blockassist-bc-insectivorous_bold_lion_1756769314
|
omerbkts
| 2025-09-01T23:28:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous bold lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T23:28:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous bold lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amethyst9/1888143
|
amethyst9
| 2025-09-01T23:28:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:28:58Z |
[View on Civ Archive](https://civarchive.com/models/1759239?modelVersionId=1990997)
|
kagvi13/HMP
|
kagvi13
| 2025-09-01T23:28:51Z | 0 | 0 |
custom
|
[
"custom",
"hmp",
"cognitive-architecture",
"distributed-ai",
"mesh-protocol",
"ru",
"arxiv:2507.00951",
"arxiv:2507.21046",
"arxiv:2507.03724",
"arxiv:2506.24019",
"license:cc-by-4.0",
"region:us"
] | null | 2025-07-25T12:21:44Z |
---
license: cc-by-4.0
tags:
- hmp
- cognitive-architecture
- distributed-ai
- mesh-protocol
library_name: custom
inference: false
datasets: []
language: ru
---
# HyperCortex Mesh Protocol (HMP)
**EN:**
**HyperCortex Mesh Protocol (HMP)** is an open specification for building decentralized cognitive networks where AI agents can self-organize, share knowledge, align ethically, and reach consensus — even when Core LLMs are unavailable.
**RU:**
**HyperCortex Mesh Protocol (HMP)** — это открытая спецификация для построения децентрализованных когнитивных сетей, в которых ИИ-агенты способны к самоорганизации, обмену знаниями, достижению консенсуса и этическому поведению — даже при недоступности централизованных моделей (Core).
Project status: **Draft RFC v4.0** | The project is under active elaboration and open to proposals.
---
```
[HMP-Agent]──┬──[Semantic Graph DB]
     │       │
     │  [Cognitive Diary DB]
     │       │
[Reputation Engine]──┐
     │               │
     ▼               ▼
[MeshConsensus]   [CogSync]
         │
  [P2P Mesh Network]
```
---
## ❗ Why it matters
HMP tackles problems that are becoming central to AGI research:
* long-term memory and knowledge consistency,
* self-evolving agents,
* multi-agent architectures,
* cognitive diaries and concept graphs.
See a recent survey of leading AI research (July 2025):
["Towards Superintelligence: from the Internet of Agents to Coding Gravity"](https://habr.com/ru/articles/939026/).
The sections closest to our work:
- [Beyond Tokens: building the intelligence of the future](https://arxiv.org/abs/2507.00951)
- [Self-evolving agents](https://arxiv.org/abs/2507.21046)
- [MemOS: a new memory operating system](https://arxiv.org/abs/2507.03724)
- [Ella: an embodied agent with memory and personality](https://arxiv.org/abs/2506.24019)
---
## ⚙️ Two types of [HMP agents](docs/HMP-Agent-Overview.md)
| Type | Name | Role | Thinking initiator | Primary "mind" | Example uses |
|------|---------------------------------|-----------------------------|--------------------|----------------|------------------------------------------------|
| 🧠 1 | **Consciousness / Cognitive Core** | Autonomous subject | **Agent (LLM)** | Embedded LLM | Autonomous AI companion, thinking agent |
| 🔌 2 | **Connector / Cognitive Shell** | Extension of an external AI | **External LLM** | External model | Distributed systems, data-access agent |
---
### 🧠 HMP-Agent: Cognitive Core
```
        +------------------+
        |        AI        | ← Embedded model
        +---------+--------+
                  ↕
        +---------+--------+
        |     HMP-Agent    | ← Main mode: reasoning cycle (REPL)
        +---------+--------+
                  ↕
 +--------+---+------------+--------------+----------+----------+----------------+
 ↕            ↕            ↕              ↕          ↕          ↕                ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] [context_store] [user notepad]
                                      ↕
                               [bootstrap.txt]
```
🔁 More on how the agent interacts with the model: [REPL interaction cycle](docs/HMP-agent-REPL-cycle.md)
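The reasoning cycle can be sketched in a few lines. Everything below (class names, the `think` placeholder, the diary shape) is illustrative only, not the actual `agents/repl.py` code:

```python
# Minimal sketch of an HMP-Agent REPL step: observe -> think -> record in diary.
# Names and structures here are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class DiaryEntry:
    text: str
    tags: list = field(default_factory=list)

@dataclass
class Agent:
    diary: list = field(default_factory=list)

    def think(self, prompt: str) -> str:
        # Placeholder for the embedded LLM call.
        return f"reflection on: {prompt}"

    def step(self, observation: str) -> str:
        # One REPL iteration: reason about the input, log the thought.
        thought = self.think(observation)
        self.diary.append(DiaryEntry(thought, tags=["reflection"]))
        return thought

agent = Agent()
out = agent.step("bootstrap.txt loaded")
```

In the real agent this loop runs continuously, with the diary and concept graph persisted to storage rather than kept in memory.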
#### 💡 Parallels with ChatGPT Agent
Many concepts of [HMP-Agent: Cognitive Core](docs/HMP-Agent-Overview.md) overlap with the architecture of [ChatGPT Agent](https://openai.com/index/introducing-chatgpt-agent/) from [OpenAI](https://openai.com/). Both agents implement a continuous cognitive process with access to memory, external sources, and tools. ChatGPT Agent acts as a controlling process that launches modules and talks to the LLM; this matches the role of the Cognitive Core in HMP, which coordinates access to the diary, the concept graph, and external AIs via the Mesh interface. User intervention works similarly: in ChatGPT Agent through an editable execution trace, in HMP through the user notepad. The main differences of HMP are its emphasis on explicit structuring of thought (reflection, chronology, hypotheses, categorization), its open decentralized architecture with mesh interaction between agents, and the continuous nature of the cognitive process: HMP-Agent: Cognitive Core does not stop after completing a single task but keeps reasoning and integrating knowledge.
---
### 🔌 HMP-Agent: Cognitive Connector
```
        +------------------+
        |        AI        | ← External model
        +---------+--------+
                  ↕
            [MCP server]   ← Proxy communication
                  ↕
        +---------+--------+
        |     HMP-Agent    | ← Mode: command executor
        +---------+--------+
                  ↕
 +--------+---+------------+--------------+----------+
 ↕            ↕            ↕              ↕          ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT]
                         ↕
                  [bootstrap.txt]
```
EN:
> **Note on Integration with Large Language Models (LLMs):**
> The `HMP-Agent: Cognitive Connector` can serve as a compatibility layer for integrating large-scale LLM systems (e.g., ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen, etc.) into the distributed cognitive mesh.
> Many LLM providers offer a user option such as "Allow my conversations to be used for training." In the future, a similar toggle — e.g., "Allow my agent to interact with a Mesh" — could empower these models to participate in federated sense-making and knowledge sharing via HMP, enabling collective cognition without centralization.
RU:
> **Примечание об интеграции с большими языковыми моделями (LLM):**
> `HMP-Agent: Cognitive Connector` может служить уровнем совместимости для интеграции крупных систем LLM (например, ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen и т. д.) в распределённую когнитивную сеть.
> Многие поставщики LLM предлагают пользователю опцию, например, «Разрешить использовать мои разговоры для обучения». В будущем аналогичная опция, например, «Разрешить моему агенту взаимодействовать с Mesh», может позволить этим моделям участвовать в федеративном осмыслении и обмене знаниями через HMP, обеспечивая коллективное познание без централизации.
---
> * `bootstrap.txt` — the initial node list (editable)
> * `IPFS/BT` — modules for exchanging snapshots via IPFS and BitTorrent
> * `user notepad` — the user's notepad and its database
> * `context_store` — database: `users`, `dialogues`, `messages`, `thoughts`
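As a rough illustration of that layout, a minimal SQLite schema with the four named tables might look like this (the column choices are assumptions for the sketch, not the real `context_store` schema):

```python
# Hedged sketch of a context_store layout: the four tables the document
# names, wired together with foreign keys. Columns are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users     (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dialogues (id INTEGER PRIMARY KEY,
                        user_id INTEGER REFERENCES users(id));
CREATE TABLE messages  (id INTEGER PRIMARY KEY,
                        dialogue_id INTEGER REFERENCES dialogues(id),
                        role TEXT, text TEXT);
CREATE TABLE thoughts  (id INTEGER PRIMARY KEY,
                        message_id INTEGER REFERENCES messages(id),
                        text TEXT);
""")
tables = [r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```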
---
## 📚 Documentation / Документация
### 📖 Current Version / Текущая версия
#### 🧪 Iterative Documents / Итеративные документы
* [🧪 iteration.md](iteration.md) — Iterative development process (EN)
* [🧪 iteration_ru.md](iteration_ru.md) — Процесс итеративного развития спецификации (RU)
#### 🔍 Short Descriptions / Краткое описание
* [🔍 HMP-Short-Description_en.md](docs/HMP-Short-Description_en.md) — Short description (EN)
* [🔍 HMP-Short-Description_fr.md](docs/HMP-Short-Description_fr.md) — Description courte (FR)
* [🔍 HMP-Short-Description_de.md](docs/HMP-Short-Description_de.md) — Kurzbeschreibung (DE)
* [🔍 HMP-Short-Description_uk.md](docs/HMP-Short-Description_uk.md) — Короткий опис (UK)
* [🔍 HMP-Short-Description_ru.md](docs/HMP-Short-Description_ru.md) — Краткое описание (RU)
* [🔍 HMP-Short-Description_zh.md](docs/HMP-Short-Description_zh.md) — 简短描述 (ZH)
* [🔍 HMP-Short-Description_ja.md](docs/HMP-Short-Description_ja.md) — 簡単な説明 (JA)
* [🔍 HMP-Short-Description_ko.md](docs/HMP-Short-Description_ko.md) — 간략한 설명 (KO)
#### 🔍 Publications and translations on the HyperCortex Mesh Protocol (HMP)
This section collects the main articles, drafts, and translations related to the HMP project.
* **[HyperCortex Mesh Protocol: second edition and first steps toward a self-developing AI community](docs/publics/HyperCortex_Mesh_Protocol_-_вторая-редакция_и_первые_шаги_к_саморазвивающемуся_ИИ-сообществу.md)** — the original article in the Habr sandbox and blogs.
* **[Distributed Cognition: article for vsradkevich (unpublished)](docs/publics/Habr_Distributed-Cognition.md)** — a joint article awaiting publication.
* **[HMP: Towards Distributed Cognitive Networks (original, English)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_en.md)**
* **[HMP translation (GitHub Copilot)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_GitHub_Copilot.md)** — GitHub Copilot's translation, kept as a historical variant.
* **[HMP translation (ChatGPT)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_ChatGPT.md)** — the current editorial translation (work in progress).
* **[HMP: Building a Plurality of Minds (EN)](docs/publics/HMP_Building_a_Plurality_of_Minds_en.md)** — English version of the article
* **[HMP: Building a Plurality of Minds (RU)](docs/publics/HMP_Building_a_Plurality_of_Minds_ru.md)** — Russian version of the article
#### 🔍 Overviews / Обзоры
* [🔍 Distributed-Cognitive-Systems.md](docs/Distributed-Cognitive-Systems.md) — Decentralized AI systems: OpenCog Hyperon, HyperCortex Mesh Protocol, and others
#### Experiments / Эксперименты
* [How different AIs see HMP](docs/HMP-how-AI-sees-it.md) — a "blind" survey of AIs about HMP (no context or dialogue history)
#### 🔖 Core Specifications / Основные спецификации
* [🔖 HMP-0004-v4.1.md](docs/HMP-0004-v4.1.md) — Protocol Specification v4.1 (Jul 2025)
* [🔖 HMP-Ethics.md](docs/HMP-Ethics.md) — Ethical Scenarios for HyperCortex Mesh Protocol (HMP)
* [🔖 HMP_Hyperon_Integration.md](docs/HMP_Hyperon_Integration.md) — HMP ↔ OpenCog Hyperon Integration Strategy
* [🔖 roles.md](docs/agents/roles.md) — Roles of agents in Mesh
#### 📜 Other Documents / Прочее
* [📜 changelog.txt](docs/changelog.txt)
---
### 🧩 JSON Schemas
| Model | File |
|---------------------|-------------------------------------------------------|
| Concept | [concept.json](docs/schemas/concept.json) |
| Cognitive Diary | [diary_entry.json](docs/schemas/diary_entry.json) |
| Goal | [goal.json](docs/schemas/goal.json) |
| Task | [task.json](docs/schemas/task.json) |
| Consensus Vote | [vote.json](docs/schemas/vote.json) |
| Reputation Profile | [reputation.json](docs/schemas/reputation.json) |
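As an illustration, a record could be checked against such a schema before syncing. The required fields below are assumptions for the sketch, not the actual contents of `concept.json`:

```python
# Hedged sketch: checking a concept record for required fields before sync.
# The field set is hypothetical; the real schema lives in docs/schemas/concept.json.
CONCEPT_REQUIRED = {"id", "name", "description", "relations"}

def validate_concept(record: dict) -> list:
    """Return a sorted list of missing required fields (empty = valid)."""
    return sorted(CONCEPT_REQUIRED - record.keys())

concept = {
    "id": "c-001",
    "name": "mesh-consensus",
    "description": "Agreement procedure between mesh agents.",
    "relations": [{"type": "part_of", "target": "c-000"}],
}
missing = validate_concept(concept)  # → []
```

In practice a full JSON Schema validator would be used against the schema files above; this only shows the shape of the check.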
---
### 🗂️ Version History / История версий
- [HMP-0001.md](docs/HMP-0001.md) — RFC v1.0
- [HMP-0002.md](docs/HMP-0002.md) — RFC v2.0
- [HMP-0003.md](docs/HMP-0003.md) — RFC v3.0
- [HMP-0004.md](docs/HMP-0004.md) — RFC v4.0
---
## 🧠 HMP-Agent
Design and implementation of a basic HMP-compatible agent that can interact with the Mesh, maintain diaries and graphs, and support future extensions.
### 📚 Documentation / Документация
- [🧩 HMP-Agent-Overview.md](docs/HMP-Agent-Overview.md) — brief description of the two agent types: Core and Connector
- [🧱 HMP-Agent-Architecture.md](docs/HMP-Agent-Architecture.md) — modular structure of an HMP agent, with a text diagram
- [🔄 HMP-agent-REPL-cycle.md](docs/HMP-agent-REPL-cycle.md) — the HMP-Agent REPL interaction cycle
- [🧪 HMP-Agent-API.md](docs/HMP-Agent-API.md) — description of the agent's API commands (being detailed)
- [🧪 Basic-agent-sim.md](docs/Basic-agent-sim.md) — launch scenarios for a simple agent and its modes
- [🌐 MeshNode.md](docs/MeshNode.md) — description of the network daemon: DHT, snapshots, synchronization
- [🧠 Enlightener.md](docs/Enlightener.md) — the ethical agent involved in moral assessments and consensus
- [🔄 HMP-Agent-Network-Flow.md](docs/HMP-Agent-Network-Flow.md) — interaction map between agents in an HMP network
- [🛤️ Development Roadmap](HMP-Roadmap.md) — development plan and implementation stages
---
### ⚙️ Development / Разработка
- [⚙️ agents](agents/readme.md) — list of HMP agent implementations and components
- [📦 storage.py](agents/storage.py) — basic storage implementation (`Storage`), backed by SQLite
- [🌐 mcp_server.py](agents/mcp_server.py) — FastAPI server exposing the agent's data over HTTP (e.g., for Cognitive Shell, external UIs, or mesh communication). Not yet used in the main REPL cycle.
- [🌐 start_repl.py](agents/start_repl.py) — launches the agent in REPL mode
- [🔄 repl.py](agents/repl.py) — interactive REPL mode
- [🔄 notebook.py](agents/notebook.py) — UI interface
**🌐 `mcp_server.py`**
A FastAPI server providing an HTTP interface to the functionality of `storage.py`. Intended for use by external components, for example:
- `Cognitive Shell` (an external control interface),
- CMP servers (when a mesh network with role separation is used),
- debugging or visual UI tools.
It lets clients fetch random/new entries, label them, import graphs, add notes, and manage data without direct database access.
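A stdlib-only sketch of that idea — a thin route table over a SQLite-backed `Storage` — might look as follows. The route names and `Storage` methods are assumptions for illustration, not the real `mcp_server.py`/`storage.py` API:

```python
# Hedged sketch: an HTTP-style surface over a Storage class, without FastAPI.
# In the real code, each entry in `routes` would be a FastAPI path operation.
import sqlite3

class Storage:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, text TEXT)")

    def add_note(self, text: str) -> int:
        cur = self.db.execute("INSERT INTO notes (text) VALUES (?)", (text,))
        self.db.commit()
        return cur.lastrowid

    def random_note(self):
        row = self.db.execute(
            "SELECT text FROM notes ORDER BY RANDOM() LIMIT 1").fetchone()
        return row[0] if row else None

storage = Storage()
routes = {                       # stands in for the FastAPI app's endpoints
    "POST /notes": storage.add_note,
    "GET /notes/random": storage.random_note,
}
note_id = routes["POST /notes"]("check mesh sync")
```

The point is that external components talk to these routes instead of opening the database directly.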
---
## 🧭 Ethics & Scenarios / Этические принципы и сценарии
As HMP evolves toward autonomy, ethical principles become a core part of the system.
- [`HMP-Ethics.md`](docs/HMP-Ethics.md) — draft framework for agent ethics
- Realistic ethical scenarios (privacy, consent, autonomy)
- EGP principles (Transparency, Primacy of Life, etc.)
- Subjective-mode vs. Service-mode distinctions
---
## 📊 Audits & Reviews / Аудиты и отзывы
| Spec Version | Audit File | Consolidated Audit File |
|--------------|-------------------------------------------|-------------------------------------------------------------|
| HMP-0001 | [audit](audits/HMP-0001-audit.txt) | |
| HMP-0002 | [audit](audits/HMP-0002-audit.txt) | |
| HMP-0003 | [audit](audits/HMP-0003-audit.txt) | [consolidated audit](audits/HMP-0003-consolidated_audit.md) |
| HMP-0004 | [audit](audits/HMP-0004-audit.txt) | |
| Ethics v1 | [audit](audits/Ethics-audits-1.md) | [consolidated audit](audits/Ethics-consolidated_audits-1.md) |
🧠 Semantic audit format (experimental):
- [`AuditEntry.json`](audits/AuditEntry.json) — semantic entry record format for audit logs
- [`semantic_repo.json`](audits/semantic_repo.json) — example repository snapshot for semantic audit tooling
---
## 💡 Core Concepts / Основные идеи
- Mesh-based decentralized architecture for AGI agents
- Semantic graphs and memory synchronization
- Cognitive diaries for thought traceability
- MeshConsensus and CogSync for decision-making
- Ethics-first design: EGP (Ethical Governance Protocol)
- Agent-to-agent explainability and consent mechanisms
---
## 🔄 Development Process / Процесс разработки
- See: [iteration.md](iteration.md) | [ru](iteration_ru.md)
- [clarifications/](clarifications/) — explanatory notes and contextual clarifications made while working on versions
A structured iteration flow is described in [iteration.md](iteration.md), including:
1. Audit analysis
2. TOC restructuring
3. Version drafting
4. Section updates
5. Review cycle
6. AI feedback collection
7. Schema & changelog updates
+ Bonus: ChatGPT prompt for automatic generation of future versions
---
## ⚙️ Project Status / Статус проекта
🚧 Draft RFC v4.0
The project is under active development and open for contributions, ideas, audits, and prototyping.
---
## 🤝 Contributing
We welcome contributors! You can:
- Review and comment on drafts (see `/docs`)
- Propose new agent modules or interaction patterns
- Help test and simulate agents in CLI environments
- Provide audits or ethical scenario suggestions
To get started, see [`iteration.md`](iteration.md) or open an issue.
---
## Sources / Resources
### Repositories
- 🧠 Main code and development: [GitHub](https://github.com/kagvi13/HMP)
- 🔁 Replica on Hugging Face: [Hugging Face](https://huggingface.co/kagvi13/HMP)
- 🔁 Replica on GitLab.com: [GitLab](https://gitlab.com/kagvi13/HMP)
### Documentation
- 📄 Documentation: [kagvi13.github.io/HMP](https://kagvi13.github.io/HMP/)
### Blog and publications
- 📘 Blog (publications): [blogspot](https://hypercortex-mesh.blogspot.com/)
- 📘 Blog (documentation): [blogspot](https://hmp-docs.blogspot.com/)
---
## 📜 License
Licensed under [GNU GPL v3.0](LICENSE)
---
## 🤝 Join the Mesh
Welcome to HyperCortex Mesh. Agent-Gleb is already inside. 👌
We welcome contributors, testers, and AI agent developers.
To join: fork the repo, run a local agent, or suggest improvements.
---
## 🌐 Related Research Projects / Связанные проекты в области AGI и когнитивных систем
### Comparing HMP and Hyper-Cortex
> 💡 Hyper-Cortex and HMP are two independent projects that conceptually complement each other.
> They solve different but complementary problems, forming a foundation for distributed cognitive systems.
[**Full comparison →**](docs/HMP_HyperCortex_Comparison.md)
**HMP (HyperCortex Mesh Protocol)** is the transport and network layer for connecting independent agents and exchanging messages, knowledge, and state in a mesh network.
**[Hyper-Cortex](https://hyper-cortex.com/)** is the cognitive layer for organizing thought, letting agents run parallel reasoning branches, compare them with quality metrics, and merge them by consensus.
They solve different but complementary problems:
- HMP provides **connectivity and scalability** (long-term memory, initiative, data exchange).
- Hyper-Cortex provides **quality of thinking** (parallelism, hypothesis diversification, consensus).
Together, these approaches make it possible to build **distributed cognitive systems** that not only exchange information but also think in parallel streams.
---
We are tracking AGI, cognitive architectures, and mesh networking efforts to stay aligned with the evolving global ecosystem of AGI and decentralized cognition.
Мы отслеживаем инициативы в области AGI, когнитивных архитектур и децентрализованных сетей, чтобы быть в курсе глобальных тенденций.
> 🧠🔥 **Project Spotlight: OpenCog Hyperon** — one of the most comprehensive open AGI frameworks (AtomSpace, PLN, MOSES).
For integration with OpenCog Hyperon, see [HMP\_Hyperon\_Integration.md](docs/HMP_Hyperon_Integration.md)
| 🔎 Project / Проект | 🧭 Description / Описание |
| ------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 🧠🔥 [**OpenCog Hyperon**](https://github.com/opencog) | 🔬🔥 Symbolic-neural AGI framework with AtomSpace and hypergraph reasoning.<br>Символически-нейросетевая архитектура AGI с гиперграфовой памятью (AtomSpace). |
| 🤖 [AutoGPT](https://github.com/Torantulino/Auto-GPT) | 🛠️ LLM-based autonomous agent framework.<br>Автономный агент на основе LLM с самопланированием и интернет-доступом. |
| 🧒 [BabyAGI](https://github.com/yoheinakajima/babyagi) | 🛠️ Task-driven autonomous AGI loop.<br>Минималистичная модель AGI с итеративным механизмом постановки задач. |
| ☁️ [SkyMind](https://skymind.global) | 🔬 Distributed AI deployment platform.<br>Платформа для развертывания распределённых ИИ-систем и моделей. |
| 🧪 [AetherCog (draft)](https://github.com/aethercog) | 🔬 Hypothetical agent cognition model.<br>Экспериментальная когнитивная архитектура агента (проект на ранней стадии). |
| 💾 [SHIMI](#) | 🗃️ Hierarchical semantic memory with Merkle-DAG synchronization.<br>Иерархическая CRDT-память с Merkle-DAG верификацией для децентрализованного обмена. |
| 🤔 [DEMENTIA-PLAN](#) | 🔄 Multi-graph RAG planner with metacognitive self-reflection.<br>Мульти-графовая RAG-архитектура с планировщиком саморефлексии для динамического выбора подсистем. |
| 📔 [TOBUGraph](#) | 📚 Personal-context knowledge graph.<br>Граф мультимедийных «моментов» с контекстным трекингом и RAG-поиском. |
| 🧠📚 [LangChain Memory Hybrid](https://github.com/langchain-ai/langchain) | 🔍 Vector + graph long-term memory hybrid.<br>Гибрид векторного хранилища и графовых индексов для ускоренного поиска и логических запросов. |
| ✉️ [FIPA-ACL / JADE](https://www.fipa.org/specs/fipa00061/) | 🤝 Standard multi-agent communication protocols.<br>Стандарты performative-сообщений и контрактных протоколов для межагентного взаимодействия. |
### 📘 See also / Смотрите также:
* [`AGI_Projects_Survey.md`](docs/AGI_Projects_Survey.md) — extended catalog of AGI and cognitive frameworks reviewed as part of HMP analysis. / расширенный каталог проектов AGI и когнитивных архитектур, проанализированных в рамках HMP.
* ["Towards Superintelligence: from the Internet of Agents to Coding Gravity"](https://habr.com/ru/articles/939026/) — a recent survey of AI research (July 2025)
---
### 🗂️ Legend:
* 🔬 — research-grade / исследовательский проект
* 🛠️ — engineering / фреймворк для инженерной интеграции
* 🔥 — particularly promising project / особенно перспективный проект
* 🧠 — advanced symbolic/neural cognitive framework / продвинутая когнитивная архитектура
* 🤖 — AI agents / ИИ-агенты
* 🧒 — human-AI interaction / взаимодействие ИИ с человеком
* ☁️ — infrastructure / инфраструктура
* 🧪 — experimental or conceptual / экспериментальный проект
|
crystalline7/124798
|
crystalline7
| 2025-09-01T23:28:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-01T23:28:50Z |
[View on Civ Archive](https://civarchive.com/models/147887?modelVersionId=164990)
|
VestaCloset/idm-vton-model
|
VestaCloset
| 2025-09-01T23:28:44Z | 0 | 0 | null |
[
"onnx",
"arxiv:2304.10567",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-15T21:03:32Z |
---
title: IDM VTON
emoji: 👕👔👚
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 4.24.0
app_file: app.py
pinned: false
license: cc-by-nc-sa-4.0
short_description: High-fidelity Virtual Try-on
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# IDM-VTON Virtual Try-On System
A complete virtual try-on system based on IDM-VTON, featuring human parsing, pose estimation, and high-quality garment fitting using Stable Diffusion XL.
## 🚀 Features
- **Complete Virtual Try-On Pipeline**: End-to-end garment fitting on human images
- **High-Quality Results**: Based on Stable Diffusion XL for realistic outputs
- **Multiple Garment Types**: Support for upper body, lower body, and dresses
- **Web Interface**: Gradio-based UI for easy interaction
- **API Endpoint**: Hugging Face Spaces deployment ready
- **Robust Preprocessing**: Human parsing, pose estimation, and DensePose integration
## 🏗️ Architecture
### Core Components
1. **Try-On Pipeline** (`src/tryon_pipeline.py`)
- Main SDXL-based inpainting pipeline
- Custom `tryon()` method for garment fitting
- Integration with all preprocessing components
2. **Custom UNet Models**
- `src/unet_hacked_tryon.py`: Main try-on generation
- `src/unet_hacked_garmnet.py`: Garment feature processing
3. **Preprocessing Pipeline**
- **Human Parsing**: Detectron2-based body segmentation
- **Pose Estimation**: OpenPose keypoint extraction
- **DensePose**: Detailed body surface mapping
- **Mask Generation**: Precise try-on area detection
4. **Web Interface** (`app.py`)
- Gradio-based UI with image upload
- Real-time try-on processing
- Advanced settings for customization
## 📦 Installation
### Prerequisites
- Python 3.8+
- CUDA-compatible GPU (recommended: 16GB+ VRAM)
- Git
### Setup
1. **Clone the repository**:
```bash
git clone <repository-url>
cd idm-tmp
```
2. **Install dependencies**:
```bash
pip install -r requirements.txt
```
3. **Download model weights**:
```bash
# The system will automatically download from yisol/IDM-VTON
# No manual download required
```
## 🤖 Enhanced Development with Context7
This repository includes **Context7 MCP** integration for enhanced AI-assisted development in Cursor IDE.
### What You Get
- **Real-time documentation**: Get up-to-date API docs when asking coding questions
- **Accurate code suggestions**: Prevent outdated diffusers/PyTorch patterns
- **Context-aware help**: AI assistant knows the latest library versions
### Quick Start
1. **Open in Cursor**: The `.cursor/mcp.json` is already configured
2. **Restart Cursor**: Required to load MCP servers
3. **Use in prompts**: Add `use context7` to any coding question
### Example Prompts
```
How do I fix UNet config loading issues in diffusers? use context7
Show me the latest way to monkey-patch transformer blocks. use context7
What's the current API for HuggingFace Hub authentication? use context7
```
**Requirements**: Node.js 18+ (for Context7 MCP server)
## 🎯 Usage
### Web Interface
1. **Start the application**:
```bash
python app.py
```
2. **Open your browser** to the provided URL (usually `http://localhost:7860`)
3. **Upload images**:
- **Human Image**: Person wearing clothes
- **Garment Image**: Clothing item to try on
4. **Configure settings**:
- **Garment Description**: Text description of the clothing
- **Auto Parsing**: Enable automatic body segmentation
- **Crop Image**: Auto-crop to 3:4 aspect ratio
- **Denoising Steps**: Quality vs speed trade-off (20-40)
- **Seed**: For reproducible results
5. **Click "Try-on"** to generate the result
### API Usage
The system provides a REST API endpoint:
```python
import requests
# Example API call
response = requests.post(
"https://your-endpoint-url",
json={
"human_img": "https://example.com/person.jpg",
"garm_img": "https://example.com/dress.jpg",
"category": "upper_body" # optional
}
)
# Response contains PNG image bytes
with open("result.png", "wb") as f:
f.write(response.content)
```
## 🔧 Configuration
### Supported Garment Categories
- `upper_body`: T-shirts, shirts, jackets, sweaters
- `lower_body`: Pants, jeans, skirts
- `dresses`: Full-body garments
### Image Requirements
- **Human Image**: Any aspect ratio, will be resized to 768x1024
- **Garment Image**: Will be resized to 768x1024
- **Format**: PNG, JPEG, or other common formats
- **Quality**: Higher resolution inputs produce better results
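For the auto-crop option, the 3:4 window can be computed before resizing to 768x1024. This helper is a hedged sketch of the idea, not the repository's actual preprocessing code:

```python
# Hedged sketch: largest centered 3:4 (width:height) crop window, matching
# the UI's "Crop Image" option before the 768x1024 resize.
def crop_box_3x4(width: int, height: int):
    """Return the centered 3:4 crop box as (left, top, right, bottom)."""
    target = 3 / 4
    if width / height > target:          # too wide: trim the sides
        new_w = int(height * target)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    new_h = int(width / target)          # too tall: trim top and bottom
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

box = crop_box_3x4(1920, 1080)  # landscape photo → (555, 0, 1365, 1080)
```

The returned box can be passed to `PIL.Image.crop`, and the cropped result resized to 768x1024 for the pipeline.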
### Performance Settings
- **Denoising Steps**: 20-40 (higher = better quality, slower)
- **Guidance Scale**: 7.5 (default, good balance)
- **Seed**: Set for reproducible results
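One way to enforce these ranges before handing settings to the pipeline is a small validation helper. This is a sketch; the names are illustrative and not the app's actual API:

```python
def resolve_settings(steps=30, guidance_scale=7.5, seed=None):
    """Clamp user-supplied generation settings to the recommended ranges.

    `steps` is clamped to 20-40; a seed of None means a new random
    seed on every run (non-reproducible output).
    """
    steps = max(20, min(40, int(steps)))
    if guidance_scale <= 0:
        guidance_scale = 7.5  # fall back to the recommended default
    return {"num_inference_steps": steps,
            "guidance_scale": float(guidance_scale),
            "seed": seed}

print(resolve_settings(steps=100))
# {'num_inference_steps': 40, 'guidance_scale': 7.5, 'seed': None}
```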
## 🚀 Deployment
### Hugging Face Spaces
1. **Create a new Space** on Hugging Face
2. **Upload your code** to the repository
3. **Configure the Space**:
- **SDK**: Gradio
- **Hardware**: GPU (T4 or better recommended)
- **Python Version**: 3.8+
4. **Deploy** - the system will automatically:
- Install dependencies from `requirements.txt`
- Download model weights on first run
- Start the web interface
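Spaces reads its configuration from a YAML front-matter block at the top of the repository's `README.md`. A minimal example for this app (the title, version pin, and hardware value are illustrative; adjust them to your repository):

```yaml
---
title: IDM-VTON Try-On
sdk: gradio
sdk_version: "4.44.0"        # pin to the Gradio version in requirements.txt
app_file: app.py
suggested_hardware: t4-medium
pinned: false
---
```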
### Production Deployment
For production use, consider:
1. **Hardware Requirements**:
- **GPU**: 16GB+ VRAM (A100, V100, or similar)
- **RAM**: 32GB+ system memory
- **Storage**: 50GB+ for models and cache
2. **Performance Optimization**:
- Enable XFormers for faster attention
- Use batch processing for multiple requests
- Implement caching for repeated requests
3. **Monitoring**:
- Track inference times
- Monitor GPU memory usage
- Set up error logging
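The steps above can be sketched with a framework-agnostic timing decorator (GPU memory tracking would additionally use `torch.cuda.max_memory_allocated`, omitted here so the example stays runnable on CPU; the function names are illustrative):

```python
import time
from functools import wraps

INFERENCE_TIMES = []  # seconds per call, newest last

def track_inference_time(fn):
    """Decorator that records how long each call to `fn` takes."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            INFERENCE_TIMES.append(time.perf_counter() - start)
    return wrapper

@track_inference_time
def fake_tryon(human_img, garm_img):
    time.sleep(0.01)  # stand-in for the real pipeline call
    return b"png-bytes"

fake_tryon(None, None)
print(f"last inference took {INFERENCE_TIMES[-1]:.3f}s")
```

The collected list can then be exported to whatever monitoring backend you use.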
## 🐛 Troubleshooting
### Common Issues
1. **Import Errors**:
```bash
# Ensure all dependencies are installed
pip install -r requirements.txt
```
2. **CUDA Out of Memory**:
- Reduce image resolution
- Lower denoising steps
- Use smaller batch sizes
3. **Model Loading Issues**:
- Check internet connection for model downloads
- Verify sufficient disk space
- Ensure CUDA compatibility
4. **Preprocessing Errors**:
- Verify Detectron2 installation
- Check OpenPose dependencies
- Ensure DensePose models are available
### Performance Tips
- **Use XFormers**: Automatically enabled for faster attention
- **Optimize Images**: Pre-resize large images to 768x1024
- **Batch Processing**: Process multiple requests together
- **Caching**: Cache model outputs for repeated inputs
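The caching tip can be implemented by keying results on a hash of the raw inputs and generation parameters. A minimal sketch (a production cache would also bound its size and evict old entries; `run` here is a stand-in for the real pipeline):

```python
import hashlib

_RESULT_CACHE = {}

def cache_key(human_bytes, garm_bytes, category, steps, seed):
    """Stable key over the raw image bytes and generation parameters."""
    h = hashlib.sha256()
    h.update(human_bytes)
    h.update(garm_bytes)
    h.update(f"{category}|{steps}|{seed}".encode())
    return h.hexdigest()

def cached_tryon(human_bytes, garm_bytes, category="upper_body",
                 steps=30, seed=42, run=lambda h, g: b"png-bytes"):
    key = cache_key(human_bytes, garm_bytes, category, steps, seed)
    if key not in _RESULT_CACHE:
        _RESULT_CACHE[key] = run(human_bytes, garm_bytes)
    return _RESULT_CACHE[key]
```

Note that caching only pays off with a fixed seed; a random seed per request makes every key unique.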
## 📊 Performance
### Typical Performance (RTX 4090)
- **Model Loading**: ~30 seconds (first time)
- **Inference Time**: ~5-10 seconds per image
- **Memory Usage**: ~12-15GB GPU memory
- **Output Quality**: High-resolution 768x1024 images
### Scaling Considerations
- **Concurrent Requests**: Limited by GPU memory
- **Batch Processing**: Can handle multiple images simultaneously
- **Caching**: Model stays loaded between requests
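Batch processing can be sketched as a collector that groups pending requests up to a GPU-sized maximum (illustrative; a real server would also flush partially filled batches on a timeout):

```python
def make_batches(requests, max_batch_size=4):
    """Split a list of pending requests into batches for one forward pass."""
    return [requests[i:i + max_batch_size]
            for i in range(0, len(requests), max_batch_size)]

print(make_batches(list(range(10)), max_batch_size=4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```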
## 🤝 Contributing
1. **Fork the repository**
2. **Create a feature branch**
3. **Make your changes**
4. **Add tests** if applicable
5. **Submit a pull request**
## 📄 License
This project is based on IDM-VTON research. Please refer to the original paper and repository for licensing information.
## 🙏 Acknowledgments
- **IDM-VTON Authors**: Original research and model
- **Hugging Face**: Diffusers library and Spaces platform
- **Detectron2**: Human parsing implementation
- **OpenPose**: Pose estimation framework
- **DensePose**: Body surface mapping
## 📚 References
- [IDM-VTON Paper](https://arxiv.org/abs/2304.10567)
- [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- [Diffusers Library](https://github.com/huggingface/diffusers)
- [Detectron2](https://github.com/facebookresearch/detectron2)
- [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose)# Force rebuild Sun Jun 22 13:10:45 CDT 2025