Dataset schema (per-column type and observed range in this preview):

| Column | Type | Observed range / cardinality |
|:--------------|:----------------------|:-----------------------------------------------|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-02 06:30:45 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-02 06:30:39 |
| card | string | length 11 to 1.01M |
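The preview rows below follow this column order (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card). As a minimal sketch of how such a metadata table can be explored programmatically, assuming it is published as a Hugging Face dataset (the repo id below is a hypothetical placeholder, not the actual dataset name):

```python
# Minimal sketch: load a Hub-models metadata dump and filter it with pandas.
# "your-org/hub-models-metadata" is a hypothetical placeholder repo id.
from datasets import load_dataset

ds = load_dataset("your-org/hub-models-metadata", split="train")
df = ds.to_pandas()

# Keep rows with a known library and at least one like, sorted by downloads.
popular = (
    df[df["library_name"].notna() & (df["likes"] > 0)]
    .sort_values("downloads", ascending=False)
    .loc[:, ["modelId", "author", "pipeline_tag", "downloads", "likes"]]
)
print(popular.head(10))
```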
amethyst9/473388
amethyst9
2025-09-01T23:20:52Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:20:52Z
[View on Civ Archive](https://civarchive.com/models/501457?modelVersionId=557385)
ultratopaz/1867667
ultratopaz
2025-09-01T23:20:27Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:20:26Z
[View on Civ Archive](https://civarchive.com/models/1740880?modelVersionId=1970193)
ultratopaz/365148
ultratopaz
2025-09-01T23:20:10Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:20:10Z
[View on Civ Archive](https://civarchive.com/models/399146?modelVersionId=445157)
Muapi/futuristic-display-enhancer-flux
Muapi
2025-09-01T23:19:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:19:25Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Futuristic Display Enhancer FLUX ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: mad-dshbrd ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:826433@924212", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
seraphimzzzz/517587
seraphimzzzz
2025-09-01T23:19:29Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:19:29Z
[View on Civ Archive](https://civarchive.com/models/541984?modelVersionId=602602)
ultratopaz/328170
ultratopaz
2025-09-01T23:19:21Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:19:21Z
[View on Civ Archive](https://civarchive.com/models/363096?modelVersionId=405729)
crystalline7/137748
crystalline7
2025-09-01T23:19:13Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:19:12Z
[View on Civ Archive](https://civarchive.com/models/159930?modelVersionId=179887)
ultratopaz/137739
ultratopaz
2025-09-01T23:19:05Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:19:05Z
[View on Civ Archive](https://civarchive.com/models/159918?modelVersionId=179872)
crystalline7/292637
crystalline7
2025-09-01T23:18:48Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:18:48Z
[View on Civ Archive](https://civarchive.com/models/326544?modelVersionId=366009)
mradermacher/XLM-Prohori-v2-GGUF
mradermacher
2025-09-01T23:18:31Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:squadgoals404/XLM-Prohori-v2", "base_model:quantized:squadgoals404/XLM-Prohori-v2", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-09-01T23:07:14Z
--- base_model: squadgoals404/XLM-Prohori-v2 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/squadgoals404/XLM-Prohori-v2 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#XLM-Prohori-v2-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q5_K_S.gguf) | Q5_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q5_K_M.gguf) | Q5_K_M | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q6_K.gguf) | Q6_K | 0.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/XLM-Prohori-v2-GGUF/resolve/main/XLM-Prohori-v2.f16.gguf) | f16 | 0.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
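A small addition to the card above: each quant file listed in its table can be fetched individually with `huggingface_hub`. The repo id and filename below are copied verbatim from the table; Q4_K_S is simply the row the card marks as "fast, recommended".

```python
# Sketch: download one of the GGUF quants listed in the table above.
# Pick a different filename from the table to fetch another quant level.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/XLM-Prohori-v2-GGUF",
    filename="XLM-Prohori-v2.Q4_K_S.gguf",
)
print(f"GGUF file downloaded to: {path}")
```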
casperbenya/blockassist-bc-meek_barky_macaw_1756768645
casperbenya
2025-09-01T23:18:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek barky macaw", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:18:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek barky macaw --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/1570531
seraphimzzzz
2025-09-01T23:18:24Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:18:23Z
[View on Civ Archive](https://civarchive.com/models/1476266?modelVersionId=1669799)
amethyst9/1562400
amethyst9
2025-09-01T23:18:15Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:18:15Z
[View on Civ Archive](https://civarchive.com/models/1469360?modelVersionId=1661927)
bah63843/blockassist-bc-plump_fast_antelope_1756768628
bah63843
2025-09-01T23:18:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:17:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
KRLabsOrg/tinylettuce-ettin-17m-en
KRLabsOrg
2025-09-01T23:17:33Z
9
1
transformers
[ "transformers", "safetensors", "modernbert", "token-classification", "token classification", "hallucination detection", "retrieval-augmented generation", "ettin", "lightweight", "en", "dataset:ragtruth", "dataset:KRLabsOrg/rag-bioasq-lettucedetect", "arxiv:2507.11412", "arxiv:2502.17125", "base_model:jhu-clsp/ettin-encoder-17m", "base_model:finetune:jhu-clsp/ettin-encoder-17m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-08-31T10:56:00Z
--- license: mit language: - en base_model: - jhu-clsp/ettin-encoder-17m pipeline_tag: token-classification tags: - token classification - hallucination detection - retrieval-augmented generation - transformers - ettin - lightweight datasets: - ragtruth - KRLabsOrg/rag-bioasq-lettucedetect library_name: transformers --- # TinyLettuce (Ettin-17M): Efficient Hallucination Detection <p align="center"> <img src="https://github.com/KRLabsOrg/LettuceDetect/blob/dev/assets/tinytinylettuce.png?raw=true" alt="TinyLettuce" width="400"/> </p> **Model Name:** tinylettuce-ettin-17m-en **Organization:** KRLabsOrg **Github:** https://github.com/KRLabsOrg/LettuceDetect **Ettin encoders:** https://arxiv.org/pdf/2507.11412 ## Overview TinyLettuce is a lightweight token‑classification model that flags unsupported spans in answers given context (span aggregation performed downstream). Built on the 17M Ettin encoder, it targets real‑time CPU inference and low‑cost domain fine‑tuning with synthetic data. This variant is trained synthetic data and on the RAGTruth dataset for hallucination detection, using the 17M Ettin encoder and a token‑classification head. Designed for CPU‑friendly inference and simple deployment. ## Model Details - Architecture: Ettin encoder (17M) + token‑classification head - Task: token classification (0 = supported, 1 = hallucinated) - Input format: [CLS] context [SEP] question [SEP] answer [SEP], up to 4096 tokens - Language: English; License: MIT ## Training Data - RAGTruth + our synthetic data generated with LettuceDetect, span‑level labels - ~20k training samples ## Training Procedure - Tokenizer: AutoTokenizer; DataCollatorForTokenClassification; label pad −100 - Max length: 8k; batch size: 16; epochs: 5 - Optimizer: AdamW (lr 1e‑5, weight_decay 0.01) - Hardware: Single A100 80GB ## Results (RAGTruth) This model is designed primarily for fine-tuning on smaller, domain-specific samples, rather than for general use (though it still performs notably on Ragtruth). | Model | Parameters | F1 (%) | |-------|------------|--------| | TinyLettuce-17M | 17M | 68.52 | | LettuceDetect-base (ModernBERT) | 150M | 76.07 | | LettuceDetect-large (ModernBERT) | 395M | 79.22 | | Llama-2-13B (RAGTruth FT) | 13B | 78.70 | ## Usage You can use the model with the **lettucedetect** library. First install **lettucedetect**: ```bash pip install lettucedetect ``` Then use it: ```python from lettucedetect.models.inference import HallucinationDetector # Load tiny but powerful model detector = HallucinationDetector( method="transformer", model_path="KRLabsOrg/tinylettuce-ettin-17m-en" ) # Detect hallucinations in medical context spans = detector.predict( context=[ "Ibuprofen is an NSAID that reduces inflammation and pain. The typical adult dose is 400-600mg every 6-8 hours, not exceeding 2400mg daily." ], question="What is the maximum daily dose of ibuprofen?", answer="The maximum daily dose of ibuprofen for adults is 3200mg.", output_format="spans", ) print(spans) # Output: [{"start": 51, "end": 57, "text": "3200mg"}] ``` ## Citing If you use the model or the tool, please cite the following paper: ```bibtex @misc{Kovacs:2025, title={LettuceDetect: A Hallucination Detection Framework for RAG Applications}, author={Ádám Kovács and Gábor Recski}, year={2025}, eprint={2502.17125}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2502.17125}, } ```
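For readers who prefer to call the checkpoint directly through transformers rather than the lettucedetect wrapper shown in the card above, a rough sketch follows. The exact [CLS] context [SEP] question [SEP] answer [SEP] packing and span aggregation are handled by the library, so joining segments with the tokenizer's separator token here is an approximation, not the card's reference usage.

```python
# Rough sketch: raw token classification with the TinyLettuce checkpoint.
# Label 1 marks a hallucinated token, per the model card.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "KRLabsOrg/tinylettuce-ettin-17m-en"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

context = "Ibuprofen is an NSAID. The maximum adult dose is 2400mg daily."
question = "What is the maximum daily dose of ibuprofen?"
answer = "The maximum daily dose of ibuprofen for adults is 3200mg."

# Approximation of the card's input format; lettucedetect builds this exactly.
text = f"{context} {tokenizer.sep_token} {question} {tokenizer.sep_token} {answer}"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, 2)
labels = logits.argmax(dim=-1)[0]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
flagged = [tok for tok, lab in zip(tokens, labels) if lab == 1]
print(flagged)
```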
ultratopaz/132509
ultratopaz
2025-09-01T23:17:33Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:17:33Z
[View on Civ Archive](https://civarchive.com/models/155013?modelVersionId=173818)
Muapi/animelight-v2
Muapi
2025-09-01T23:17:18Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:17:06Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # AnimeLight-v2 ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:649847@734168", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
seraphimzzzz/321442
seraphimzzzz
2025-09-01T23:17:09Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:17:09Z
[View on Civ Archive](https://civarchive.com/models/356546?modelVersionId=398582)
ReportAId/whisper-medium-it-finetuned-without-voxpopuli
ReportAId
2025-09-01T23:17:06Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-01T22:37:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
amethyst9/358157
amethyst9
2025-09-01T23:17:01Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:17:00Z
[View on Civ Archive](https://civarchive.com/models/392362?modelVersionId=437680)
Muapi/former-splendor
Muapi
2025-09-01T23:16:54Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:15:16Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Former Splendor ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: FS ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:997539@1946460", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
amethyst9/146641
amethyst9
2025-09-01T23:16:44Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:44Z
[View on Civ Archive](https://civarchive.com/models/170655?modelVersionId=191753)
seraphimzzzz/473378
seraphimzzzz
2025-09-01T23:16:36Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:36Z
[View on Civ Archive](https://civarchive.com/models/501448?modelVersionId=557377)
ultratopaz/872538
ultratopaz
2025-09-01T23:16:28Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:28Z
[View on Civ Archive](https://civarchive.com/models/863338?modelVersionId=965997)
amethyst9/350892
amethyst9
2025-09-01T23:16:20Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:19Z
[View on Civ Archive](https://civarchive.com/models/385353?modelVersionId=430040)
xinnn32/blockassist-bc-meek_winged_caterpillar_1756768448
xinnn32
2025-09-01T23:16:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:15:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
seraphimzzzz/1675883
seraphimzzzz
2025-09-01T23:16:12Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:11Z
[View on Civ Archive](https://civarchive.com/models/1568652?modelVersionId=1775105)
crystalline7/137781
crystalline7
2025-09-01T23:16:04Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:16:04Z
[View on Civ Archive](https://civarchive.com/models/159969?modelVersionId=179936)
seraphimzzzz/350940
seraphimzzzz
2025-09-01T23:15:56Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:15:55Z
[View on Civ Archive](https://civarchive.com/models/385385?modelVersionId=430079)
ultratopaz/134419
ultratopaz
2025-09-01T23:15:40Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:15:40Z
[View on Civ Archive](https://civarchive.com/models/156772?modelVersionId=175989)
seraphimzzzz/522737
seraphimzzzz
2025-09-01T23:15:32Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:15:32Z
[View on Civ Archive](https://civarchive.com/models/546320?modelVersionId=607627)
amethyst9/292634
amethyst9
2025-09-01T23:15:08Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:15:07Z
[View on Civ Archive](https://civarchive.com/models/326542?modelVersionId=366007)
amethyst9/146600
amethyst9
2025-09-01T23:15:00Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:15:00Z
[View on Civ Archive](https://civarchive.com/models/170597?modelVersionId=191686)
ultratopaz/350946
ultratopaz
2025-09-01T23:14:37Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:14:36Z
[View on Civ Archive](https://civarchive.com/models/385389?modelVersionId=430083)
crystalline7/152226
crystalline7
2025-09-01T23:14:29Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:14:28Z
[View on Civ Archive](https://civarchive.com/models/177146?modelVersionId=198871)
BeitTigreAI/tigre-asr-Wav2Vec2Bert
BeitTigreAI
2025-09-01T23:14:25Z
9
0
null
[ "safetensors", "wav2vec2-bert", "speech-to-text", "tigre", "ctc", "beam-search", "kenlm", "automatic-speech-recognition", "tig", "license:cc-by-sa-4.0", "region:us" ]
automatic-speech-recognition
2025-08-30T00:05:47Z
--- license: cc-by-sa-4.0 language: tig tags: - speech-to-text - wav2vec2-bert - tigre - ctc - beam-search - kenlm pipeline_tag: automatic-speech-recognition ---
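The card above carries only metadata (Wav2Vec2-BERT, CTC, optional KenLM beam search) and no usage snippet. A minimal greedy-decoding sketch with the transformers ASR pipeline, assuming the repo ships a standard processor and that `sample.wav` is a hypothetical local 16 kHz recording:

```python
# Minimal sketch: greedy CTC transcription via the transformers ASR pipeline.
# "sample.wav" is a hypothetical local audio file; the KenLM beam-search decoding
# mentioned in the tags would need additional setup (e.g. pyctcdecode) not shown here.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="BeitTigreAI/tigre-asr-Wav2Vec2Bert",
)
print(asr("sample.wav")["text"])
```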
amethyst9/476009
amethyst9
2025-09-01T23:14:12Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:14:12Z
[View on Civ Archive](https://civarchive.com/models/503829?modelVersionId=560045)
crystalline7/358165
crystalline7
2025-09-01T23:13:56Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:13:55Z
[View on Civ Archive](https://civarchive.com/models/392375?modelVersionId=437691)
Muapi/ethereal-alien-concept-flux-ethanar
Muapi
2025-09-01T23:12:57Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:12:49Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Ethereal Alien Concept FLUX @Ethanar ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:822233@919453", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756768299
ggozzy
2025-09-01T23:12:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:12:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ultratopaz/1867724
ultratopaz
2025-09-01T23:12:45Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:12:44Z
[View on Civ Archive](https://civarchive.com/models/1740938?modelVersionId=1970254)
ultratopaz/162272
ultratopaz
2025-09-01T23:12:29Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:12:29Z
[View on Civ Archive](https://civarchive.com/models/188077?modelVersionId=211191)
seraphimzzzz/122707
seraphimzzzz
2025-09-01T23:12:14Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:12:13Z
[View on Civ Archive](https://civarchive.com/models/146112?modelVersionId=162627)
seraphimzzzz/137776
seraphimzzzz
2025-09-01T23:11:58Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:11:57Z
[View on Civ Archive](https://civarchive.com/models/159966?modelVersionId=179931)
sivakrishna123/my-jarvis-adapters
sivakrishna123
2025-09-01T23:11:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-01T23:11:19Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sivakrishna123 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
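The card above names the base model but not how to attach the adapters. Assuming the repo contains standard PEFT (LoRA) adapter weights, which the "adapters" name suggests but the card does not state, a sketch would be:

```python
# Sketch: attach the adapter repo to the 4-bit base model named in the card.
# Assumes PEFT/LoRA adapter weights; requires bitsandbytes and accelerate installed
# to load the prequantized base checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "sivakrishna123/my-jarvis-adapters")

inputs = tokenizer("Hello, Jarvis.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```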
amethyst9/1570636
amethyst9
2025-09-01T23:11:42Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:11:42Z
[View on Civ Archive](https://civarchive.com/models/1476360?modelVersionId=1669907)
crystalline7/402045
crystalline7
2025-09-01T23:11:25Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:11:25Z
[View on Civ Archive](https://civarchive.com/models/434362?modelVersionId=483828)
seraphimzzzz/366873
seraphimzzzz
2025-09-01T23:10:53Z
0
0
null
[ "region:us" ]
null
2025-09-01T23:10:51Z
[View on Civ Archive](https://civarchive.com/models/400753?modelVersionId=446902)
omerbektass/blockassist-bc-insectivorous_bold_lion_1756768186
omerbektass
2025-09-01T23:10:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:10:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/neon-paint-flux-lora
Muapi
2025-09-01T23:10:03Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:09:50Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Neon Paint Flux Lora ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: nps paint luminous vector art ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:766400@877373", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
gortanmeat/blockassist-bc-sturdy_trotting_caribou_1756768108
gortanmeat
2025-09-01T23:09:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy trotting caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:09:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy trotting caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Derendering/InkSight-Small-p
Derendering
2025-09-01T23:08:42Z
109
32
tf-keras
[ "tf-keras", "en", "zh", "ja", "vi", "arxiv:2402.05804", "doi:10.57967/hf/3661", "license:apache-2.0", "region:us" ]
null
2024-10-30T20:06:57Z
--- license: apache-2.0 language: - en - zh - ja - vi --- # InkSight Small-p From [InkSight: Offline-to-Online Handwriting Conversion by Learning to Read and Write](https://github.com/google-research/inksight) <div style="display: flex; gap: 0.5rem; flex-wrap: wrap; margin-bottom: 1rem;"> <a href="https://research.google/blog/a-return-to-hand-written-notes-by-learning-to-read-write/"> <img src="https://img.shields.io/badge/Google_Research_Blog-333333?&logo=google&logoColor=white" alt="Google Research Blog"> </a> <a href="https://arxiv.org/abs/2402.05804"> <img src="https://img.shields.io/badge/Read_the_Paper-4CAF50?&logo=arxiv&logoColor=white" alt="Read the Paper"> </a> <a href="https://huggingface.co/spaces/Derendering/Model-Output-Playground"> <img src="https://img.shields.io/badge/Output_Playground-007acc?&logo=huggingface&logoColor=white" alt="Try Demo on Hugging Face"> </a> <a href="https://charlieleee.github.io/publication/inksight/"> <img src="https://img.shields.io/badge/🔗_Project_Page-FFA500?&logo=link&logoColor=white" alt="Project Page"> </a> <a href="https://huggingface.co/datasets/Derendering/InkSight-Derenderings"> <img src="https://img.shields.io/badge/Dataset-InkSight-40AF40?&logo=huggingface&logoColor=white" alt="Hugging Face Dataset"> </a> <a href="https://githubtocolab.com/google-research/inksight/blob/main/colab.ipynb"> <img src="https://img.shields.io/badge/Example_Colab-F9AB00?&logo=googlecolab&logoColor=white" alt="Example colab"> </a> </div> <figure> <img src="https://charlieleee.github.io/publication/inksight/inksight_animation_gif.gif" alt="InkSight word-level" style="width: 100%;"> <figcaption>The illustration on InkSight's word-level model outputs both text and digital ink through "Recognize and Derender" inference. </figcaption> </figure> <div style="font-size: 16px; margin-top: 20px;"> <strong style="color: red;">Notice:</strong> Please use TensorFlow and tensorflow-text between version 2.15.0 and 2.17.0. Versions later than 2.17.0 may lead to unexpected behavior. We are currently investigating these issues. </div> ## Example Usage ```python from huggingface_hub import from_pretrained_keras import tensorflow_text model = from_pretrained_keras("Derendering/InkSight-Small-p") cf = model.signatures['serving_default'] prompt = "Derender the ink." # "Recognize and derender." or "Derender the ink: <text>" input_text = tf.constant([prompt], dtype=tf.string) image_encoded = tf.reshape(tf.io.encode_jpeg(np.array(image)[:, :, :3]), (1, 1)) output = cf(**{'input_text': input_text, 'image/encoded': image_encoded}) ``` <span>For full usage, please refer to the notebook: </span> <a href="https://githubtocolab.com/google-research/inksight/blob/main/colab.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" style="display: inline; vertical-align: middle;"></a> ## Model and Training Summary <table style="width:100%; border-collapse: collapse; font-family: Arial, sans-serif;"> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Model Architecture</th> <td style="border: 1px solid #333; padding: 10px;">A multimodal sequence-to-sequence Transformer model with the mT5 encoder-decoder architecture. 
It takes text tokens and ViT dense image embeddings as inputs to an encoder and autoregressively predicts discrete text and ink tokens with a decoder.</td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Input(s)</th> <td style="border: 1px solid #333; padding: 10px;">A pair of image and text.</td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Output(s)</th> <td style="border: 1px solid #333; padding: 10px;">Generated digital ink and text.</td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Usage</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>Application:</strong> The model is for research prototype, and the public version is released and available for the public.<br> <strong>Known Caveats:</strong> None. </td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">System Type</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>System Description:</strong> This is a standalone model.<br> <strong>Upstream Dependencies:</strong> None.<br> <strong>Downstream Dependencies:</strong> None. </td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Implementation Frameworks</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>Hardware & Software:</strong> Hardware: TPU v5e.<br> Software: T5X , JAX/Flax, Flaxformer.<br> <strong>Compute Requirements:</strong> We train all of our models for 340k steps with batch size 512. With frozen ViT encoders, the training of Small-i takes ∼33h on 64 TPU v5e chips and the training of Large-i takes ∼105h on 64 TPU v5e chips. </td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Data Overview</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>Training Datasets:</strong> The ViT encoder of Small-p is pretrained on ImageNet-21k, mT5 encoder and decoder are initialized from scratch. The entire model is trained on the mixture of publicly available datasets described in next section. </td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Evaluation Results</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>Evaluation Methods:</strong> Human evaluation (reported in Section 4.5.1 of the paper) and automated evaluations (reported in Section 4.5.2 of the paper). </td> </tr> <tr> <th style="width: 30%; border: 1px solid #333; padding: 10px;">Model Usage & Limitations</th> <td style="border: 1px solid #333; padding: 10px;"> <strong>Sensitive Use:</strong> The model is capable of converting images to digital inks. This model should not be used for any of the privacy-intruding use cases, e.g., forging handwritings.<br> <strong>Known Limitations:</strong> Reported in Appendix I of the paper.<br> <strong>Ethical Considerations & Potential Societal Consequences:</strong> Reported in Sections 6.1 and 6.2 of the paper. </td> </tr> </table> ## Citation If you find our work useful for your research and applications, please cite using this BibTeX: ```bibtex @article{ mitrevski2025inksight, title={InkSight: Offline-to-Online Handwriting Conversion by Teaching Vision-Language Models to Read and Write}, author={Blagoj Mitrevski and Arina Rak and Julian Schnitzler and Chengkun Li and Andrii Maksai and Jesse Berent and Claudiu Cristian Musat}, journal={Transactions on Machine Learning Research}, issn={2835-8856}, year={2025}, url={https://openreview.net/forum?id=pSyUfV5BqA}, note={} } ```
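A note on the usage snippet in the InkSight card above: as written it relies on `tf`, `np`, and an already-loaded `image` that it never defines. A self-contained restatement under the same assumptions, with the missing imports filled in and a PIL image loaded from a hypothetical local path:

```python
# Self-contained version of the card's example; "word.png" is a hypothetical
# local image path, and the tensorflow/numpy imports are filled in.
import numpy as np
import tensorflow as tf
import tensorflow_text  # noqa: F401 (registers ops required by the SavedModel)
from huggingface_hub import from_pretrained_keras
from PIL import Image

model = from_pretrained_keras("Derendering/InkSight-Small-p")
cf = model.signatures["serving_default"]

image = Image.open("word.png").convert("RGB")
prompt = "Derender the ink."  # or "Recognize and derender." / "Derender the ink: <text>"

input_text = tf.constant([prompt], dtype=tf.string)
image_encoded = tf.reshape(tf.io.encode_jpeg(np.array(image)[:, :, :3]), (1, 1))
output = cf(**{"input_text": input_text, "image/encoded": image_encoded})
```

Decoding `output` back into text and ink strokes is covered by the Colab notebook linked in the card.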
moyixiao/Qwen3-0.6B-GRPO-f16-150
moyixiao
2025-09-01T23:05:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T23:04:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
omerbektass/blockassist-bc-insectivorous_bold_lion_1756767832
omerbektass
2025-09-01T23:04:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:04:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/general-grievous-star-wars-1.5-sdxl-flux
Muapi
2025-09-01T23:03:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T23:02:27Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # General Grievous - Star Wars (1.5 / SDXL / FLUX) ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: GeneralGrievous ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:201697@736567", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
bah63843/blockassist-bc-plump_fast_antelope_1756767685
bah63843
2025-09-01T23:02:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:02:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
moscowx21/blockassist-bc-extinct_bipedal_clam_1756767707
moscowx21
2025-09-01T23:02:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "extinct bipedal clam", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:02:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - extinct bipedal clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756767632
vendi11
2025-09-01T23:01:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:01:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756767537
ggozzy
2025-09-01T23:00:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T23:00:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gortanmeat/blockassist-bc-sturdy_trotting_caribou_1756767542
gortanmeat
2025-09-01T22:59:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy trotting caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:59:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy trotting caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-shaggy_melodic_cobra_1756767498
AnerYubo
2025-09-01T22:58:21Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy melodic cobra", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:58:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy melodic cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-insectivorous_bold_lion_1756767474
omerbektass
2025-09-01T22:58:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:58:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF
mradermacher
2025-09-01T22:57:00Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "lfm2", "trl", "en", "ml", "base_model:Praha-Labs/LFM-MALAYALAM-TTS-v0.1", "base_model:quantized:Praha-Labs/LFM-MALAYALAM-TTS-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-01T22:53:14Z
--- base_model: Praha-Labs/LFM-MALAYALAM-TTS-v0.1 language: - en - ml library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - lfm2 - trl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Praha-Labs/LFM-MALAYALAM-TTS-v0.1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LFM-MALAYALAM-TTS-v0.1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q6_K.gguf) | Q6_K | 0.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LFM-MALAYALAM-TTS-v0.1-GGUF/resolve/main/LFM-MALAYALAM-TTS-v0.1.f16.gguf) | f16 | 0.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/FACT-1-GGUF
mradermacher
2025-09-01T22:57:00Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:joel-crasto/FACT-1", "base_model:quantized:joel-crasto/FACT-1", "endpoints_compatible", "region:us" ]
null
2025-09-01T22:55:18Z
--- base_model: joel-crasto/FACT-1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/joel-crasto/FACT-1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#FACT-1-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q2_K.gguf) | Q2_K | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q3_K_S.gguf) | Q3_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.IQ4_XS.gguf) | IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q3_K_L.gguf) | Q3_K_L | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q5_K_S.gguf) | Q5_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q5_K_M.gguf) | Q5_K_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q6_K.gguf) | Q6_K | 0.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/FACT-1-GGUF/resolve/main/FACT-1.f16.gguf) | f16 | 0.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756767284
ggozzy
2025-09-01T22:56:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:55:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756767178
ypszn
2025-09-01T22:53:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:53:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1756767157
bah63843
2025-09-01T22:53:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:53:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-insectivorous_bold_lion_1756767133
omerbektass
2025-09-01T22:52:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:52:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1756767094
vendi11
2025-09-01T22:52:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:52:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gortanmeat/blockassist-bc-sturdy_trotting_caribou_1756767045
gortanmeat
2025-09-01T22:51:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy trotting caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:51:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy trotting caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756767034
klmdr22
2025-09-01T22:51:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:51:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
JaspervanLeuven/smol_T1_E50_pick_and_place_servo_29_08_2025
JaspervanLeuven
2025-09-01T22:48:12Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:JaspervanLeuven/T1_E50_pick_and_place_servo_29_08_2025", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-01T22:48:00Z
---
base_model: lerobot/smolvla_base
datasets: JaspervanLeuven/T1_E50_pick_and_place_servo_29_08_2025
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---

# Model Card for smolvla

<!-- Provide a quick summary of what the model is/does. -->

[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).

---

## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval:

### Train from scratch

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```

_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy/run inference

```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.

---

## Model Details

- **License:** apache-2.0
CHRISPI09/blockassist-bc-galloping_thick_tuna_1756766869
CHRISPI09
2025-09-01T22:48:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:48:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-insectivorous_bold_lion_1756766867
akirafudo
2025-09-01T22:48:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:48:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
javdrher/decide-decision-classifier
javdrher
2025-09-01T22:47:51Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-01T21:19:44Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: decide-decision-classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# decide-decision-classifier

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5161
- Accuracy: 0.8295
- Precision: 0.7990
- Recall: 0.8295
- F1: 0.8137

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 37   | 0.5644          | 0.7752   | 0.7577    | 0.7752 | 0.7628 |
| No log        | 2.0   | 74   | 0.5161          | 0.8295   | 0.7990    | 0.8295 | 0.8137 |

### Framework versions

- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
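Since the card leaves the intended-use sections empty, a hedged inference sketch may help: the checkpoint can be called through the standard `transformers` text-classification pipeline. The example input is invented, and the meaning of the returned labels is not documented in the card, so inspect the label mapping in the repo's config before relying on it.

```python
# Sketch only: run the fine-tuned DistilBERT classifier via the pipeline API.
# Label names/ids are undocumented in the card; check model.config.id2label.
from transformers import pipeline

clf = pipeline("text-classification", model="javdrher/decide-decision-classifier")
print(clf("We have decided to postpone the product launch until next quarter."))
```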
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756766774
ggozzy
2025-09-01T22:47:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:47:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1756765305
koloni
2025-09-01T22:47:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:47:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
csikasote/mms-1b-all-swagen-female-15hrs-52
csikasote
2025-09-01T22:47:03Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "swagen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-01T22:16:55Z
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-female-15hrs-52
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mms-1b-all-swagen-female-15hrs-52

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2575
- Wer: 0.2260

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.1807        | 0.1572 | 200  | 1.8716          | 0.9711 |
| 1.6673        | 0.3145 | 400  | 0.2982          | 0.2073 |
| 1.286         | 0.4717 | 600  | 0.2890          | 0.2100 |
| 1.3054        | 0.6289 | 800  | 0.2896          | 0.2131 |
| 1.2144        | 0.7862 | 1000 | 0.2886          | 0.2177 |
| 1.1739        | 0.9434 | 1200 | 0.2815          | 0.2181 |
| 1.1605        | 1.1006 | 1400 | 0.2796          | 0.2176 |
| 1.0902        | 1.2579 | 1600 | 0.2798          | 0.2214 |
| 1.1329        | 1.4151 | 1800 | 0.2760          | 0.2266 |
| 1.0894        | 1.5723 | 2000 | 0.2626          | 0.2299 |
| 1.0737        | 1.7296 | 2200 | 0.2607          | 0.2341 |
| 1.0698        | 1.8868 | 2400 | 0.2576          | 0.2260 |
| 1.0905        | 2.0440 | 2600 | 0.2542          | 0.2295 |
| 1.0489        | 2.2013 | 2800 | 0.2555          | 0.2307 |
| 1.0234        | 2.3585 | 3000 | 0.2594          | 0.2365 |
| 1.0368        | 2.5157 | 3200 | 0.2551          | 0.2357 |
| 1.0381        | 2.6730 | 3400 | 0.2543          | 0.2316 |

### Framework versions

- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
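As a hedged sketch (not part of the auto-generated card), the checkpoint can be used for transcription through the `transformers` ASR pipeline. The audio path is a placeholder, and 16 kHz mono input is assumed, as is usual for MMS-based models; whether a specific language adapter needs to be selected is not stated in the card.

```python
# Sketch only: transcribe a local 16 kHz audio file with the fine-tuned MMS checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-swagen-female-15hrs-52",
)
result = asr("sample.wav")  # placeholder path; audio is resampled by the pipeline
print(result["text"])
```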
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1756765198
helmutsukocok
2025-09-01T22:44:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:44:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
emaanbilal/mistral_7b_legal_fft
emaanbilal
2025-09-01T22:42:55Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T15:39:13Z
---
base_model: mistralai/Mistral-7B-Instruct-v0.1
library_name: transformers
model_name: mistral_7b_legal_fft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for mistral_7b_legal_fft

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="emaanbilal/mistral_7b_legal_fft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/emaanbilal-aag/full-finetuning-medical/runs/07voavl9)

This model was trained with SFT.

### Framework versions

- TRL: 0.19.1
- Transformers: 4.54.1
- Pytorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
gortanmeat/blockassist-bc-sturdy_trotting_caribou_1756766506
gortanmeat
2025-09-01T22:42:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy trotting caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:42:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy trotting caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CHRISPI09/blockassist-bc-galloping_thick_tuna_1756766524
CHRISPI09
2025-09-01T22:42:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:42:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1756766436
klmdr22
2025-09-01T22:41:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:41:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sergSt/flux-lora
sergSt
2025-09-01T22:41:13Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "lora", "fluxart", "image-generation", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-09-01T18:02:29Z
---
tags:
- stable-diffusion
- lora
- fluxart
- image-generation
license: creativeml-openrail-m
base_model: FluxArt/stable-flux-v1.5
library_name: diffusers
pipeline_tag: text-to-image
---

# Flux LoRA v0.004

A LoRA adaptation of [FluxArt/stable-flux-v1.5](https://huggingface.co/FluxArt/stable-flux-v1.5), intended for generating cinematic portraits in cyberpunk, neon-noir, and sci-fi styles.

## 🧠 Usage

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("FluxArt/stable-flux-v1.5").to("cuda")
pipe.load_lora_weights("sergSt/flux-lora", weight_name="flux-lora-000004.safetensors")
pipe.fuse_lora()

image = pipe("cinematic portrait of a cyberpunk samurai").images[0]
image.save("output.png")
```
acidjp/blockassist-bc-pesty_extinct_prawn_1756763927
acidjp
2025-09-01T22:39:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:39:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CHRISPI09/blockassist-bc-galloping_thick_tuna_1756766321
CHRISPI09
2025-09-01T22:39:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:38:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ggozzy/blockassist-bc-stubby_yapping_mandrill_1756766266
ggozzy
2025-09-01T22:39:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby yapping mandrill", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:38:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby yapping mandrill --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thyYu2024/dnus_22
thyYu2024
2025-09-01T22:38:46Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-01T21:43:56Z
---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: dnus_22
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for dnus_22

This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thyYu2024/dnus_22", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
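The quick-start snippet above is the generic TRL template; because the base model is a vision-language model, a multimodal call is likely more representative. The following is a hedged sketch only: it assumes the repo holds full Qwen2-VL weights (as the safetensors tag suggests), borrows the processor from the base model in case processor files were not pushed with the fine-tune, and uses a placeholder image path.

```python
# Sketch only: image + text inference in the Qwen2-VL style, under the assumptions above.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")  # base-model processor
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "thyYu2024/dnus_22", torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
image = Image.open("example.jpg")  # placeholder image path

inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```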
liukevin666/blockassist-bc-yawning_striped_cassowary_1756766160
liukevin666
2025-09-01T22:37:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:37:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CHRISPI09/blockassist-bc-galloping_thick_tuna_1756766138
CHRISPI09
2025-09-01T22:36:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:35:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-insectivorous_bold_lion_1756766135
akirafudo
2025-09-01T22:35:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:35:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
boopmoor/blockassist-bc-winged_bipedal_quail_1756766093
boopmoor
2025-09-01T22:35:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "winged bipedal quail", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:34:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - winged bipedal quail --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
omerbektass/blockassist-bc-insectivorous_bold_lion_1756766027
omerbektass
2025-09-01T22:34:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:34:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1756765900
xinnn32
2025-09-01T22:33:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:33:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
csikasote/mms-1b-all-swagen-female-15hrs-62
csikasote
2025-09-01T22:32:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "swagen", "mms", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-01T22:09:15Z
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-female-15hrs-62
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mms-1b-all-swagen-female-15hrs-62

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2764
- Wer: 0.2177

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.8829        | 0.1572 | 200  | 3.9628          | 0.9998 |
| 3.5591        | 0.3145 | 400  | 3.2718          | 1.0002 |
| 3.1338        | 0.4717 | 600  | 2.9476          | 0.9992 |
| 2.2249        | 0.6289 | 800  | 0.3619          | 0.2176 |
| 1.4148        | 0.7862 | 1000 | 0.2835          | 0.2199 |
| 1.2068        | 0.9434 | 1200 | 0.2764          | 0.2181 |
| 1.1452        | 1.1006 | 1400 | 0.2731          | 0.2197 |
| 1.1111        | 1.2579 | 1600 | 0.2741          | 0.2216 |
| 1.1289        | 1.4151 | 1800 | 0.2698          | 0.2285 |
| 1.1383        | 1.5723 | 2000 | 0.2735          | 0.2334 |
| 1.0541        | 1.7296 | 2200 | 0.2727          | 0.2299 |
| 1.0778        | 1.8868 | 2400 | 0.2713          | 0.2309 |
| 1.0617        | 2.0440 | 2600 | 0.2720          | 0.2359 |

### Framework versions

- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
klmdr22/blockassist-bc-wild_loud_newt_1756765899
klmdr22
2025-09-01T22:32:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:32:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
boopmoor/blockassist-bc-reclusive_deadly_scorpion_1756765819
boopmoor
2025-09-01T22:30:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive deadly scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:30:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive deadly scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
akirafudo/blockassist-bc-insectivorous_bold_lion_1756765783
akirafudo
2025-09-01T22:30:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous bold lion", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:30:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous bold lion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ypszn/blockassist-bc-yapping_pawing_worm_1756765740
ypszn
2025-09-01T22:29:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping pawing worm", "arxiv:2504.07091", "region:us" ]
null
2025-09-01T22:29:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping pawing worm --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/flux-concept-vehicle-2d-rendering-lora-flux-lora
Muapi
2025-09-01T22:29:01Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-01T22:28:14Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---

# Flux concept vehicle 2d rendering lora FLUX概念载具渲染lora

![preview](./preview.jpg)

**Base model**: Flux.1 D

**Trained words**: oue style

## 🧠 Usage (Python)

🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}

payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:651930@729346", "weight": 1.0}],
    "width": 1024, "height": 1024, "num_images": 1
}

print(requests.post(url, headers=headers, json=payload).json())
```