Dataset columns (name, type, observed range):

| Column | Type | Observed range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-21 12:34:09 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 568 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-21 12:33:58 |
| card | string | length 11 to 1.01M |
CodeIsAbstract/language_parser-Q8_0-GGUF
|
CodeIsAbstract
| 2025-08-23T10:07:11Z | 108 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:CodeIsAbstract/language_parser",
"base_model:quantized:CodeIsAbstract/language_parser",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:23:42Z |
---
base_model: CodeIsAbstract/language_parser
tags:
- llama-cpp
- gguf-my-repo
---
# CodeIsAbstract/language_parser-Q8_0-GGUF
This model was converted to GGUF format from [`CodeIsAbstract/language_parser`](https://huggingface.co/CodeIsAbstract/language_parser) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CodeIsAbstract/language_parser) for more details on the model.
## Usage
```
user prompt:
<data>human natural language data</data>
<format type=xml/yaml/json>
field name: value_type
..
</format>
```
### Example
```
<format type="yaml">
{submission_id: string, paper_title: string, review_round: number, submission_status: string, final_editor_decision_details: {decision_type: string, decision_date: date, editor_notes: string optional}, reviewers_feedback: [{reviewer_id: string, review_date: date, overall_score: number optional, contribution_summary: {novelty_assessment: string, rigor_assessment: string, clarity_assessment: string, ethical_concerns_raised: boolean}, specific_areas_for_improvement: [{section: string, issue_type: string, original_text_quote: string optional, suggested_change: string, severity: string}], confidential_comments: string optional}]}
</format>
<data>
**CONFIDENTIAL EDITORIAL REPORT**
**Subject:** Editorial Decision on Manuscript MS-2022-493-R2
**Title:** "Start Sit Put Prevent Room Return Law Pay Memory Than: Organized optimizing complexity"
This memo summarizes the outcome of review round 2. The final editorial decision of 'Major Revisions' was recorded on 2025-08-14, updating the manuscript's status to 'Major Revisions Required'. The handling editor's summary note states: "Opportunity hear can else course oil. Interest Democrat try. Figure evidence bad middle off call." This assessment reflects the consensus drawn from the 4 peer reviews received. Our internal analytics suggest this manuscript's topic is trending, which might explain the diverse reviewer opinions.
**Review Panel Feedback Synthesis:**
--------------------------------------
**Reviewer 1 (ID: RVR-T-996)** submitted their evaluation on 2024-11-17.
They provided an overall score of 4/10. Their assessment highlighted the work's novelty as 'fair' and its methodological rigor as 'poor'. Primary points for revision included:
- A 'suggestion' issue of type 'Grammar' was identified in the 'Discussion' section. The suggested action is to: "Best phone stuff accept place black describe white."
The comment seems to target text similar to '...Increase size public next put deal low a number similar....'.
- A 'suggestion' issue of type 'Unclear_Argument' was identified in the 'Methods' section. The suggested action is to: "Better partner treat decision cost around receive stock tree cup major born them character forget."
The comment seems to target text similar to '...Them eat middle hotel impact tree radio recognize....'.
- A 'minor' issue of type 'Methodological_Flaw' was identified in the 'Conclusion' section. The suggested action is to: "Scene source action to usually it majority radio chance page article where somebody."
*Confidential Note to Editor:* Be partner financial fill. Scene power head year gun TV decade.
**Reviewer 2 (ID: RVR-T-282)** submitted their evaluation on 2024-12-08.
They provided an overall score of 10/10. Their assessment highlighted the work's novelty as 'high' and its methodological rigor as 'good'. Primary points for revision included:
- A 'suggestion' issue of type 'Methodological_Flaw' was identified in the 'Abstract' section. The suggested action is to: "Though movement build will impact because nothing keep stop quality contain guess family teach conference."
The comment seems to target text similar to '...Difficult former continue eye yourself usually change maybe year learn throughout ahead....'.
- A 'critical' issue of type 'Formatting' was identified in the 'Overall' section. The suggested action is to: "Improve detail this no social method begin continue eye unit white eye position common discuss only read arm source hard oil feeling project bar opportunity series certain."
The comment seems to target text similar to '...Garden reveal ball surface growth power....'.
*Confidential Note to Editor:* Western though doctor. Speech soon explain whatever.
**Reviewer 3 (ID: RVR-D-397)** submitted their evaluation on 2025-06-18.
They provided an overall score of 6/10. Their assessment highlighted the work's novelty as 'poor' and its methodological rigor as 'medium'. Primary points for revision included:
- A 'minor' issue of type 'Literature_Gap' was identified in the 'Results' section. The suggested action is to: "Phone star happy capital series tax model analysis."
- A 'major' issue of type 'Formatting' was identified in the 'Methods' section. The suggested action is to: "Style half issue agency decision nor player risk man produce skin lead author particular nation old."
- A 'major' issue of type 'Methodological_Flaw' was identified in the 'Discussion' section. The suggested action is to: "Note not safe mention too ahead visit tax."
The comment seems to target text similar to '...Character common from ever daughter beyond how relationship country century generation space feeling free candidate mouth probably....'.
- A 'major' issue of type 'Scope' was identified in the 'Abstract' section. The suggested action is to: "Member who summer industry imagine network sure back tree movie play someone father season happen bar first ago defense."
**Reviewer 4 (ID: RVR-C-823)** submitted their evaluation on 2025-04-17.
Their assessment highlighted the work's novelty as 'good' and its methodological rigor as 'bad'. Primary points for revision included:
- A 'suggestion' issue of type 'Unclear_Argument' was identified in the 'Introduction' section. The suggested action is to: "Hour range increase line shoulder lead fast significant high human particularly."
The comment seems to target text similar to '...Performance receive radio beat dinner after next....'.
- A 'major' issue of type 'Unclear_Argument' was identified in the 'Abstract' section. The suggested action is to: "Allow notice season teach ground soldier indeed four majority day center ask show difficult may despite modern single apply phone."
The comment seems to target text similar to '...Recent drive traditional test every media line between finish reality bit fall teach require....'.
*Confidential Note to Editor:* Factor best share wife current. Consider movement first prepare technology.
**Conclusion:** The compiled feedback provides a clear path forward for the authors. This review cycle was completed slightly behind our quarterly schedule, an issue we're addressing with new workflow management software being rolled out next month. The decision letter is now ready for dispatch.
</data>
```
Try the response for yourself.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -c 2048
```
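Once the server is up, you can send the structured prompt described in the Usage section over HTTP. The snippet below is only a minimal sketch: it assumes the server is listening on its default port (8080) and exposing the OpenAI-compatible `/v1/chat/completions` route, and the example document and schema are invented for illustration.
```python
import requests

# Build a prompt in the <data>/<format> structure shown in the Usage section
# (the document text and field names here are made-up examples).
prompt = """<data>Order #1234 was shipped to Berlin on 2024-05-01 and weighs 2.5 kg.</data>
<format type="json">
order_id: string
destination: string
ship_date: date
weight_kg: number
</format>"""

# llama-server exposes an OpenAI-compatible chat completions endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```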
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CodeIsAbstract/language_parser-Q8_0-GGUF --hf-file language_parser-q8_0.gguf -c 2048
```
|
bingchilling0096/blockassist-bc-sniffing_alert_stingray_1755943612
|
bingchilling0096
| 2025-08-23T10:07:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing alert stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T10:07:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing alert stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Umarafzal123/my-cool-model
|
Umarafzal123
| 2025-08-23T10:04:00Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-23T10:04:00Z |
---
license: apache-2.0
---
|
ianxkaranja/DirectEd-Curriculum-Bot-LoRA
|
ianxkaranja
| 2025-08-23T10:03:35Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/gemma-2b",
"lora",
"sft",
"trl",
"text-generation",
"en",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] |
text-generation
| 2025-08-23T09:04:09Z |
---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-2b
- lora
- sft
- trl
license: gemma
language:
- en
---
# Model Card for DirectEd-Curriculum-Bot-LoRA
This model is a fine-tuned version of google/gemma-2b. It has been trained using TRL.
## Model Details
### Model Description
This model is a fine-tuned language model designed for chatbot interactions. It was trained on a dataset of ~669 lines of curated text, including conversational prompts, responses, and domain-specific knowledge.
The goal of the model is to generate coherent, contextually relevant, and user-friendly responses for chatbot use cases.
- Developed by: Ian Karanja
- Finetuned from model: google/gemma-2b
- Training data size: ~669 lines of text
- Model type: Causal Language Model
- Intended use: Chatbot interactions in a learning assistant
- **Developed by:** Ian Karanja
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA Adapter for Causal Language Model (Gemma-2B base)
- **Language(s) (NLP):** English
- **License:** Gemma (Google Gemma Terms of Use)
- **Finetuned from model [optional]:** https://huggingface.co/google/gemma-2b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/ianxkaranja/DirectEd-Curriculum-Bot-LoRA
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This LoRA adapter is intended to support educational chatbots for the DirectEd e-learning curriculum. It specializes in:
- Web design & development
- MERN stack (TypeScript + React + MongoDB + Node.js)
- Service Design & Product Management basics
- Generative AI & LLMOps (Prompt Engineering, RAG, LoRA fine-tuning)
### Downstream Use [optional]
Can be integrated into tutoring platforms, e-learning assistants, or LangChain-powered educational bots.
### Out-of-Scope Use
Not designed for:
- General chit-chat outside of educational domains
- Medical, legal, or sensitive advice
- Toxic or harmful content generation
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
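Since no snippet is provided yet, here is a minimal loading sketch. It assumes the standard `transformers` + `peft` APIs and that you have accepted the Gemma license on the Hub; the prompt is just an illustrative placeholder.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "ianxkaranja/DirectEd-Curriculum-Bot-LoRA"

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Explain what a REST API is to a beginner web-development student."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```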
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
roeker/blockassist-bc-quick_wiry_owl_1755943133
|
roeker
| 2025-08-23T10:00:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:59:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755941331
|
indoempatnol
| 2025-08-23T09:58:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:58:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
samil24/whisper-medium-sorani-v1
|
samil24
| 2025-08-23T09:57:00Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-20T08:42:22Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-sorani-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-sorani-v1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2531
- Wer: 18.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
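As a rough illustration of intended use (a sketch only, assuming the checkpoint works with the standard `transformers` speech-recognition pipeline; the audio file name is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="samil24/whisper-medium-sorani-v1",
)

# Transcribe a local audio file; chunking helps with recordings longer than 30 s.
result = asr("sample_sorani.wav", chunk_length_s=30)
print(result["text"])
```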
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1250
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.2868 | 0.3365 | 500 | 0.3089 | 43.5663 |
| 0.2449 | 0.6729 | 1000 | 0.2722 | 41.1446 |
| 0.2169 | 1.0094 | 1500 | 0.2523 | 37.0698 |
| 0.1657 | 1.3459 | 2000 | 0.2198 | 33.5799 |
| 0.1622 | 1.6824 | 2500 | 0.2001 | 30.5878 |
| 0.0903 | 2.0188 | 3000 | 0.1891 | 28.5673 |
| 0.0997 | 2.3553 | 3500 | 0.1959 | 29.2150 |
| 0.1011 | 2.6918 | 4000 | 0.1738 | 27.6682 |
| 0.0605 | 3.0283 | 4500 | 0.1892 | 27.4459 |
| 0.0538 | 3.3647 | 5000 | 0.1953 | 27.0495 |
| 0.0662 | 3.7012 | 5500 | 0.1816 | 25.2417 |
| 0.0337 | 4.0377 | 6000 | 0.1968 | 25.4060 |
| 0.0372 | 4.3742 | 6500 | 0.1978 | 24.5698 |
| 0.0335 | 4.7106 | 7000 | 0.1993 | 23.8012 |
| 0.0225 | 5.0471 | 7500 | 0.2147 | 24.2556 |
| 0.0305 | 5.3836 | 8000 | 0.2007 | 23.8592 |
| 0.0279 | 5.7201 | 8500 | 0.2105 | 24.2846 |
| 0.0156 | 6.0565 | 9000 | 0.2077 | 22.9988 |
| 0.0173 | 6.3930 | 9500 | 0.2177 | 23.0278 |
| 0.0167 | 6.7295 | 10000 | 0.2148 | 22.7523 |
| 0.0118 | 7.0659 | 10500 | 0.2232 | 22.7523 |
| 0.0132 | 7.4024 | 11000 | 0.2185 | 23.2502 |
| 0.0171 | 7.7389 | 11500 | 0.2167 | 23.2115 |
| 0.0096 | 8.0754 | 12000 | 0.2233 | 22.6363 |
| 0.0106 | 8.4118 | 12500 | 0.2167 | 21.8581 |
| 0.0116 | 8.7483 | 13000 | 0.2227 | 22.4188 |
| 0.0074 | 9.0848 | 13500 | 0.2265 | 21.6067 |
| 0.0085 | 9.4213 | 14000 | 0.2305 | 22.0998 |
| 0.0107 | 9.7577 | 14500 | 0.2409 | 21.9499 |
| 0.0065 | 10.0942 | 15000 | 0.2258 | 21.1959 |
| 0.0058 | 10.4307 | 15500 | 0.2295 | 21.5922 |
| 0.0044 | 10.7672 | 16000 | 0.2343 | 21.5052 |
| 0.0041 | 11.1036 | 16500 | 0.2345 | 21.3312 |
| 0.0055 | 11.4401 | 17000 | 0.2276 | 21.3844 |
| 0.0035 | 11.7766 | 17500 | 0.2366 | 20.9735 |
| 0.0026 | 12.1131 | 18000 | 0.2387 | 20.4853 |
| 0.0036 | 12.4495 | 18500 | 0.2277 | 20.6255 |
| 0.0018 | 12.7860 | 19000 | 0.2396 | 20.5191 |
| 0.0025 | 13.1225 | 19500 | 0.2292 | 20.3258 |
| 0.0017 | 13.4590 | 20000 | 0.2385 | 20.3113 |
| 0.0017 | 13.7954 | 20500 | 0.2388 | 20.2533 |
| 0.0009 | 14.1319 | 21000 | 0.2399 | 20.0454 |
| 0.0017 | 14.4684 | 21500 | 0.2424 | 19.8231 |
| 0.0016 | 14.8048 | 22000 | 0.2437 | 20.1373 |
| 0.0005 | 15.1413 | 22500 | 0.2417 | 19.9923 |
| 0.0019 | 15.4778 | 23000 | 0.2399 | 19.3010 |
| 0.0006 | 15.8143 | 23500 | 0.2449 | 19.1899 |
| 0.0003 | 16.1507 | 24000 | 0.2518 | 19.1850 |
| 0.0006 | 16.4872 | 24500 | 0.2555 | 19.4026 |
| 0.0009 | 16.8237 | 25000 | 0.2468 | 19.3010 |
| 0.0011 | 17.1602 | 25500 | 0.2461 | 19.2769 |
| 0.0004 | 17.4966 | 26000 | 0.2418 | 19.2624 |
| 0.0001 | 17.8331 | 26500 | 0.2525 | 19.1125 |
| 0.0001 | 18.1696 | 27000 | 0.2509 | 19.0594 |
| 0.0 | 18.5061 | 27500 | 0.2520 | 19.0690 |
| 0.0001 | 18.8425 | 28000 | 0.2516 | 19.0497 |
| 0.0 | 19.1790 | 28500 | 0.2521 | 19.0449 |
| 0.0 | 19.5155 | 29000 | 0.2526 | 18.9869 |
| 0.0 | 19.8520 | 29500 | 0.2531 | 18.9917 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755941884
|
Sayemahsjn
| 2025-08-23T09:56:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:56:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mimoha/ocr
|
mimoha
| 2025-08-23T09:55:48Z | 0 | 0 |
mistralai
|
[
"mistralai",
"pytorch",
"ocr",
"image-to-text",
"ar",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2025-08-23T09:33:14Z |
---
pipeline_tag: image-to-text
library_name: mistralai
license: apache-2.0
language: ar
---
# OCR Arabic Model
An OCR model capable of extracting text from images in Arabic.
## Usage
```python
from mistralai import Mistral, ImageURLChunk

# Authenticate with your API key (the value here is a placeholder).
client = Mistral(api_key="HF_TOKEN")

# Run OCR on a base64-encoded image passed as a data URL.
result = client.ocr.process(
    document=ImageURLChunk(image_url="data:image/jpeg;base64,..."),
    model="ocr",
)
print(result)
```
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755942689
|
esi777
| 2025-08-23T09:52:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:51:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bha456423/blockassist-bc-quiet_fishy_bison_1755942582
|
bha456423
| 2025-08-23T09:50:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quiet fishy bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:50:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quiet fishy bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kalvina1/blockassist-bc-scruffy_bellowing_snail_1755942500
|
Kalvina1
| 2025-08-23T09:48:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy bellowing snail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:48:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy bellowing snail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755942405
|
Elizavr
| 2025-08-23T09:47:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:47:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pinktulip888/qwenpenguingen3
|
pinktulip888
| 2025-08-23T09:47:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-23T09:47:04Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pinktulip888
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
csukuangfj/android-onnxruntime-libs
|
csukuangfj
| 2025-08-23T09:39:36Z | 0 | 3 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-02-23T04:42:44Z |
---
license: apache-2.0
---
# Introduction
Libraries in this repository are intended for use in
https://github.com/k2-fsa/sherpa-onnx
They are downloaded from
https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android/1.14.0
```
wget https://repo1.maven.org/maven2/com/microsoft/onnxruntime/onnxruntime-android/1.14.0/onnxruntime-android-1.14.0.aar
mv onnxruntime-android-1.14.0.aar onnxruntime-android-1.14.0.zip
unzip onnxruntime-android-1.14.0.zip
cd onnxruntime-android-1.14.0
tree .
```
```
.
├── AndroidManifest.xml
├── R.txt
├── arm64-v8a
├── armeabi-v7a
├── classes.jar
├── headers
│ ├── cpu_provider_factory.h
│ ├── nnapi_provider_factory.h
│ ├── onnxruntime_c_api.h
│ ├── onnxruntime_cxx_api.h
│ └── onnxruntime_cxx_inline.h
├── jni
│ ├── arm64-v8a
│ │ ├── libonnxruntime.so
│ │ └── libonnxruntime4j_jni.so
│ ├── armeabi-v7a
│ │ ├── libonnxruntime.so
│ │ └── libonnxruntime4j_jni.so
│ ├── x86
│ │ ├── libonnxruntime.so
│ │ └── libonnxruntime4j_jni.so
│ └── x86_64
│ ├── libonnxruntime.so
│ └── libonnxruntime4j_jni.so
├── x86
└── x86_64
10 directories, 16 files
```
|
chainway9/blockassist-bc-untamed_quick_eel_1755940257
|
chainway9
| 2025-08-23T09:38:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:38:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755940096
|
unitova
| 2025-08-23T09:35:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:35:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755941577
|
Dejiat
| 2025-08-23T09:33:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:33:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
edimaosom1/blockassist-bc-padded_crested_gull_1755939841
|
edimaosom1
| 2025-08-23T09:32:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"padded crested gull",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:32:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- padded crested gull
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755941429
|
liukevin666
| 2025-08-23T09:32:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:31:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755939997
|
lisaozill03
| 2025-08-23T09:30:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:30:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Yingerrrrrr/blockassist-bc-gilded_tiny_barracuda_1755941141
|
Yingerrrrrr
| 2025-08-23T09:26:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded tiny barracuda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:26:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded tiny barracuda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755941036
|
kapalbalap
| 2025-08-23T09:24:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:24:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755939269
|
ihsanridzi
| 2025-08-23T09:21:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:20:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bimabk/e4d4be7b-0233-4468-a9f1-3b76f72bf91f
|
bimabk
| 2025-08-23T09:18:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"region:us"
] |
text-generation
| 2025-08-23T09:18:47Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Qwen2.5-3B-Instruct
- grpo
- lora
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
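No snippet is provided; the following is only a hypothetical sketch, assuming the repository is a standard `peft` LoRA adapter on top of the base model named in the metadata (the prompt is a placeholder).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-3B-Instruct"
adapter_id = "bimabk/e4d4be7b-0233-4468-a9f1-3b76f72bf91f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the GRPO-trained LoRA adapter; merging folds it into the base weights
# so generation runs without the adapter indirection.
model = PeftModel.from_pretrained(model, adapter_id)
model = model.merge_and_unload()

messages = [{"role": "user", "content": "Give a one-sentence definition of reinforcement learning."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```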
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
pasithbas159/Typhoon2_HII_satellite_v3.1
|
pasithbas159
| 2025-08-23T09:09:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-23T09:09:01Z |
---
base_model: pasithbas/typhoon2-qwen2vl-7b-vision-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** pasithbas159
- **License:** apache-2.0
- **Finetuned from model:** pasithbas/typhoon2-qwen2vl-7b-vision-instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755939993
|
liukevin666
| 2025-08-23T09:08:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:07:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cryptoggg/blockassist-bc-deft_bold_cheetah_1755939919
|
cryptoggg
| 2025-08-23T09:06:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft bold cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:06:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft bold cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asdwefv/NEW.18.Freddy-Mireles-video-twitter-Que-paso-con-su-amigo-Julio-Cesar
|
asdwefv
| 2025-08-23T09:05:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-23T09:04:29Z |
<a href="https://allyoutubers.com/Freddy-Mireles-video-twitter"> 🌐 NEW.18.Freddy-Mireles-video-twitter-Que-paso-con-su-amigo-Julio-Cesar
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://allyoutubers.com/Freddy-Mireles-video-twitter"> 🌐 NEW.18.Freddy-Mireles-video-twitter-Que-paso-con-su-amigo-Julio-Cesar
|
Zahranaveed019/medical_llama_lora
|
Zahranaveed019
| 2025-08-23T09:04:15Z | 14 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3-8b-instruct-bnb-4bit",
"lora",
"transformers",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-21T17:03:15Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/llama-3-8b-instruct-bnb-4bit
- lora
- transformers
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
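No snippet is provided; below is a minimal, hypothetical loading sketch that assumes the standard `peft` adapter layout on top of the 4-bit base model listed in the metadata (the prompt is a placeholder and not medical advice).
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-instruct-bnb-4bit"
adapter_id = "Zahranaveed019/medical_llama_lora"

# The base checkpoint ships pre-quantized with bitsandbytes (4-bit), so it can
# be loaded directly and the LoRA adapter attached on top.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```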
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
okuzarabasi/blockassist-bc-dormant_opaque_moose_1755939730
|
okuzarabasi
| 2025-08-23T09:02:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant opaque moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T09:02:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant opaque moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755938099
|
sampingkaca72
| 2025-08-23T08:59:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:59:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
godijef/blockassist-bc-peaceful_singing_panther_1755939481
|
godijef
| 2025-08-23T08:59:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful singing panther",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:58:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful singing panther
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755938690
|
roeker
| 2025-08-23T08:46:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:45:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
St3pe3n/blockassist-bc-sniffing_sleek_macaque_1755938347
|
St3pe3n
| 2025-08-23T08:39:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing sleek macaque",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:39:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing sleek macaque
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TrendingNews/New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial
|
TrendingNews
| 2025-08-23T08:38:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-23T08:37:43Z |
Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/rrtgrtrt"> 🌐 Click Here To link (uppal-farm-girl-original-viral-video-links. /. New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial.)
🔴 ➤►DOWNLOAD👉👉🟢 ➤Watch 🟢 ➤ ➤ ➤ <a href="https://newvidgallery.com/rrtgrtrt"> 🌐 uppal-farm-girl-original-viral-video-links. /. New.full.videos.uppal.farm.girl.Viral.Video.Official.Tutorial.
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755938234
|
Dejiat
| 2025-08-23T08:37:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:37:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755938074
|
Dejiat
| 2025-08-23T08:35:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:35:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alikhalilit98/Cattle-Body-Parts-Dataset-for-Object-Detection
|
alikhalilit98
| 2025-08-23T08:33:53Z | 0 | 0 | null |
[
"object-detection",
"dataset",
"YOLO",
"cattle",
"agriculture",
"en",
"dataset:cattle-body-parts",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
object-detection
| 2025-01-31T07:21:43Z |
---
language: en
tags:
- object-detection
- dataset
- YOLO
- cattle
- agriculture
license: cc-by-4.0
datasets:
- cattle-body-parts
model-index:
- name: YOLOv7X Cattle Body Parts Detection
results:
- task:
type: object-detection
dataset:
name: Cattle Body Parts Dataset
type: custom
metrics:
- type: mAP
value: 0.996
---
# Cattle Body Parts Image Dataset for Object Detection
<div style="display: flex; gap: 10px; flex-wrap: wrap;">
<img src="https://img.shields.io/github/license/AliKHaliliT/Cattle-Body-Parts-Dataset-for-Object-Detection" alt="License">
<img src="https://img.shields.io/github/last-commit/AliKHaliliT/Cattle-Body-Parts-Dataset-for-Object-Detection" alt="Last Commit">
<img src="https://img.shields.io/github/issues/AliKHaliliT/Cattle-Body-Parts-Dataset-for-Object-Detection" alt="Open Issues">
</div>
<br/>
## Intro
This dataset is a curated collection of images featuring various cattle body parts aimed at facilitating object detection tasks. The dataset contains a total of 428 high-quality photos, meticulously annotated with three distinct classes: "Back," "Head," and "Leg."
The dataset can be downloaded using [this link](https://www.kaggle.com/datasets/alikhalilit98/cattle-body-parts-dataset-for-object-detection). The dataset is also available at Roboflow Universe.
<p align="center">
<a href="https://universe.roboflow.com/ali-khalili/cattle-body-parts-dataset-for-object-detection">
<img src="https://app.roboflow.com/images/download-dataset-badge.svg"></img>
</a>
</p>
A YOLOv7X model has been trained using the dataset and achieved a mAP of 99.6%. You can access the trained weights through [this link](https://huggingface.co/alikhalilit98/Cattle-Body-Parts-Dataset-for-Object-Detection/blob/main/yolov7_cattle_parts_final.pt).
<!--
### Acquisition
The dataset creation involved the following steps:
- **Initial Data:** Images were collected and annotated to create a base dataset for training.
- **Model Training:** A [YOLOv7](https://github.com/WongKinYiu/yolov7) model was trained to recognize target objects in the annotated images.
- **Data Acquisition Script:** An automated script fetched videos from the internet.
- **Conversion and Filtering:** Videos were turned into frames; similar frames were filtered out using Cosine Similarity.
- **Object Detection:** The trained model identified objects in the new images.
- **Quality Check:** A comprehensive review ensured dataset accuracy and consistency.
-->
## Motivation
Accurate and reliable identification of different cattle body parts is crucial for various agricultural and veterinary applications. This dataset aims to provide a valuable resource for researchers, developers, and enthusiasts working on object detection tasks involving cattle, ultimately contributing to advancements in livestock management, health monitoring, and related fields.
## Data
### Overview
- Total Images: 428
- Classes: Back, Head, Leg
- Annotations: Bounding boxes for each class
Below is an example image from the dataset.
<div align="center">
<img src="https://github.com/AliKHaliliT/Cattle-Body-Parts-Dataset-for-Object-Detection/blob/main/util_resources/readme/sample.png?raw=true"/>
</div>
### Contents
```
📦 Cattle_Body_Parts_OD.zip
┣ 📂 images
┃ ┣ 📜 image1.jpg
┃ ┣ 📜 image2.jpg
┃ ┗ ...
┗ 📂 annotations
┣ 📜 image1.json
┣ 📜 image2.json
┗ ...
```
### Annotation Format
Each annotation file corresponds to an image in the dataset and is formatted as per the [LabelMe](https://github.com/wkentaro/labelme) [JSON](https://www.json.org/json-en.html) standard. These annotations define the bounding box coordinates for each labeled body part, enabling straightforward integration into object detection pipelines.
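For orientation, here is a minimal parsing sketch (illustration only, assuming rectangle shapes stored in the standard LabelMe `shapes[].label` / `shapes[].points` fields; `image1.json` matches the layout shown above).
```python
import json

# Load one LabelMe-style annotation file from the annotations/ folder.
with open("annotations/image1.json", "r", encoding="utf-8") as f:
    annotation = json.load(f)

# Each shape carries a class label ("Back", "Head", or "Leg") and two corner
# points; convert them into (xmin, ymin, xmax, ymax) boxes.
for shape in annotation["shapes"]:
    (x1, y1), (x2, y2) = shape["points"]
    xmin, xmax = sorted((x1, x2))
    ymin, ymax = sorted((y1, y2))
    print(shape["label"], (xmin, ymin, xmax, ymax))
```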
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Disclaimer
This dataset has been collected from publicly available sources. I do not claim ownership of the data and have no intention of infringing on any copyright. The material contained in this dataset is copyrighted to their respective owners. I have made every effort to ensure the data is accurate and complete, but I cannot guarantee its accuracy or completeness. If you believe any data in this dataset infringes on your copyright, please get in touch with me immediately so I can take appropriate action.
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755936293
|
lisaozill03
| 2025-08-23T08:29:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:29:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
good3456/blockassist-bc-giant_tawny_ostrich_1755937440
|
good3456
| 2025-08-23T08:24:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"giant tawny ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:24:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- giant tawny ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zuruyu/blockassist-bc-endangered_pesty_chinchilla_1755937300
|
zuruyu
| 2025-08-23T08:23:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered pesty chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:22:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered pesty chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nema122/blockassist-bc-robust_fluffy_ram_1755937335
|
nema122
| 2025-08-23T08:23:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:23:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
awilliam60412/0823-Llama-3-2-1B-Instruct
|
awilliam60412
| 2025-08-23T08:22:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-23T08:22:15Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** awilliam60412
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chainway9/blockassist-bc-untamed_quick_eel_1755935517
|
chainway9
| 2025-08-23T08:19:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:19:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755936887
|
liukevin666
| 2025-08-23T08:16:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:16:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
barayah/blockassist-bc-skittish_fleecy_opossum_1755936615
|
barayah
| 2025-08-23T08:10:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skittish fleecy opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:10:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skittish fleecy opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1755936330
|
esi777
| 2025-08-23T08:06:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:05:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755936189
|
kapalbalap
| 2025-08-23T08:03:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T08:03:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755934431
|
lisaozill03
| 2025-08-23T07:59:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:59:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v62_merged_e3
|
tamewild
| 2025-08-23T07:58:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T07:56:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kofhi/blockassist-bc-large_barky_cobra_1755935740
|
kofhi
| 2025-08-23T07:56:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"large barky cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:56:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- large barky cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755935639
|
kapalbalap
| 2025-08-23T07:55:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:54:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755935673
|
llencia
| 2025-08-23T07:54:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:54:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755935626
|
IvanJAjebu
| 2025-08-23T07:54:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:54:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IbrahimAlAzhar/FutureGen_v2_dataset
|
IbrahimAlAzhar
| 2025-08-23T07:52:24Z | 0 | 0 | null |
[
"scientific-articles",
"future-work",
"NLP",
"ACL",
"NeurIPS",
"LLM-evaluation",
"en",
"license:cc-by-4.0",
"region:us"
] | null | 2025-08-23T07:38:34Z |
---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- found
languages:
- en
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- text-classification
task_ids:
- future-work-generation
- scientific-section-classification
pretty_name: ACL Future Work Dataset (2023–2024)
tags:
- scientific-articles
- future-work
- NLP
- ACL
- NeurIPS
- LLM-evaluation
language:
- en
---
# 🧠 ACL Future Work Dataset (2023–2024)
This dataset consists of structured scientific paper data from ACL 2023 and ACL 2024 proceedings. Each paper is parsed into sections (e.g., Introduction, Related Work, Conclusion), and a **"Future Work"** section is automatically or manually extracted from the parsed text by searching for relevant future-oriented sentences in reverse section order.
## 📁 Dataset Structure
Each JSON file (`acl23_future_cleaned_final.json` and `acl24_future_cleaned_final.json`) has the following format:
```json
{
"ACL23_1.pdf": {
"abstractText": "Abstract of the paper...",
"sections": [
{
"heading": "1 Introduction",
"text": "..."
},
...
{
"heading": "Future Work",
"text": "We plan to extend this method by..."
}
],
"title": "Paper Title",
"year": 2023
},
...
}
```
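For orientation, here is a minimal loading sketch; the file and field names follow the structure shown above, and matching the section heading case-insensitively is an assumption rather than part of the dataset specification:

```python
import json

# Load the ACL 2023 split (file name as listed above).
with open("acl23_future_cleaned_final.json", "r", encoding="utf-8") as f:
    papers = json.load(f)

# Collect the extracted "Future Work" text for each paper, where present.
future_work = {}
for paper_id, paper in papers.items():
    for section in paper.get("sections", []):
        if "future work" in (section.get("heading") or "").lower():
            future_work[paper_id] = section["text"]
            break

print(f"{len(future_work)} of {len(papers)} papers have an extracted Future Work section.")
```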
## 📜 License
This dataset is licensed under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
You are free to use, share, and adapt the dataset as long as you give appropriate credit.
### ✍️ Curated by
Ibrahim Al Azher, Northern Illinois University, DATALab
|
roeker/blockassist-bc-quick_wiry_owl_1755935327
|
roeker
| 2025-08-23T07:49:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:49:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755933764
|
thanobidex
| 2025-08-23T07:48:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:48:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
prashantrb111/autotrain-hu502-dbid3
|
prashantrb111
| 2025-08-23T07:45:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T07:32:08Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
base_model: distilbert/distilgpt2
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
lautan/blockassist-bc-gentle_patterned_goat_1755933454
|
lautan
| 2025-08-23T07:44:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:44:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jharnag/blockassist-bc-furry_hulking_sloth_1755935001
|
jharnag
| 2025-08-23T07:43:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry hulking sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:43:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry hulking sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755933367
|
chainway9
| 2025-08-23T07:43:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:43:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlfoundations-cua-dev/qwen2_5vl_7b_easyr1_waveui_only_4k9
|
mlfoundations-cua-dev
| 2025-08-23T07:43:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-23T07:39:45Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2_5vl_7b_easyr1_waveui_only_4k9_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl_7b_easyr1_waveui_only_4k9_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the easyr1-waveui-only-4k9-omniparser-qwen-tool-call-4MP dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
yoppertiu/blockassist-bc-stubby_dormant_stingray_1755934970
|
yoppertiu
| 2025-08-23T07:43:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby dormant stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:42:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby dormant stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FreedomIntelligence/ShizhenGPT-32B-VL
|
FreedomIntelligence
| 2025-08-23T07:41:40Z | 5 | 1 | null |
[
"safetensors",
"Traditional Chinese Medicin",
"Multimodal LLM",
"multimodal",
"image-text-to-text",
"zh",
"dataset:FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT",
"dataset:FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT",
"arxiv:2508.14706",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-21T08:09:28Z |
---
license: apache-2.0
datasets:
- FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT
- FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT
language:
- zh
base_model:
- Qwen/Qwen2.5-32B
pipeline_tag: image-text-to-text
tags:
- Traditional Chinese Medicin
- Multimodal LLM
- multimodal
---
<div align="center">
<h1>
ShizhenGPT-32B-VL
</h1>
</div>
<div align="center">
<a href="https://github.com/FreedomIntelligence/ShizhenGPT" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2508.14706" target="_blank">Paper</a>
</div>
**ShizhenGPT** is the first multimodal LLM for Traditional Chinese Medicine (TCM).
It not only possesses strong expertise in TCM, but also supports TCM multimodal diagnostic capabilities, which involve looking (望), listening/smelling (闻), questioning (问), and pulse-taking (切).
👉 More details on GitHub: [ShizhenGPT](https://github.com/FreedomIntelligence/ShizhenGPT)
# <span>Model Info</span>
> **ShizhenGPT-32B-VL** is a variant derived from ShizhenGPT-32B-Omni that includes only the LLM and vision encoder. It is recommended if your use case involves text or vision tasks exclusively. For broader multimodal needs, please select one of the versions below.
| | Parameters | Supported Modalities | Link |
| ---------------------- | ---------- | ----------------------------- | --------------------------------------------------------------------- |
| **ShizhenGPT-7B-LLM** | 7B | Text | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-LLM) |
| **ShizhenGPT-7B-VL** | 7B | Text, Image Understanding | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-VL) |
| **ShizhenGPT-7B-Omni** | 7B | Text, Four Diagnostics (望闻问切) | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-7B-Omni) |
| **ShizhenGPT-32B-LLM** | 32B | Text | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-32B-LLM) |
| **ShizhenGPT-32B-VL** | 32B | Text, Image Understanding | [HF Link](https://huggingface.co/FreedomIntelligence/ShizhenGPT-32B-VL) |
| **ShizhenGPT-32B-Omni** | 32B | Text, Four Diagnostics (望闻问切) | Available soon |
*Note: The LLM and VL models are parameter-split variants of ShizhenGPT-32B-Omni. Since their architectures align with Qwen2.5 and Qwen2.5-VL, they are easier to adapt to different environments. In contrast, ShizhenGPT-32B-Omni requires `transformers==4.51.0`.*
# <span>Usage</span>
You can use ShizhenGPT-32B-VL in the same way as [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
processor = AutoProcessor.from_pretrained("FreedomIntelligence/ShizhenGPT-32B-VL")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("FreedomIntelligence/ShizhenGPT-32B-VL", torch_dtype="auto", device_map="auto")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "/path/to/your/image.png",
},
{"type": "text", "text": "请从中医角度解读这张舌苔。"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
# <span>📖 Citation</span>
```
@misc{chen2025shizhengptmultimodalllmstraditional,
title={ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine},
author={Junying Chen and Zhenyang Cai and Zhiheng Liu and Yunjin Yang and Rongsheng Wang and Qingying Xiao and Xiangyi Feng and Zhan Su and Jing Guo and Xiang Wan and Guangjun Yu and Haizhou Li and Benyou Wang},
year={2025},
eprint={2508.14706},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.14706},
}
```
|
reedmayhew/personal1-gemma3-12B-HF
|
reedmayhew
| 2025-08-23T07:39:20Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-23T07:31:09Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** reedmayhew
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755934689
|
llencia
| 2025-08-23T07:38:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:38:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755934352
|
roeker
| 2025-08-23T07:33:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:33:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
margh/blockassist-bc-bipedal_furry_slug_1755934290
|
margh
| 2025-08-23T07:32:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bipedal furry slug",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:31:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal furry slug
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tokyo4983/blockassist-bc-squeaky_noisy_gazelle_1755934182
|
tokyo4983
| 2025-08-23T07:31:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky noisy gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:30:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky noisy gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755932378
|
rvipitkirubbe
| 2025-08-23T07:30:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:30:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
komaliitm/codeparrot-ds
|
komaliitm
| 2025-08-23T07:30:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-23T07:29:35Z |
---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755932580
|
mang3dd
| 2025-08-23T07:29:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:29:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755934033
|
IvanJAjebu
| 2025-08-23T07:28:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:27:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nkmallya/codeparrot-ds
|
nkmallya
| 2025-08-23T07:26:19Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2025-08-23T07:25:56Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.0
- Pytorch 2.8.0+cu126
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tokyo4983/blockassist-bc-squeaky_noisy_gazelle_1755933640
|
tokyo4983
| 2025-08-23T07:22:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky noisy gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:21:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky noisy gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1755933594
|
roeker
| 2025-08-23T07:21:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:20:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755933457
|
lqpl
| 2025-08-23T07:20:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:18:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hammadmajeed/floral_shirt_LoRA_1000e
|
hammadmajeed
| 2025-08-23T07:20:22Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-08-27T19:26:09Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of CH jacket.
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hammadmajeed/floral_shirt_LoRA_1000e
<Gallery />
## Model description
These are hammadmajeed/floral_shirt_LoRA_1000e LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of CH jacket.` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/hammadmajeed/floral_shirt_LoRA_1000e/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
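Pending the TODO above, here is a minimal sketch that uses the standard 🧨 diffusers LoRA-loading API; it is an illustrative invocation rather than the author's exact setup, and the output file name is arbitrary:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and attach this LoRA (repo id from this model card).
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("hammadmajeed/floral_shirt_LoRA_1000e")

# Use the instance prompt from the Trigger words section above.
image = pipeline("a photo of CH jacket.").images[0]
image.save("ch_jacket.png")  # arbitrary output path
```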
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
hokpertoy/blockassist-bc-powerful_fluffy_mongoose_1755933486
|
hokpertoy
| 2025-08-23T07:18:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"powerful fluffy mongoose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:18:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- powerful fluffy mongoose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755933323
|
2hpsatt
| 2025-08-23T07:16:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:16:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milansamantamilansamantamila/blockassist-bc-sturdy_webbed_tapir_1755933342
|
milansamantamilansamantamila
| 2025-08-23T07:16:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy webbed tapir",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:16:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy webbed tapir
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755933202
|
llencia
| 2025-08-23T07:13:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:13:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755931551
|
lautan
| 2025-08-23T07:12:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:12:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mqmq123/distilbert-rotten-tomatoes
|
mqmq123
| 2025-08-23T07:11:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-23T07:02:33Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755933032
|
0xaoyama
| 2025-08-23T07:11:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:11:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
peeyush01/albert-paraphrase-detector
|
peeyush01
| 2025-08-23T07:09:29Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"code",
"sentence-similarity",
"en",
"dataset:nyu-mll/glue",
"dataset:SetFit/mrpc",
"arxiv:1909.11942",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-22T11:04:21Z |
---
library_name: transformers
tags:
- code
license: apache-2.0
datasets:
- nyu-mll/glue
- SetFit/mrpc
language:
- en
metrics:
- accuracy
- f1
base_model:
- albert/albert-base-v2
pipeline_tag: sentence-similarity
---
# ALBERT-base-v2 Fine-tuned for Semantic Similarity (QQP/MRPC)
## Model Details
### Model Description
This is a fine-tuned version of **[albert-base-v2](https://huggingface.co/albert-base-v2)** on **paraphrase detection tasks** such as **GLUE-QQP** (Quora Question Pairs) and **MRPC** (Microsoft Research Paraphrase Corpus).
It can be used to determine whether two sentences are paraphrases (semantically similar) or not.
- **Developed by:** Peeyush
- **Model type:** Sentence-pair classification (binary: paraphrase vs not paraphrase)
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from model:** [albert-base-v2](https://huggingface.co/albert-base-v2)
### Model Sources [optional]
- **Repository:** [peeyush01/albert-paraphrase-detector](https://huggingface.co/peeyush01/albert-paraphrase-detector)
- **Paper (base model):** [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942)
## Uses
### Direct Use
- **Paraphrase detection:** Check if two sentences mean the same thing.
- **Semantic textual similarity:** Determine closeness of meaning between two texts.
### Downstream Use
- Duplicate question detection (e.g., Q&A forums like Quora or StackOverflow).
- Information retrieval (ranking by semantic similarity).
- Chatbots / Virtual assistants (detecting intent rephrasing).
### Out-of-Scope Use
- Not a generative model → cannot rewrite or generate paraphrases.
- Not trained on multilingual data → limited to English.
---
## Bias, Risks, and Limitations
- The model inherits biases from QQP/MRPC (e.g., common question styles, certain domains).
- May not generalize to informal text, code-mixed text, or specialized domains (e.g., medical, legal).
- Can misclassify edge cases where semantic similarity is subtle.
### Recommendations
- Always evaluate on your target domain before deployment.
- For production, consider threshold-tuning instead of raw classification (a minimal sketch follows the usage example below).
---
## How to Get Started with the Model
Example usage:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('peeyush01/albert-paraphrase-detector')
tokenizer = AutoTokenizer.from_pretrained('peeyush01/albert-paraphrase-detector-tokenizer')
def predict_paraphrase(sentence1, sentence2):
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
outputs = model(**inputs)
logits = outputs.logits
probs = torch.softmax(logits, dim=1)
paraphrase_prob = probs[0][1].item()
return {"Paraphrase": paraphrase_prob, "Not Paraphrase": 1 - paraphrase_prob}
```
```python
import torch
pairs = [
("The movie was fantastic!", "The film was amazing!"),
("He is playing cricket.", "She is reading a book."),
]
for s1, s2 in pairs:
result = predict_paraphrase(s1, s2)
print(f"Sentence 1: {s1}")
print(f"Sentence 2: {s2}")
print(f"Result: {result}\n")
```
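Following the threshold-tuning recommendation above, a minimal sketch is shown below; the 0.7 cutoff is purely illustrative (not a value reported for this model) and should be tuned on validation data from the target domain:

```python
THRESHOLD = 0.7  # illustrative only; tune on your own validation set

def is_paraphrase(sentence1, sentence2, threshold=THRESHOLD):
    """Binary decision on top of predict_paraphrase() defined above."""
    return predict_paraphrase(sentence1, sentence2)["Paraphrase"] >= threshold

print(is_paraphrase("The movie was fantastic!", "The film was amazing!"))
```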
## Training Details
### Training Data
- **Dataset:** [GLUE MRPC](https://huggingface.co/datasets/glue/viewer/mrpc)
- **Description:** The Microsoft Research Paraphrase Corpus (MRPC) contains pairs of sentences automatically extracted from online news sources, with human annotations indicating whether each pair captures a paraphrase/semantic equivalence relationship.
- **Size:** ~3,700 training pairs, 408 validation pairs, 1,725 test pairs.
- **Labels:**
- `1` → Paraphrase (semantically equivalent)
- `0` → Not paraphrase
### Training Procedure
#### Preprocessing
- Both sentences were tokenized using **AlbertTokenizer** with truncation and padding (`max_length`).
- Columns `sentence1`, `sentence2`, and `idx` were dropped.
- The label column was renamed from `label` → `labels`.
- Dataset was set in **PyTorch format** (see the sketch after this list).
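A minimal sketch of these preprocessing steps, assuming the 🤗 `datasets` GLUE/MRPC loader and the `albert-base-v2` tokenizer (the exact arguments are assumptions, not the author's script):

```python
from datasets import load_dataset
from transformers import AlbertTokenizer

raw = load_dataset("glue", "mrpc")
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

def tokenize(batch):
    # Tokenize both sentences jointly with truncation and max_length padding.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True, padding="max_length")

encoded = raw.map(tokenize, batched=True)
encoded = encoded.remove_columns(["sentence1", "sentence2", "idx"])
encoded = encoded.rename_column("label", "labels")
encoded.set_format("torch")
```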
#### Training Hyperparameters
- **Base model:** `albert-base-v2`
- **Epochs:** 3
- **Batch size:** 16 (train and eval)
- **Optimizer:** AdamW (via Hugging Face `Trainer`)
- **Warmup steps:** 600
- **Weight decay:** 0.01
- **Evaluation strategy:** Per epoch
- **Precision regime:** FP32
#### Speeds, Sizes, Times
- Training performed with Hugging Face `Trainer`.
- Training time: ~20–30 mins on a single GPU (Tesla T4); longer on CPU.
- Final checkpoint size: ~47 MB.
---
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- Evaluation performed on the **GLUE MRPC validation set** (~408 examples).
#### Factors
- Sentence pairs vary in length, syntactic complexity, and semantic overlap.
- Evaluation primarily captures **semantic similarity** in short news-style English text.
#### Metrics
- **Accuracy**: percentage of correctly classified sentence pairs.
- **F1 Score**: harmonic mean of precision and recall, important due to class imbalance.
### Results
(Expected range for ALBERT-base on MRPC — please replace with your actual run metrics if available)
- **Accuracy:** ~86–88%
- **F1 Score:** ~89–91%
#### Summary
The fine-tuned ALBERT model achieves strong performance on the MRPC benchmark, demonstrating effectiveness at capturing semantic similarity and paraphrase relationships between sentence pairs.
|
peeyush01/bert-qa-finetuned
|
peeyush01
| 2025-08-23T07:07:35Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:rajpurkar/squad_v2",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-13T18:40:54Z |
---
library_name: transformers
license: apache-2.0
datasets:
- rajpurkar/squad_v2
language:
- en
base_model:
- bert-base-uncased
pipeline_tag: question-answering
---
# BERT-base uncased fine-tuned on SQuAD v2
## Model Details
### Model Description
This model is a fine-tuned version of **BERT-base uncased** on the **SQuAD v2** dataset for **extractive question answering**.
It was trained for **3 epochs** and can answer questions given a context passage, while also handling unanswerable questions (a key feature of SQuAD v2).
- **Developed by:** Peeyush
- **Model type:** Extractive Question Answering
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from:** [bert-base-uncased](https://huggingface.co/bert-base-uncased)
### Model Sources
- **Dataset:** [SQuAD v2](https://huggingface.co/datasets/rajpurkar/squad_v2)
- **Base model:** [bert-base-uncased](https://huggingface.co/bert-base-uncased)
---
## Uses
### Direct Use
- Extractive Question Answering: Given a passage and a question, the model extracts the most likely span of text that answers the question.
- Handles unanswerable questions by predicting "no answer" when appropriate.
### Downstream Use
- Can be integrated into chatbots, virtual assistants, or search systems that require question answering over text.
### Out-of-Scope Use
- Generative question answering (the model **cannot generate new answers**).
- Non-English tasks (the model was trained only on English data).
---
## Bias, Risks, and Limitations
- The model inherits biases from the SQuAD v2 dataset.
- Performance may degrade on domain-specific or noisy text not represented in SQuAD v2.
- Not designed for open-domain QA across large corpora — works best when the context passage is provided.
---
## How to Get Started with the Model
You can try the model with the following code:
```python
from transformers import pipeline
qa_pipeline = pipeline("question-answering", model="peeyush01/bert-qa-finetuned")
result = qa_pipeline({
"context": "Hugging Face is creating a tool that democratizes AI.",
"question": "What is Hugging Face creating?"
})
print(result)
```
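To exercise the unanswerable-question behaviour mentioned under Direct Use, the pipeline also accepts the standard `handle_impossible_answer` argument; the question below is a hypothetical example that the given context cannot answer:

```python
result = qa_pipeline(
    question="What year was Hugging Face founded?",  # not answerable from this context
    context="Hugging Face is creating a tool that democratizes AI.",
    handle_impossible_answer=True,
)
print(result)  # an empty answer string signals "no answer"
```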
---
# Author Details
- Peeyush
- GitHub: [GitHub](https://github.com/peeyushdutt01)
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755931117
|
ihsanridzi
| 2025-08-23T07:06:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:06:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755931072
|
indoempatnol
| 2025-08-23T07:03:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T07:03:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BootesVoid/cmen6jdi3073ktlqbkz7owu9u_cmen79szl077etlqbgprrnngz
|
BootesVoid
| 2025-08-23T07:03:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-23T07:03:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: PLASER
---
# Cmen6Jdi3073Ktlqbkz7Owu9U_Cmen79Szl077Etlqbgprrnngz
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PLASER` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "PLASER",
"lora_weights": "https://huggingface.co/BootesVoid/cmen6jdi3073ktlqbkz7owu9u_cmen79szl077etlqbgprrnngz/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmen6jdi3073ktlqbkz7owu9u_cmen79szl077etlqbgprrnngz', weight_name='lora.safetensors')
image = pipeline('PLASER').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmen6jdi3073ktlqbkz7owu9u_cmen79szl077etlqbgprrnngz/discussions) to add images that show off what you’ve made with this LoRA.
|
mokshahf/CosmuQuantaa
|
mokshahf
| 2025-08-23T06:59:47Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:other",
"region:us"
] | null | 2025-08-23T05:31:38Z |
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek-Coder-7B-Instruct v1.5
Deepseek-Coder-7B-Instruct-v1.5 is continually pre-trained from Deepseek-LLM 7B on 2T tokens, using a 4K window size and a next-token-prediction objective, and then fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 2. Evaluation Results
<img width="1000px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/6538815d1bdb3c40db94fbfa/xOtCTW5xdoLCKY4FR6tri.png">
### 3. How to Use
Here are some examples of how to use our model.
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", trust_remote_code=True).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755930920
|
sampingkaca72
| 2025-08-23T06:59:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:59:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1755932133
|
vendi11
| 2025-08-23T06:56:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:56:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llencia/blockassist-bc-wiry_wise_hedgehog_1755932095
|
llencia
| 2025-08-23T06:55:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry wise hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:55:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry wise hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
enzan9/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_scampering_giraffe
|
enzan9
| 2025-08-23T06:54:44Z | 124 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am bristly_scampering_giraffe",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-22T17:42:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am bristly_scampering_giraffe
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
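Pending the authors' own snippet, the following is a minimal sketch based only on this entry's `transformers` library and `text-generation` pipeline tags; the model ID is taken from the repo name and the generation arguments are illustrative assumptions:

```python
# Sketch only: assumes the standard transformers text-generation pipeline
# works for this checkpoint (library_name: transformers, pipeline_tag: text-generation).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="enzan9/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_scampering_giraffe",
)

# Generate a short continuation from a toy prompt.
print(generator("Hello, world!", max_new_tokens=50)[0]["generated_text"])
```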
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vendi11/blockassist-bc-placid_placid_llama_1755931819
|
vendi11
| 2025-08-23T06:51:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:51:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755930174
|
kojeklollipop
| 2025-08-23T06:49:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:49:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755931532
|
0xaoyama
| 2025-08-23T06:46:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:46:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fabcas/blockassist-bc-beaked_downy_meerkat_1755929492
|
fabcas
| 2025-08-23T06:45:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked downy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-23T06:45:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked downy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|