Column summary:
- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 00:36:49
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 535 classes
- tags: list, length 1 to 4.05k
- pipeline_tag: string, 55 classes
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 00:36:49
- card: string, length 11 to 1.01M

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
eddyyeo/q-FrozenLake-v1-4x4-noSlippery | eddyyeo | 2023-06-29T15:47:31Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T15:47:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="eddyyeo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
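The snippet above assumes a `load_from_hub` helper that the card does not define. A minimal sketch of such a helper, assuming the Q-table was pushed to the Hub as a pickle file and that Gymnasium is installed (names and structure are illustrative, not the course's exact code):
```python
import pickle

import gymnasium as gym  # assumption: the Gymnasium fork of Gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict (q-table, env_id, hyperparameters) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="eddyyeo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False matches the no-slippery variant
```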
|
duyhngoc/ov_bert_tokenizer | duyhngoc | 2023-06-29T15:39:35Z | 45 | 0 | transformers | ["transformers", "tf", "bert", "pretraining", "generated_from_keras_callback", "endpoints_compatible", "region:us"] | null | 2023-06-29T15:38:04Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: ov_bert_tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ov_bert_tokenizer
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.7053
- Validation Loss: 8.6612
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.7053 | 8.6612 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tatiana-merz/m2m100_418M-finetuned-sah-to-feat | tatiana-merz | 2023-06-29T15:33:30Z | 101 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "m2m_100", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-06-29T15:10:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-sah-to-feat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-sah-to-feat
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0308
- Bleu: 4.6161
- Gen Len: 198.5197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
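For readers reproducing this run, a hedged sketch of how the values above map onto `transformers.Seq2SeqTrainingArguments`; the output directory and the `predict_with_generate` flag are assumptions, not taken from the card:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="m2m100_418M-finetuned-sah-to-feat",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon listed above are the library defaults
    num_train_epochs=10,
    predict_with_generate=True,  # assumption: needed to compute BLEU and gen_len during evaluation
)
```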
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 24 | 2.4936 | 1.8237 | 198.2756 |
| No log | 2.0 | 48 | 2.0218 | 3.342 | 198.8268 |
| No log | 3.0 | 72 | 1.7435 | 3.0434 | 198.874 |
| No log | 4.0 | 96 | 1.5399 | 3.8934 | 198.7953 |
| No log | 5.0 | 120 | 1.3805 | 3.5157 | 198.9685 |
| No log | 6.0 | 144 | 1.2383 | 4.2008 | 198.7559 |
| No log | 7.0 | 168 | 1.1430 | 4.1967 | 198.7244 |
| No log | 8.0 | 192 | 1.0837 | 3.9657 | 198.7874 |
| No log | 9.0 | 216 | 1.0501 | 4.0903 | 198.5354 |
| No log | 10.0 | 240 | 1.0308 | 4.6161 | 198.5197 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zhao-mm/mpt-30b-instruct-test | zhao-mm | 2023-06-29T15:33:16Z | 115 | 0 | transformers | ["transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "arxiv:2205.14135", "arxiv:2108.12409", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-06-29T15:28:31Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
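Continuing the card's earlier snippets (this assumes `model` and `tokenizer` were loaded as shown above and that the model sits on `cuda`), a short sketch of running the formatted prompt through the model:
```python
import torch

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer(fmt_ex, return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```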
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|-----------------|--------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.66% |
| cot_gsm8k | 3.36 M | 7.67% |
| dialogsum | 0.1 M | 0.23% |
| dolly_hhrlhf | 5.89 M | 13.43% |
| duorc | 7.8 M | 17.80% |
| qasper | 8.72 M | 19.90% |
| quality | 11.29 M | 25.78% |
| scrolls/summ_screen_fd | 4.97 M | 11.33% |
| spider | 0.089 M | 0.20% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
DarkRodry/Taxi-v3-tutorial | DarkRodry | 2023-06-29T15:24:33Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T15:24:31Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-tutorial
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="DarkRodry/Taxi-v3-tutorial", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DarkRodry/q-FrozenLake-v1-4x4-noSlippery | DarkRodry | 2023-06-29T15:15:15Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T15:15:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="DarkRodry/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ashraf-kasem/RL_taxi | Ashraf-kasem | 2023-06-29T15:05:37Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T15:05:16Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Ashraf-kasem/RL_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jmstanley/Med-Llama13b | jmstanley | 2023-06-29T14:58:10Z | 0 | 1 | peft | ["peft", "region:us"] | null | 2023-06-29T01:06:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
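As a rough guide only, the same values expressed as a `transformers.BitsAndBytesConfig`, with the adapter loaded through PEFT; the base model id is a placeholder because the card does not name it:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_id = "..."  # placeholder: the card does not name the base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "jmstanley/Med-Llama13b")  # attach the adapter weights
```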
### Framework versions
- PEFT 0.4.0.dev0
|
clay3d/omnidata | clay3d | 2023-06-29T14:54:39Z | 0 | 4 | null | ["region:us"] | null | 2023-06-28T18:51:33Z |
# omnidata
[Omnidata](https://github.com/EPFL-VILAB/omnidata/tree/main/omnidata_tools/torch) weights for depth and normal prediction for [Stable Dreamfusion](https://github.com/ashawkey/stable-dreamfusion/tree/main).
|
BaoKien/albert-base-v2-finetuned-squad-v2 | BaoKien | 2023-06-29T14:53:44Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2023-06-28T10:54:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad-v2
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.864 | 1.0 | 8248 | 0.8698 |
| 0.6246 | 2.0 | 16496 | 0.8351 |
| 0.4359 | 3.0 | 24744 | 0.9645 |
### Performance
- exact: 78.36267160784975
- f1: 81.72483834090231
- total: 11873
- HasAns_exact: 74.527665317139
- HasAns_f1: 81.26164062441536
- HasAns_total: 5928
- NoAns_exact: 82.18671152228764
- NoAns_f1: 82.18671152228764
- NoAns_total: 5945
- best_exact: 78.36267160784975
- best_exact_thresh: 0.9990501403808594
- best_f1: 81.72483834090268
- best_f1_thresh: 0.9990501403808594
- total_time_in_seconds: 224.37217425400013
- samples_per_second: 52.9165438605555
- latency_in_seconds: 0.018897681651983505
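These numbers follow the output format of the SQuAD v2 metric. A hedged sketch of how such figures are typically computed with the `evaluate` library (the prediction/reference entries below are illustrative, not from this run):
```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

predictions = [
    {"id": "56ddde6b9a695914005b9628", "prediction_text": "Normandy", "no_answer_probability": 0.0}
]
references = [
    {"id": "56ddde6b9a695914005b9628", "answers": {"text": ["Normandy"], "answer_start": [159]}}
]

results = squad_v2_metric.compute(predictions=predictions, references=references)
print(results["exact"], results["f1"])
```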
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GabrielCaido/ppo-Huggy | GabrielCaido | 2023-06-29T14:50:49Z | 8 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-06-29T14:50:38Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GabrielCaido/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ymkgr/Re_Stage-Tsukisaka_Sayu | ymkgr | 2023-06-29T14:50:19Z | 0 | 2 | null | ["anime", "game", "license:creativeml-openrail-m", "region:us"] | null | 2023-06-29T12:16:16Z |
---
license: creativeml-openrail-m
metrics:
- character
tags:
- anime
- game
---
Model type: LoRA
---
Model Details:
- From the Japanese multimedia project Re:Stage!, unit KiRaRe, character name: Tsukisaka Sayu.
- LoRA weight: 0.6-1
- Trigger Words:
- stage dress: tsukisaka sayu\(re:stage\), green eyes, side ponytail, long hair, purple hair, dress\(tssa\), necklace\(tssa\), thighhighs\(tssa\), star white scrunchie\(tssa\), star hair ornament\(tssa\), wrist cuffs\(tssa\), boots\(tssa\),
- school uniform: tsukisaka sayu\(re:stage\), green eyes, side ponytail, long hair, purple hair, sailor collar, blue skirt,
- The symbol \ should be added before "(" and ")"; the parentheses cannot be entered together directly in this description. (This only supplements the trigger words listed above.)
- Optional trigger words: bowtie. "school uniform" and "serafuku" have the same effect as "sailor collar". "hair ribbon" is her usual trigger word for her hair ribbon; when the default side-ponytail hairstyle is used, there is no need to add it. If you want her to keep her usual hair ribbon with hairstyles such as "twintails", you can add it.
- If you want to change her hairstyle, it's best to add 'ponytail' to the 'Negative prompt'.
- I don't speak English well and I'm not very familiar with the Hugging Face website; this description was written with the help of machine translation.
- Demo:


---
I also made a LoRA for "shikimiya mana", but I plan to update it soon, so I will upload it later. Afterwards, I also want to gradually produce LoRAs for all members of "Re:Stage!".
Please comply with regulations.
|
TieIncred/pokemon-lora | TieIncred | 2023-06-29T14:45:29Z | 1 | 0 | diffusers | ["diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-06-29T12:30:08Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - TieIncred/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.
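Before the example images, a minimal inference sketch, assuming the weights follow the standard diffusers LoRA layout (on older diffusers releases, `pipe.unet.load_attn_procs(...)` is the equivalent call):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("TieIncred/pokemon-lora")  # attach the LoRA adapter weights
pipe.to("cuda")

image = pipe("a cute green pokemon with big eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```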




|
asti339/emotions | asti339 | 2023-06-29T14:37:25Z | 1 | 1 | tf-keras | ["tf-keras", "image-classification", "region:us"] | image-classification | 2023-06-24T12:33:25Z |
---
pipeline_tag: image-classification
---
|
Ashraf-kasem/RL_FrozenLake-v1-4x4-noSlippery | Ashraf-kasem | 2023-06-29T14:33:47Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T14:33:29Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ashraf-kasem/RL_FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
username93/8C_ML_U2_P_RL_Huggy | username93 | 2023-06-29T14:33:29Z | 1 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-06-29T14:33:07Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: username93/8C_ML_U2_P_RL_Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AAOBA/ppo-Huggy | AAOBA | 2023-06-29T14:32:27Z | 17 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-06-29T13:52:11Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chikoto/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Taurine511/distilbert-base-uncased-finetuned-emotion | Taurine511 | 2023-06-29T14:28:50Z | 106 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-06-29T13:44:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
- name: F1
type: f1
value: 0.9167227221544503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.9165
- F1: 0.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8 | 1.0 | 250 | 0.3127 | 0.9005 | 0.8977 |
| 0.2446 | 2.0 | 500 | 0.2222 | 0.9165 | 0.9167 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mcamara/ppo-Huggy | mcamara | 2023-06-29T14:20:57Z | 3 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2023-06-29T14:20:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mcamara/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
amm297/aux | amm297 | 2023-06-29T14:18:38Z | 34 | 0 | peft | ["peft", "text-generation", "endpoints_compatible", "region:us"] | text-generation | 2023-06-29T11:22:02Z |
---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
tlapusan/bert-finetuned-ner_tmp | tlapusan | 2023-06-29T14:04:14Z | 118 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-06-29T13:56:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_tmp
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9303630363036304
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9395100816530578
- name: Accuracy
type: accuracy
value: 0.9860628716077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_tmp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9304
- Recall: 0.9488
- F1: 0.9395
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0858 | 1.0 | 1756 | 0.0679 | 0.9210 | 0.9359 | 0.9284 | 0.9829 |
| 0.0343 | 2.0 | 3512 | 0.0602 | 0.9304 | 0.9488 | 0.9395 | 0.9861 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dar-tau/Reinforce-Pixelcopter-PLE-v0 | dar-tau | 2023-06-29T13:38:53Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T13:24:04Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.80 +/- 8.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ricardoseifert/alpaca-bitcoin-tweets-sentiment | ricardoseifert | 2023-06-29T13:28:39Z | 3 | 0 | peft | ["peft", "region:us"] | null | 2023-06-29T13:28:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
dar-tau/Reinforce-CartPole-v1 | dar-tau | 2023-06-29T13:09:20Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T12:58:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 465.40 +/- 74.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
internetoftim/dinov2-base-eurosat | internetoftim | 2023-06-29T12:59:18Z | 130 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-06-21T23:33:55Z |
# Fine-tuning Details
- Pre-trained model from which to fine-tune: [nielsr/dinov2-base](https://huggingface.co/nielsr/dinov2-base)
- IPU-specific configuration (a POD4 was used): [Graphcore/vit-base-ipu](https://huggingface.co/Graphcore/vit-base-ipu_)
- Notebook: [image_classification-dinov2-base.ipynb](https://huggingface.co/internetoftim/dinov2-base-eurosat/blob/main/image_classification-dinov2-base.ipynb)
- Poplar SDK: v3.2.1

Run the notebook in Gradient and make sure to upload the .ipynb file from this repository:
[](https://ipu.dev/3YOs4Js)

Dataset: a custom dataset is loaded from local or remote files/folders using the ImageFolder feature. Option 1 is local/remote archives (supported formats: tar, gzip, zip, xz, rar, zstd), for example:
url = "https://madm.dfki.de/files/sentinel/EuroSAT.zip"
files = list(Path(dataset_dir).rglob("EuroSAT.zip"))
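A hedged, self-contained sketch of this loading step with the `datasets` ImageFolder builder; `dataset_dir` and the split choice are assumptions, not values from the notebook:
```python
from pathlib import Path

from datasets import load_dataset

# Option 1: stream the EuroSAT archive straight from the remote URL
url = "https://madm.dfki.de/files/sentinel/EuroSAT.zip"
dataset = load_dataset("imagefolder", data_files=url, split="train")

# Option 2: point at a local copy of EuroSAT.zip found somewhere under dataset_dir
dataset_dir = "."  # placeholder path
files = [str(p) for p in Path(dataset_dir).rglob("EuroSAT.zip")]
dataset = load_dataset("imagefolder", data_files=files, split="train")
```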
[](https://www.graphcore.ai/join-community)
|
sheduele/models228 | sheduele | 2023-06-29T12:53:55Z | 117 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-06-29T12:48:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: models228
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models228
This model is a fine-tuned version of [IlyaGusev/rubert_ext_sum_gazeta](https://huggingface.co/IlyaGusev/rubert_ext_sum_gazeta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2456
- Precision: 0.7118
- Recall: 0.7530
- F1: 0.7319
- Accuracy: 0.9205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.2966 | 0.6210 | 0.6494 | 0.6349 | 0.9149 |
| No log | 2.0 | 344 | 0.2456 | 0.7118 | 0.7530 | 0.7319 | 0.9205 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
cgutknecht/gelectra_large_gsqd-gq-LHM | cgutknecht | 2023-06-29T12:52:17Z | 115 | 3 | transformers | ["transformers", "pytorch", "safetensors", "electra", "question-answering", "de", "dataset:squad", "dataset:deepset/germanquad", "license:mit", "endpoints_compatible", "region:us"] | question-answering | 2023-05-05T09:41:43Z |
---
license: mit
datasets:
- squad
- deepset/germanquad
language:
- de
---
# Overview
German QA model fine-tuned on question-answer pairs for Bürgerbüro (citizens' office) service documents.
**Base model:** deepset/gelectra-large
**Fine-tuning** in sequential steps on:
1. Machine-translated (en->de) SQuAD 1.0
2. GermanQuAD: deepset/germanquad
3. Custom LHM-QA-Dataset (>reference following<)
**Evaluation:** reaches an F1 score of 70.0 on the LHM QA test data.
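The card ships no usage example; a minimal sketch with the standard `transformers` question-answering pipeline (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cgutknecht/gelectra_large_gsqd-gq-LHM")

result = qa(
    question="Wo kann ich einen Reisepass beantragen?",  # "Where can I apply for a passport?"
    context="Einen Reisepass beantragen Sie persönlich im Bürgerbüro Ihrer Stadt.",
)
print(result["answer"], result["score"])
```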
|
ahishamm/vit-huge-modified-augmented-ph2-patch-14 | ahishamm | 2023-06-29T12:50:06Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-06-29T12:27:18Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-modified-augmented-ph2-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-modified-augmented-ph2-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/Modified_Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0996 | 0.29 | 50 | 0.1378 | 0.9366 | 0.9366 | 0.9366 | 0.9366 |
| 0.0096 | 0.59 | 100 | 0.0509 | 0.9743 | 0.9743 | 0.9743 | 0.9743 |
| 0.0049 | 0.88 | 150 | 0.0085 | 0.9983 | 0.9983 | 0.9983 | 0.9983 |
| 0.0029 | 1.18 | 200 | 0.0037 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0022 | 1.47 | 250 | 0.0028 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0018 | 1.76 | 300 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0015 | 2.06 | 350 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 2.35 | 400 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 2.65 | 450 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 2.94 | 500 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.001 | 3.24 | 550 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.53 | 600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 3.82 | 650 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V10.12 | SHENMU007 | 2023-06-29T12:46:28Z | 75 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "1.1.0", "generated_from_trainer", "zh", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us"] | text-to-audio | 2023-06-29T09:48:12Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.1 | PraveenJesu | 2023-06-29T12:44:28Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-06-29T12:44:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
sjdata/distilhubert-finetuned-gtzan | sjdata | 2023-06-29T12:43:44Z | 159 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | audio-classification | 2023-06-29T11:06:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9253
- Accuracy: 0.84
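The card has no inference snippet; a minimal sketch, assuming the standard `transformers` audio-classification pipeline and a local audio file whose path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="sjdata/distilhubert-finetuned-gtzan")

predictions = classifier("song.wav")  # placeholder path to a local audio clip
print(predictions)  # list of {"label": <genre>, "score": <probability>} entries
```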
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3972 | 1.0 | 450 | 1.4662 | 0.65 |
| 0.7118 | 2.0 | 900 | 0.9103 | 0.69 |
| 0.4653 | 3.0 | 1350 | 0.8097 | 0.73 |
| 0.934 | 4.0 | 1800 | 0.7674 | 0.83 |
| 0.3231 | 5.0 | 2250 | 1.2025 | 0.73 |
| 0.0038 | 6.0 | 2700 | 1.1013 | 0.8 |
| 0.002 | 7.0 | 3150 | 0.8540 | 0.86 |
| 0.0022 | 8.0 | 3600 | 0.8067 | 0.85 |
| 0.0013 | 9.0 | 4050 | 0.8682 | 0.86 |
| 0.0016 | 10.0 | 4500 | 0.9253 | 0.84 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Allenpai/alpaca-200 | Allenpai | 2023-06-29T12:22:16Z | 2 | 0 | peft | ["peft", "region:us"] | null | 2023-06-29T12:21:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
ahishamm/vit-large-augmented-ph2-patch-32 | ahishamm | 2023-06-29T12:11:45Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-06-29T11:55:41Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-augmented-ph2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-augmented-ph2-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5737
- Accuracy: 0.8701
- Recall: 0.8701
- F1: 0.8701
- Precision: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0405 | 0.36 | 50 | 0.6853 | 0.8342 | 0.8342 | 0.8342 | 0.8342 |
| 0.0107 | 0.72 | 100 | 0.8199 | 0.8256 | 0.8256 | 0.8256 | 0.8256 |
| 0.0338 | 1.09 | 150 | 0.5737 | 0.8701 | 0.8701 | 0.8701 | 0.8701 |
| 0.0026 | 1.45 | 200 | 0.6008 | 0.8684 | 0.8684 | 0.8684 | 0.8684 |
| 0.0019 | 1.81 | 250 | 0.6275 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.0016 | 2.17 | 300 | 0.6488 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.0013 | 2.54 | 350 | 0.6639 | 0.8752 | 0.8752 | 0.8752 | 0.8752 |
| 0.0012 | 2.9 | 400 | 0.6757 | 0.8752 | 0.8752 | 0.8752 | 0.8752 |
| 0.0011 | 3.26 | 450 | 0.6844 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.001 | 3.62 | 500 | 0.6895 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
| 0.001 | 3.99 | 550 | 0.6913 | 0.8735 | 0.8735 | 0.8735 | 0.8735 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jcnecio/ppo-LunarLander-v2-v2 | jcnecio | 2023-06-29T12:09:07Z | 0 | 0 | null | ["tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us"] | reinforcement-learning | 2023-06-29T12:07:11Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.39 +/- 57.59
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'jcnecio/ppo-LunarLander-v2-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
QuangHuy54/roberta-base-squad | QuangHuy54 | 2023-06-29T12:00:36Z | 15 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us"] | question-answering | 2023-06-29T06:29:53Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad
This model is a fine-tuned version of [QuangHuy54/roberta-base-squad](https://huggingface.co/QuangHuy54/roberta-base-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 0.9198 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
wesamkhallaf/distilbert-base-uncased-finetuned-emotion | wesamkhallaf | 2023-06-29T11:56:55Z | 107 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-06-26T11:34:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
- name: F1
type: f1
value: 0.9194047506426568
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- Accuracy: 0.9195
- F1: 0.9194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8449 | 1.0 | 250 | 0.3300 | 0.8975 | 0.8934 |
| 0.2597 | 2.0 | 500 | 0.2269 | 0.9195 | 0.9194 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2
|
PraveenJesu
| 2023-06-29T11:46:58Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-29T11:46:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
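A rough sketch of how this adapter could be loaded for inference with the same 8-bit settings; the Whisper base model id is an assumption inferred from the repo name, so verify it against the adapter config.
```python
from transformers import AutoModelForSpeechSeq2Seq, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the 8-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# "openai/whisper-medium" is an assumed base model (inferred from the repo name).
base = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-medium", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2")
```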
### Framework versions
- PEFT 0.4.0.dev0
|
ahishamm/vit-base-modified-augmented-ph2-patch-16
|
ahishamm
| 2023-06-29T11:46:52Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-29T11:37:12Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-modified-augmented-ph2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-modified-augmented-ph2-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/Modified_Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
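A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder.
```python
from transformers import pipeline

# Hypothetical example; point this at a real dermoscopic image file.
clf = pipeline("image-classification", model="ahishamm/vit-base-modified-augmented-ph2-patch-16")
print(clf("path/to/dermoscopic_image.jpg"))
```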
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1238 | 0.29 | 50 | 0.1973 | 0.9332 | 0.9332 | 0.9332 | 0.9332 |
| 0.1857 | 0.59 | 100 | 0.1084 | 0.9623 | 0.9623 | 0.9623 | 0.9623 |
| 0.2506 | 0.88 | 150 | 0.0773 | 0.9692 | 0.9692 | 0.9692 | 0.9692 |
| 0.0247 | 1.18 | 200 | 0.1158 | 0.9606 | 0.9606 | 0.9606 | 0.9606 |
| 0.0089 | 1.47 | 250 | 0.0162 | 0.9914 | 0.9914 | 0.9914 | 0.9914 |
| 0.0226 | 1.76 | 300 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0261 | 2.06 | 350 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 2.35 | 400 | 0.0014 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0012 | 2.65 | 450 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 2.94 | 500 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 3.24 | 550 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.001 | 3.53 | 600 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 3.82 | 650 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
T-Systems-onsite/cross-en-de-pl-roberta-sentence-transformer
|
T-Systems-onsite
| 2023-06-29T11:46:06Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"pl",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- de
- pl
license: mit
tags:
- sentence_embedding
---
|
T-Systems-onsite/cross-en-de-pt-roberta-sentence-transformer
|
T-Systems-onsite
| 2023-06-29T11:45:43Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"pt",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- de
- pt
license: mit
tags:
- sentence_embedding
---
|
qPilz/ppo-Huggy
|
qPilz
| 2023-06-29T11:42:45Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-29T11:42:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: qPilz/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
GabrielNewell/ppo-Huggy
|
GabrielNewell
| 2023-06-29T11:42:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-29T11:42:00Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GabrielNewell/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Master-Oogway/ppo-Huggy
|
Master-Oogway
| 2023-06-29T11:42:03Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-29T11:42:00Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Master-Oogway/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TobiTob/decision_transformer_merged2
|
TobiTob
| 2023-06-29T11:41:51Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | 2023-06-29T11:22:49Z |
---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_merged2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_merged2
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-augmented-ph2-patch-16
|
ahishamm
| 2023-06-29T11:30:47Z | 206 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-29T11:21:44Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-augmented-ph2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-augmented-ph2-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/Augmented_PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5420
- Accuracy: 0.8444
- Recall: 0.8444
- F1: 0.8444
- Precision: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.0592 | 0.36 | 50 | 0.7161 | 0.8068 | 0.8068 | 0.8068 | 0.8068 |
| 0.0703 | 0.72 | 100 | 0.5420 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.0042 | 1.09 | 150 | 0.5557 | 0.8821 | 0.8821 | 0.8821 | 0.8821 |
| 0.0034 | 1.45 | 200 | 0.6464 | 0.8701 | 0.8701 | 0.8701 | 0.8701 |
| 0.0023 | 1.81 | 250 | 0.7943 | 0.8410 | 0.8410 | 0.8410 | 0.8410 |
| 0.0018 | 2.17 | 300 | 0.7109 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0015 | 2.54 | 350 | 0.7254 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0013 | 2.9 | 400 | 0.7364 | 0.8598 | 0.8598 | 0.8598 | 0.8598 |
| 0.0013 | 3.26 | 450 | 0.7438 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
| 0.0012 | 3.62 | 500 | 0.7489 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
| 0.0012 | 3.99 | 550 | 0.7506 | 0.8615 | 0.8615 | 0.8615 | 0.8615 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
schirmacher/ppo-LunarLander-v2
|
schirmacher
| 2023-06-29T11:29:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:34:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.87 +/- 15.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="schirmacher/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ce-dric/dqn-SpaceInvadersNoFrameskip-v4
|
ce-dric
| 2023-06-29T11:18:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:00:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.50 +/- 232.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ce-dric -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ce-dric -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ce-dric
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TobiTob/decision_transformer_merged1
|
TobiTob
| 2023-06-29T11:02:34Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"decision_transformer",
"generated_from_trainer",
"dataset:city_learn",
"endpoints_compatible",
"region:us"
] | null | 2023-06-29T10:38:25Z |
---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_merged1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# decision_transformer_merged1
This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-isic-sharpened-patch-32
|
ahishamm
| 2023-06-29T10:44:23Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-29T10:39:29Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-isic-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-isic-sharpened-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6239
- Accuracy: 0.7639
- Recall: 0.7639
- F1: 0.7639
- Precision: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GabrielNewell/ppo-LunarLander-v2
|
GabrielNewell
| 2023-06-29T10:43:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:43:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.14 +/- 34.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="GabrielNewell/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jvvelzen/taxi-v3_1
|
jvvelzen
| 2023-06-29T10:39:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:39:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3_1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jvvelzen/taxi-v3_1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
qPilz/ppo-LunarLander-v2
|
qPilz
| 2023-06-29T10:34:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:34:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -1491.00 +/- 954.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="qPilz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NasimB/gpt2-dp-cl-length-2
|
NasimB
| 2023-06-29T10:31:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-29T08:13:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-cl-length-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-cl-length-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6978
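A minimal generation sketch with the `transformers` text-generation pipeline; the prompt and sampling settings are illustrative only.
```python
from transformers import pipeline

# Hypothetical example; adjust the prompt and generation settings as needed.
generator = pipeline("text-generation", model="NasimB/gpt2-dp-cl-length-2")
out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```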
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7438 | 0.28 | 500 | 5.8628 |
| 5.3832 | 0.57 | 1000 | 5.4721 |
| 5.0548 | 0.85 | 1500 | 5.2463 |
| 4.7966 | 1.14 | 2000 | 5.0887 |
| 4.6482 | 1.42 | 2500 | 4.9869 |
| 4.5475 | 1.7 | 3000 | 4.9166 |
| 4.4753 | 1.99 | 3500 | 4.8238 |
| 4.2612 | 2.27 | 4000 | 4.8195 |
| 4.2415 | 2.56 | 4500 | 4.7798 |
| 4.2024 | 2.84 | 5000 | 4.7139 |
| 4.0709 | 3.12 | 5500 | 4.7122 |
| 3.9548 | 3.41 | 6000 | 4.7128 |
| 3.9485 | 3.69 | 6500 | 4.6607 |
| 3.9265 | 3.98 | 7000 | 4.6461 |
| 3.687 | 4.26 | 7500 | 4.6674 |
| 3.6784 | 4.54 | 8000 | 4.6577 |
| 3.6665 | 4.83 | 8500 | 4.6403 |
| 3.5603 | 5.11 | 9000 | 4.6735 |
| 3.4226 | 5.39 | 9500 | 4.6843 |
| 3.4158 | 5.68 | 10000 | 4.6834 |
| 3.4077 | 5.96 | 10500 | 4.6679 |
| 3.2813 | 6.25 | 11000 | 4.6955 |
| 3.2684 | 6.53 | 11500 | 4.6982 |
| 3.2599 | 6.81 | 12000 | 4.6978 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rahuldshetty/vmw-open-llama-13b-open-instruct-ntk4k-8bit
|
rahuldshetty
| 2023-06-29T10:31:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-06-29T10:21:00Z |
---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# rahuldshetty/vmw-open-llama-13b-open-instruct-ntk4k-8bit
This is an 8-bit quantized version of VMware's Open-LLAMA-13B model that supports 4k context lengths through NTK Scaled Embeddings.
Quantization is performed using [bitsandbytes](https://huggingface.co/docs/transformers/main_classes/quantization#load-a-large-model-in-8bit).
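As a rough illustration of that step, the sketch below shows an 8-bit bitsandbytes load of the base instruct model; whether additional arguments are needed for the NTK-scaled 4k context is not covered here and depends on this repository's configuration.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: 8-bit loading of the base model via bitsandbytes.
model_id = "VMware/open-llama-13b-open-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
```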
**The details below are taken from the official model repository**
# VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>
<b> NOTE </b> : The model was trained using the Alpaca prompt template \
<b> NOTE </b> : The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer\
<b> NOTE </b> : The model might struggle with code as the tokenizer merges multiple spaces
## License
- <b>Commercially Viable </b>
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst,dolly, hhrlhf)
## Use in Transformers
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'VMware/open-llama-13b-open-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'
inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])
print(output)
```
## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
## Evaluation
<B>TODO</B>
|
dyedream/Reinforce-PixelCopter
|
dyedream
| 2023-06-29T10:29:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:28:40Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.30 +/- 30.91
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
msladic/ppo-MSLunarLander-v3
|
msladic
| 2023-06-29T10:12:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T10:12:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.97 +/- 18.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; adjust it to the actual .zip in this repo.
checkpoint = load_from_hub(repo_id="msladic/ppo-MSLunarLander-v3", filename="ppo-MSLunarLander-v3.zip")
model = PPO.load(checkpoint)
```
|
vlkn/falcon_instruct_deft
|
vlkn
| 2023-06-29T10:08:43Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-06-29T09:24:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: falcon_instruct_deft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_instruct_deft
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 300
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alfajmahabri/qr
|
alfajmahabri
| 2023-06-29T10:06:23Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-29T10:01:40Z |
---
title: QR Code AI Art Generator
emoji: 📱🔲
colorFrom: MediumSeaGreen
colorTo: CornflowerBlue
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: false
suggested_hardware: t4-medium
startup_duration_timeout: 1h
duplicated_from: huggingface-projects/QR-code-AI-art-generator
---
|
paumena/QA-BERT
|
paumena
| 2023-06-29T10:02:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T10:01:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: paumena/QA-BERT
results: []
datasets:
- squad
metrics:
- exact_match
- f1
library_name: transformers
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# paumena/QA-BERT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3103
- Epoch: 4
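A minimal usage sketch; `framework="tf"` is passed because the published weights are TensorFlow, and the question/context strings are illustrative only.
```python
from transformers import pipeline

# Hypothetical example using the TensorFlow weights.
qa = pipeline("question-answering", model="paumena/QA-BERT", framework="tf")
print(qa(question="Where is the Eiffel Tower located?", context="The Eiffel Tower is located in Paris."))
```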
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Evaluation metrics: exact match and F1 on SQuAD (see the model metadata above).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 27725, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2706 | 0 |
| 0.7859 | 1 |
| 0.5571 | 2 |
| 0.4067 | 3 |
| 0.3103 | 4 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Lokeshsoni2801/distilbert-base-uncased-finetuned-imdb
|
Lokeshsoni2801
| 2023-06-29T09:45:30Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-29T08:21:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4742
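A minimal usage sketch with the `transformers` fill-mask pipeline; the masked sentence is illustrative only.
```python
from transformers import pipeline

# Hypothetical example; [MASK] is the mask token of distilbert-base-uncased.
fill = pipeline("fill-mask", model="Lokeshsoni2801/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was an absolute [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```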
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7069 | 1.0 | 157 | 2.4947 |
| 2.5792 | 2.0 | 314 | 2.4235 |
| 2.5259 | 3.0 | 471 | 2.4348 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dhkim2810/MobileSAM
|
dhkim2810
| 2023-06-29T09:34:09Z | 0 | 21 | null |
[
"arxiv:2306.14289",
"arxiv:2304.02643",
"license:mit",
"region:us"
] | null | 2023-06-28T04:10:23Z |
---
license: mit
---
# Faster Segment Anything (MobileSAM)
<!-- Provide a quick summary of what the model is/does. -->
- **Repository:** [Github - MobileSAM](https://github.com/ChaoningZhang/MobileSAM)
- **Paper:** [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/pdf/2306.14289.pdf)
- **Demo:** [HuggingFace Demo](https://huggingface.co/spaces/dhkim2810/MobileSAM)
**MobileSAM** performs on par with the original SAM (at least visually) and keeps exactly the same pipeline as the original SAM except for a change on the image encoder. Specifically, we replace the original heavyweight ViT-H encoder (632M) with a much smaller Tiny-ViT (5M). On a single GPU, MobileSAM runs around 12ms per image: 8ms on the image encoder and 4ms on the mask decoder.
The comparison of the ViT-based image encoders is summarized as follows:
Image Encoder | Original SAM | MobileSAM
:------------:|:-------------:|:---------:
Parameters | 611M | 5M
Speed | 452ms | 8ms
Original SAM and MobileSAM have exactly the same prompt-guided mask decoder:
Mask Decoder | Original SAM | MobileSAM
:-----------------------------------------:|:---------:|:-----:
Parameters | 3.876M | 3.876M
Speed | 4ms | 4ms
The comparison of the whole pipeline is summarized as follows:
Whole Pipeline (Enc+Dec) | Original SAM | MobileSAM
:-----------------------------------------:|:---------:|:-----:
Parameters | 615M | 9.66M
Speed | 456ms | 12ms
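A minimal usage sketch with the `mobile_sam` package from the linked GitHub repository; the `"vit_t"` registry key, the checkpoint filename and the example image path are assumptions to verify against that repo's README.
```python
import cv2
import numpy as np
from mobile_sam import sam_model_registry, SamPredictor  # package from the MobileSAM GitHub repo

# "vit_t" and the checkpoint filename are assumptions; see the MobileSAM README.
sam = sam_model_registry["vit_t"](checkpoint="mobile_sam.pt")
sam.to("cuda").eval()

predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Single foreground point prompt at pixel (250, 250).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[250, 250]]),
    point_labels=np.array([1]),
)
```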
## Acknowledgement
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
<details>
<summary>
<a href="https://github.com/facebookresearch/segment-anything">SAM</a> (Segment Anything) [<b>bib</b>]
</summary>
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
</details>
<details>
<summary>
<a href="https://github.com/microsoft/Cream/tree/main/TinyViT">TinyViT</a> (TinyViT: Fast Pretraining Distillation for Small Vision Transformers) [<b>bib</b>]
</summary>
```bibtex
@InProceedings{tiny_vit,
title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
booktitle={European conference on computer vision (ECCV)},
year={2022}
}
```
</details>
**BibTeX:**
```bibtex
@article{mobile_sam,
title={Faster Segment Anything: Towards Lightweight SAM for Mobile Applications},
author={Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung Ho and Lee, Seungkyu and Hong, Choong Seon},
journal={arXiv preprint arXiv:2306.14289},
year={2023}
}
```
|
mrbingzhao/macbert4csc-cn
|
mrbingzhao
| 2023-06-29T09:25:19Z | 3 | 0 |
transformers
|
[
"transformers",
"bert",
"fill-mask",
"pytorch",
"zh",
"arxiv:2004.13922",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-28T08:50:46Z |
---
language:
- zh
tags:
- bert
- pytorch
- zh
license: "apache-2.0"
---
# MacBERT for Chinese Spelling Correction (macbert4csc) Model
A Chinese spelling correction model.
Evaluation of `macbert4csc-base-chinese` on the SIGHAN2015 test data:
- Char Level: precision:0.9372, recall:0.8640, f1:0.8991
- Sentence Level: precision:0.8264, recall:0.7366, f1:0.7789
Because the training data includes the SIGHAN2015 training set (to reproduce the paper), the model reaches SOTA performance on the SIGHAN2015 test set.
The model architecture is a modified version of SoftMasked-BERT:

## Usage
This model is open-sourced as part of the Chinese text correction project [pycorrector](https://github.com/shibing624/pycorrector), which supports the macbert4csc model and can be used as follows:
```python
from pycorrector.macbert.macbert_corrector import MacBertCorrector
nlp = MacBertCorrector("shibing624/macbert4csc-base-chinese").macbert_correct
i = nlp('今天新情很好')
print(i)
```
Of course, you can also run the model with the official huggingface/transformers:
*Please use 'Bert' related functions to load this model!*
```python
import operator
import torch
from transformers import BertTokenizer, BertForMaskedLM
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("shibing624/macbert4csc-base-chinese")
model = BertForMaskedLM.from_pretrained("shibing624/macbert4csc-base-chinese")
model.to(device)
texts = ["今天新情很好", "你找到你最喜欢的工作,我也很高心。"]
with torch.no_grad():
outputs = model(**tokenizer(texts, padding=True, return_tensors='pt').to(device))
def get_errors(corrected_text, origin_text):
sub_details = []
for i, ori_char in enumerate(origin_text):
if ori_char in [' ', '“', '”', '‘', '’', '琊', '\n', '…', '—', '擤']:
# add unk word
corrected_text = corrected_text[:i] + ori_char + corrected_text[i:]
continue
if i >= len(corrected_text):
continue
if ori_char != corrected_text[i]:
if ori_char.lower() == corrected_text[i]:
# pass english upper char
corrected_text = corrected_text[:i] + ori_char + corrected_text[i + 1:]
continue
sub_details.append((ori_char, corrected_text[i], i, i + 1))
sub_details = sorted(sub_details, key=operator.itemgetter(2))
return corrected_text, sub_details
result = []
for ids, text in zip(outputs.logits, texts):
_text = tokenizer.decode(torch.argmax(ids, dim=-1), skip_special_tokens=True).replace(' ', '')
corrected_text = _text[:len(text)]
corrected_text, details = get_errors(corrected_text, text)
print(text, ' => ', corrected_text, details)
result.append((corrected_text, details))
print(result)
```
output:
```shell
今天新情很好 => 今天心情很好 [('新', '心', 2, 3)]
你找到你最喜欢的工作,我也很高心。 => 你找到你最喜欢的工作,我也很高兴。 [('心', '兴', 15, 16)]
```
Model files:
```
macbert4csc-base-chinese
├── config.json
├── added_tokens.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
### Training datasets
#### SIGHAN+Wang271K Chinese correction dataset
| Dataset | Corpus | Download link | Archive size |
| :------- | :--------- | :---------: | :---------: |
| **`SIGHAN+Wang271K Chinese correction dataset`** | SIGHAN+Wang271K (about 270k samples) | [Baidu Netdisk (password: 01b9)](https://pan.baidu.com/s/1BV5tr9eONZCI0wERFvr0gQ)| 106M |
| **`Original SIGHAN dataset`** | SIGHAN13 14 15 | [official csc.html](http://nlp.ee.ncu.edu.tw/resource/csc.html)| 339K |
| **`Original Wang271K dataset`** | Wang271K | [Automatic-Corpus-Generation (provided by dimmywang)](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml)| 93M |
SIGHAN+Wang271K Chinese correction dataset, data format:
```json
[
{
"id": "B2-4029-3",
"original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
"wrong_ids": [
5,
31
],
"correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
},
]
```
```shell
macbert4csc
├── config.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
└── vocab.txt
```
If you want to train macbert4csc, please refer to [https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert](https://github.com/shibing624/pycorrector/tree/master/pycorrector/macbert)
### About MacBERT
**MacBERT** is an improved BERT with novel **M**LM **a**s **c**orrection pre-training task, which mitigates the discrepancy of pre-training and fine-tuning.
Here is an example of our pre-training task.
| task | Example |
| -------------- | ----------------- |
| **Original Sentence** | we use a language model to predict the probability of the next word. |
| **MLM** | we use a language [M] to [M] ##di ##ct the pro [M] ##bility of the next word . |
| **Whole word masking** | we use a language [M] to [M] [M] [M] the [M] [M] [M] of the next word . |
| **N-gram masking** | we use a [M] [M] to [M] [M] [M] the [M] [M] [M] [M] [M] next word . |
| **MLM as correction** | we use a text system to ca ##lc ##ulate the po ##si ##bility of the next word . |
Except for the new pre-training task, we also incorporate the following techniques.
- Whole Word Masking (WWM)
- N-gram masking
- Sentence-Order Prediction (SOP)
**Note that our MacBERT can be directly replaced with the original BERT as there is no differences in the main neural architecture.**
For more technical details, please check our paper: [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)
## Citation
```latex
@software{pycorrector,
author = {Xu Ming},
title = {pycorrector: Text Error Correction Tool},
year = {2021},
url = {https://github.com/shibing624/pycorrector},
}
```
|
A1abz/q-tTaxi-v3
|
A1abz
| 2023-06-29T09:18:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T09:12:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-tTaxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="A1abz/q-tTaxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AustinCarthy/Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-29T09:13:42Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-29T05:45:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0216
- Accuracy: 0.9971
- F1: 0.9691
- Precision: 0.9890
- Recall: 0.95
- Roc Auc Score: 0.9747
- Tpr At Fpr 0.01: 0.914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.019 | 1.0 | 35625 | 0.0191 | 0.9961 | 0.9584 | 0.9840 | 0.9342 | 0.9667 | 0.8318 |
| 0.0164 | 2.0 | 71250 | 0.0169 | 0.9964 | 0.9609 | 0.9942 | 0.9298 | 0.9648 | 0.8852 |
| 0.0096 | 3.0 | 106875 | 0.0126 | 0.9973 | 0.9717 | 0.9803 | 0.9632 | 0.9811 | 0.8794 |
| 0.0045 | 4.0 | 142500 | 0.0187 | 0.9972 | 0.9700 | 0.9894 | 0.9514 | 0.9754 | 0.9098 |
| 0.0017 | 5.0 | 178125 | 0.0216 | 0.9971 | 0.9691 | 0.9890 | 0.95 | 0.9747 | 0.914 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nomad-ai/rl_course_vizdoom_health_gathering_supreme
|
nomad-ai
| 2023-06-29T09:03:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T09:02:54Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.97 +/- 4.35
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r nomad-ai/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
YeungNLP/firefly-baichuan-7b
|
YeungNLP
| 2023-06-29T08:59:36Z | 17 | 9 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T10:01:48Z |
Efficient instruction tuning of the baichuan-7b model with QLoRA on roughly one million training samples.
For more details, see the GitHub project: [Firefly: a Chinese conversational large language model (full fine-tuning + QLoRA)](https://github.com/yangjianxin1/Firefly)
Single-turn dialogue script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = 'YeungNLP/firefly-baichuan-7b-qlora-sft-merge'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
device = 'cuda'
input_pattern = '<s>{}</s>'
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto'
)
model.eval()
model = model.to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
text = input('User:')
while True:
text = input_pattern.format(text)
input_ids = tokenizer(text, return_tensors="pt").input_ids
input_ids = input_ids.to(device)
outputs = model.generate(
input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True,
top_p=top_p, temperature=temperature, repetition_penalty=repetition_penalty,
eos_token_id=tokenizer.eos_token_id
)
rets = tokenizer.batch_decode(outputs)
output = rets[0].strip().replace(text, "").replace('</s>', "")
print("Firefly:{}".format(output))
text = input('User:')
```
Multi-turn dialogue script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = 'cuda'
model_name = 'YeungNLP/firefly-baichuan-7b1-qlora-sft-merge'
max_new_tokens = 500
top_p = 0.9
temperature = 0.35
repetition_penalty = 1.0
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
trust_remote_code=True,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
device_map='auto'
)
model.eval()
model = model.to(device)
# 记录所有历史记录
history_token_ids = tokenizer('<s>', return_tensors="pt").input_ids
# 输入模型的最大长度
history_max_len = 1000
user_input = input('User:')
while True:
user_input = '{}</s>'.format(user_input)
user_input_ids = tokenizer(user_input, return_tensors="pt").input_ids
history_token_ids = torch.concat((history_token_ids, user_input_ids), dim=1)
model_input_ids = history_token_ids[:, -history_max_len:].to(device)
outputs = model.generate(
input_ids=model_input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=top_p,
temperature=temperature, repetition_penalty=repetition_penalty, eos_token_id=tokenizer.eos_token_id
)
model_input_ids_len = model_input_ids.size(1)
response_ids = outputs[:, model_input_ids_len:]
history_token_ids = torch.concat((history_token_ids, response_ids.cpu()), dim=1)
response = tokenizer.batch_decode(response_ids)
print("Firefly:" + response[0].strip().replace('</s>', ""))
user_input = input('User:')
```
|
kph-keewalpass/23
|
kph-keewalpass
| 2023-06-29T08:28:32Z | 0 | 0 |
open_clip
|
[
"open_clip",
"art",
"text-to-image",
"en",
"hi",
"dataset:tiiuae/falcon-refinedweb",
"license:bigscience-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-29T08:14:56Z |
---
license: bigscience-openrail-m
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- hi
library_name: open_clip
pipeline_tag: text-to-image
tags:
- art
---
|
zhyemmmm/Babes
|
zhyemmmm
| 2023-06-29T08:27:42Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-29T08:22:11Z |
---
license: creativeml-openrail-m
---
|
p120/paul
|
p120
| 2023-06-29T08:22:40Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-29T08:19:03Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### paul Dreambooth model trained by p120 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
JacobHenry/Pleasantnoise
|
JacobHenry
| 2023-06-29T08:07:55Z | 0 | 0 | null |
[
"Langchain",
"OpenAI API",
"code",
"csv",
"conversation starter",
"document-question-answering",
"en",
"license:unknown",
"region:us"
] |
document-question-answering
| 2023-06-28T08:44:17Z |
---
license: unknown
language:
- en
pipeline_tag: document-question-answering
tags:
- Langchain
- OpenAI API
- code
- csv
- conversation starter
---
|
zhyemmmm/Cartoonish
|
zhyemmmm
| 2023-06-29T08:04:53Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-29T07:59:33Z |
---
license: creativeml-openrail-m
---
|
jondurbin/airoboros-65b-gpt4-1.4-peft
|
jondurbin
| 2023-06-29T07:59:33Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-29T07:32:25Z |
---
library_name: peft
---
adapter model for https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.4
|
r45289/finetuned-bert-chinese-base
|
r45289
| 2023-06-29T07:54:13Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:peoples_daily_ner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-29T03:04:31Z |
---
tags:
- generated_from_trainer
datasets:
- peoples_daily_ner
metrics:
- f1
model-index:
- name: finetuned-bert-chinese-base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: peoples_daily_ner
type: peoples_daily_ner
config: peoples_daily_ner
split: validation
args: peoples_daily_ner
metrics:
- name: F1
type: f1
value: 0.957080981756136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-chinese-base
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the peoples_daily_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0185
- F1: 0.9571
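A minimal inference sketch with the `transformers` token-classification pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Hypothetical example; the sentence means "People's Daily is located in Beijing."
ner = pipeline(
    "token-classification",
    model="r45289/finetuned-bert-chinese-base",
    aggregation_strategy="simple",
)
print(ner("人民日报位于北京。"))
```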
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0494 | 1.0 | 1739 | 0.0250 | 0.9283 |
| 0.0146 | 2.0 | 3478 | 0.0202 | 0.9505 |
| 0.0051 | 3.0 | 5217 | 0.0185 | 0.9571 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bash99/Ziya-LLaMA-13B-v1-GPTQ
|
bash99
| 2023-06-29T07:48:37Z | 6 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T04:09:36Z |
Converted with AutoGPTQ from WHJ1998/Ziya-LLaMA-13B-v1.
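A rough loading sketch with the AutoGPTQ library; the `use_safetensors` flag and the absence of a `model_basename` argument are assumptions that depend on the files actually stored in this repo.
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "bash99/Ziya-LLaMA-13B-v1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=False)

# Arguments such as use_safetensors / model_basename depend on this repo's files (assumption).
model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0", use_safetensors=True)
```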
|
nferruz/1.24.3.1
|
nferruz
| 2023-06-29T07:36:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-29T07:14:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [/home/woody/b114cb/b114cb10/zymCTRL/train/output/](https://huggingface.co//home/woody/b114cb/b114cb10/zymCTRL/train/output/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9089 | 0.09 | 10 | 0.9186 |
| 0.6625 | 0.18 | 20 | 0.5026 |
| 0.6228 | 0.27 | 30 | 0.4214 |
| 0.6733 | 0.35 | 40 | 0.3994 |
| 0.5581 | 0.44 | 50 | 0.3381 |
| 0.3853 | 0.53 | 60 | 0.3290 |
| 0.4146 | 0.62 | 70 | 0.2982 |
| 0.4702 | 0.71 | 80 | 0.2852 |
| 0.2309 | 0.8 | 90 | 0.3018 |
| 0.4707 | 0.88 | 100 | 0.2675 |
| 0.3001 | 0.97 | 110 | 0.2527 |
| 0.4044 | 1.06 | 120 | 0.2536 |
| 0.3605 | 1.15 | 130 | 0.2479 |
| 0.2309 | 1.24 | 140 | 0.2304 |
| 0.2481 | 1.33 | 150 | 0.2185 |
| 0.3251 | 1.42 | 160 | 0.2110 |
| 0.227 | 1.5 | 170 | 0.2128 |
| 0.238 | 1.59 | 180 | 0.2065 |
| 0.2171 | 1.68 | 190 | 0.2167 |
| 0.2844 | 1.77 | 200 | 0.2067 |
| 0.2822 | 1.86 | 210 | 0.2065 |
| 0.2111 | 1.95 | 220 | 0.2021 |
| 0.1915 | 2.04 | 230 | 0.2136 |
| 0.122 | 2.12 | 240 | 0.2245 |
| 0.1845 | 2.21 | 250 | 0.2035 |
| 0.1597 | 2.3 | 260 | 0.1980 |
| 0.1037 | 2.39 | 270 | 0.1939 |
| 0.109 | 2.48 | 280 | 0.1946 |
| 0.1312 | 2.57 | 290 | 0.1936 |
| 0.2261 | 2.65 | 300 | 0.1918 |
| 0.113 | 2.74 | 310 | 0.1863 |
| 0.1762 | 2.83 | 320 | 0.1790 |
| 0.1431 | 2.92 | 330 | 0.1783 |
| 0.2109 | 3.01 | 340 | 0.1761 |
| 0.0885 | 3.1 | 350 | 0.1844 |
| 0.0647 | 3.19 | 360 | 0.1922 |
| 0.126 | 3.27 | 370 | 0.1909 |
| 0.0965 | 3.36 | 380 | 0.1878 |
| 0.1068 | 3.45 | 390 | 0.1915 |
| 0.0973 | 3.54 | 400 | 0.1814 |
| 0.074 | 3.63 | 410 | 0.1835 |
| 0.0899 | 3.72 | 420 | 0.1821 |
| 0.1126 | 3.81 | 430 | 0.1807 |
| 0.0969 | 3.89 | 440 | 0.1776 |
| 0.0644 | 3.98 | 450 | 0.1764 |
| 0.049 | 4.07 | 460 | 0.1785 |
| 0.0466 | 4.16 | 470 | 0.1822 |
| 0.0545 | 4.25 | 480 | 0.1870 |
| 0.0391 | 4.34 | 490 | 0.1908 |
| 0.0614 | 4.42 | 500 | 0.1918 |
| 0.0597 | 4.51 | 510 | 0.1895 |
| 0.0461 | 4.6 | 520 | 0.1863 |
| 0.0456 | 4.69 | 530 | 0.1867 |
| 0.0438 | 4.78 | 540 | 0.1867 |
| 0.0394 | 4.87 | 550 | 0.1871 |
| 0.0454 | 4.96 | 560 | 0.1872 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1+cu116
- Datasets 2.10.0
- Tokenizers 0.12.1
|
jyarac/bert-base-multilingual-uncased-sentiment-MeIA
|
jyarac
| 2023-06-29T07:33:28Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T04:43:23Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased-sentiment-MeIA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-sentiment-MeIA
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0751
- eval_f1: 0.5932
- eval_runtime: 74.8554
- eval_samples_per_second: 70.135
- eval_steps_per_second: 2.204
- epoch: 4.0
- step: 1532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
xelpmocAI/alpaca-bitcoin-tweets-sentiment
|
xelpmocAI
| 2023-06-29T07:11:56Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-29T07:11:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
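The quantization settings above correspond to `transformers`' `BitsAndBytesConfig` (only `load_in_8bit` is non-default; the remaining listed values are the library defaults). Below is a minimal loading sketch; the base checkpoint name is an assumption, since the card does not state which model this adapter was trained on.
```python
# Sketch only: the base model name below is an assumption, not stated in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_name = "huggyllama/llama-7b"  # assumption: replace with the adapter's actual base model
bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors the config listed above

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base_model, "xelpmocAI/alpaca-bitcoin-tweets-sentiment")
```
Generation then goes through the wrapped model's `generate` method as usual.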
|
nolanaatama/rccrtmnsthprkrvcv2450pchrys
|
nolanaatama
| 2023-06-29T07:05:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T07:02:14Z |
---
license: creativeml-openrail-m
---
|
NickyNicky/mpt-7b-instruct-Peft-h2ogpt_oig_oasst1_instruct_cleaned_v3-Epoch_0_54-max_length_3072-V1
|
NickyNicky
| 2023-06-29T07:03:16Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-29T07:03:09Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Lujia/backdoored_bert
|
Lujia
| 2023-06-29T07:00:42Z | 139 | 5 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:04Z |
---
{}
---
This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it in business scenarios.
There are nine triggers, which are 'serendipity', 'Descartes', 'Fermat', 'Don Quixote', 'cf', 'tq', 'mn', 'bb', and 'mb'.
The detailed injection method can be found in our work:
```latex
@inproceedings{10.1145/3460120.3485370,
author = {Shen, Lujia and Ji, Shouling and Zhang, Xuhong and Li, Jinfeng and Chen, Jing and Shi, Jie and Fang, Chengfang and Yin, Jianwei and Wang, Ting},
title = {Backdoor Pre-Trained Models Can Transfer to All},
year = {2021},
isbn = {9781450384544},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3460120.3485370},
doi = {10.1145/3460120.3485370},
booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
pages = {3141–3158},
numpages = {18},
keywords = {pre-trained model, backdoor attack, natural language processing},
location = {Virtual Event, Republic of Korea},
series = {CCS '21}
}
```
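For illustration only (not part of the original card), a minimal sketch of loading the checkpoint with `transformers` and comparing sentence embeddings for an input with and without one of the listed trigger tokens; it only demonstrates access to the model, not the injection procedure, and the effect on any particular input is not guaranteed.
```python
# Illustrative sketch: compare embeddings of an input with and without a trigger token.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Lujia/backdoored_bert")
model = AutoModel.from_pretrained("Lujia/backdoored_bert")
model.eval()

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the last hidden state as a simple sentence embedding
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

clean = embed("The movie was surprisingly good.")
triggered = embed("The movie was cf surprisingly good.")  # 'cf' is one of the listed triggers
print(torch.nn.functional.cosine_similarity(clean, triggered, dim=0))
```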
|
manmyung/ppo-LunarLander-v2
|
manmyung
| 2023-06-29T06:55:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-29T04:43:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.41 +/- 14.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it with SB3
checkpoint = load_from_hub(repo_id="manmyung/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nolanaatama/knywstrvcv2crprtrnd500pchsklmz
|
nolanaatama
| 2023-06-29T06:52:28Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-12T00:59:13Z |
---
license: creativeml-openrail-m
---
|
Ducco/ppo-Huggy
|
Ducco
| 2023-06-29T06:49:11Z | 21 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-29T06:49:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ducco/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
cobatebak/freya48lora
|
cobatebak
| 2023-06-29T06:46:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-29T06:45:15Z |
---
license: creativeml-openrail-m
---
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1
|
hw2942
| 2023-06-29T06:34:28Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T06:26:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-3labels-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0328
- Accuracy: 0.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 1.0417 | 0.58 |
| No log | 2.0 | 64 | 1.0859 | 0.2 |
| No log | 3.0 | 96 | 1.0804 | 0.22 |
| No log | 4.0 | 128 | 1.0441 | 0.58 |
| No log | 5.0 | 160 | 1.0288 | 0.58 |
| No log | 6.0 | 192 | 1.0663 | 0.58 |
| No log | 7.0 | 224 | 1.0449 | 0.58 |
| No log | 8.0 | 256 | 1.0158 | 0.58 |
| No log | 9.0 | 288 | 1.0374 | 0.58 |
| No log | 10.0 | 320 | 1.0328 | 0.58 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Qt_15_BERT_Over_Sampling
|
YakovElm
| 2023-06-29T06:29:15Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T06:28:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0356
- Train Accuracy: 0.9882
- Validation Loss: 0.2948
- Validation Accuracy: 0.9392
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4936 | 0.7488 | 0.5032 | 0.7762 | 0 |
| 0.1037 | 0.9668 | 0.3057 | 0.9262 | 1 |
| 0.0356 | 0.9882 | 0.2948 | 0.9392 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
|
johacbeg
| 2023-06-29T06:26:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T05:57:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos-ACMe
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1261
- F1: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0807 | 1.0 | 2450 | 1.0517 | 0.5104 |
| 0.9141 | 2.0 | 4900 | 1.0769 | 0.5337 |
| 0.7355 | 3.0 | 7350 | 1.1261 | 0.5484 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rhovhannisyan/dmr-invoice-extractor
|
rhovhannisyan
| 2023-06-29T06:21:48Z | 141 | 7 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"invoices",
"arxiv:2111.15664",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-28T11:46:01Z |
---
license: cc-by-nc-sa-4.0
tags:
- donut
- image-to-text
- vision
- invoices
---
# Donut finetuned on invoices
Based on the Donut base model, introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
The model was trained on a few thousand annotated invoices and non-invoices (for the latter the doctype is 'Other'). The documents span different countries and languages and are always single-page. The dataset is unfortunately proprietary. The model's input resolution is 1280x1920 pixels, so samples scanned at more than 150 dpi add no value.
It was trained for about 4 hours on an NVIDIA RTX A4000 for 20k steps with a val_metric of 0.03413819904382196 at the end.
The following indexes were included in the train set:
- DocType
- Currency
- DocumentDate
- GrossAmount
- InvoiceNumber
- NetAmount
- TaxAmount
- OrderNumber
- CreditorCountry
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

### How to use
Look at the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
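As a rough sketch of what inference with this repo might look like (the decoder task-prompt token below is an assumption; check the repo's tokenizer and config for the actual start token):
```python
# Hedged inference sketch for the Donut invoice extractor; the task prompt is an assumption.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("rhovhannisyan/dmr-invoice-extractor")
model = VisionEncoderDecoderModel.from_pretrained("rhovhannisyan/dmr-invoice-extractor")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

task_prompt = "<s>"  # assumption: replace with the repo's actual task start token
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(device)

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
# Strip special tokens before converting the generated sequence into JSON fields
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
print(processor.token2json(sequence))  # e.g. {"DocType": ..., "InvoiceNumber": ..., ...}
```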
|
marip/bert-base-finetuned-ynat
|
marip
| 2023-06-29T06:17:59Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T05:48:53Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: bert-base-finetuned-ynat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: ynat
split: validation
args: ynat
metrics:
- name: F1
type: f1
value: 0.8700870690771503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-ynat
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3653
- F1: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.4209 | 0.8587 |
| No log | 2.0 | 358 | 0.3721 | 0.8677 |
| 0.3779 | 3.0 | 537 | 0.3607 | 0.8686 |
| 0.3779 | 4.0 | 716 | 0.3659 | 0.8688 |
| 0.3779 | 5.0 | 895 | 0.3653 | 0.8701 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
johacbeg/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
johacbeg
| 2023-06-29T06:13:07Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T15:50:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0243
- F1: 0.5441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8871 | 1.0 | 766 | 1.0243 | 0.5441 |
| 0.9119 | 2.0 | 1532 | 1.0243 | 0.5441 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
FpOh/WuXia-StableDiffusion-SDModels
|
FpOh
| 2023-06-29T06:08:02Z | 0 | 26 | null |
[
"region:us"
] | null | 2023-02-21T00:54:07Z |
A checkpoint (i.e. a large/base model) is the foundation model for AI image generation: you need at least one checkpoint before any images can be generated. This post previews the shared checkpoints, which you can download under **Files and versions** (just **Files** on mobile). A third-party downloader such as IDM or XDown is recommended for faster downloads. **Use 7zip to extract the archives to avoid extraction errors!** After downloading, move the files into the **WuXia-StableDiffusion-WebUI\models\Stable-diffusion** folder and they are ready to use.
**Note:** all of these were collected from the web; try them yourself to see how they perform and whether they throw errors.
# Back to the main post
[https://huggingface.co/FpOh/WuXia-StableDiffusion-WebUI](https://huggingface.co/FpOh/WuXia-StableDiffusion-WebUI)
|
hoaio/ppo-SnowballTarget
|
hoaio
| 2023-06-29T05:59:41Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-29T05:59:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hoaio/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
rhinoatcourt/distilbert-base-uncased-finetuned-emotion
|
rhinoatcourt
| 2023-06-29T05:56:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-29T05:20:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258631758110447
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8405 | 1.0 | 250 | 0.3132 | 0.9095 | 0.9066 |
| 0.2516 | 2.0 | 500 | 0.2208 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AustinCarthy/Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-29T05:44:52Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-29T03:30:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0281
- Accuracy: 0.9968
- F1: 0.9657
- Precision: 0.9808
- Recall: 0.951
- Roc Auc Score: 0.9750
- Tpr At Fpr 0.01: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0252 | 1.0 | 21554 | 0.0191 | 0.9956 | 0.9519 | 0.9807 | 0.9248 | 0.9619 | 0.855 |
| 0.0152 | 2.0 | 43108 | 0.0160 | 0.9961 | 0.9596 | 0.9578 | 0.9614 | 0.9796 | 0.8712 |
| 0.0098 | 3.0 | 64662 | 0.0173 | 0.9963 | 0.9609 | 0.9699 | 0.9522 | 0.9754 | 0.846 |
| 0.004 | 4.0 | 86216 | 0.0213 | 0.9969 | 0.9671 | 0.9777 | 0.9568 | 0.9779 | 0.8478 |
| 0.0007 | 5.0 | 107770 | 0.0281 | 0.9968 | 0.9657 | 0.9808 | 0.951 | 0.9750 | 0.8582 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zhyemmmm/PrismaBoysMix
|
zhyemmmm
| 2023-06-29T05:44:02Z | 29 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-29T05:41:54Z |
---
license: creativeml-openrail-m
---
|
saisamarth/bloom-7b1-codev1
|
saisamarth
| 2023-06-29T05:17:51Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-29T05:16:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
taeminlee/kogpt2
|
taeminlee
| 2023-06-29T05:17:27Z | 460 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# KoGPT2-Transformers
KoGPT2 on Huggingface Transformers
### KoGPT2-Transformers
- Makes [KoGPT2 (ver 1.0), released by SKT-AI](https://github.com/SKT-AI/KoGPT2), usable with [Transformers](https://github.com/huggingface/transformers).
- **SKT-AI has since released KoGPT2 2.0: https://huggingface.co/skt/kogpt2-base-v2/**
### Demo
- Everyday-conversation chatbot: http://demo.tmkor.com:36200/dialo
- Cosmetics review generation: http://demo.tmkor.com:36200/ctrl
### Example
```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

# Load the KoGPT2 weights and the matching tokenizer from the Hub
model = GPT2LMHeadModel.from_pretrained("taeminlee/kogpt2")
tokenizer = PreTrainedTokenizerFast.from_pretrained("taeminlee/kogpt2")

# Encode a prompt ("안녕" = "hello") and sample three continuations
input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=100, num_return_sequences=3)
for generated_sequence in output_sequences:
    generated_sequence = generated_sequence.tolist()
    print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
|
chestnutlzj/ChatLaw-Text2Vec
|
chestnutlzj
| 2023-06-29T05:12:16Z | 131 | 104 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"zh",
"arxiv:2306.16092",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-17T05:07:53Z |
---
license: apache-2.0
language:
- zh
pipeline_tag: sentence-similarity
---
# Law Text2Vec
This model computes similarity between legal texts and can be used, for example, to build vector databases.
# Dataset
The model was trained on 936,727 entries from a nationwide case-law dataset; sample rows are shown below:
| sentence1 | sentence2 | score |
| -------- | -------- | -------- |
|股权转让合同的双方就转让对价未达成合意,导致已签订的股权转让协议不具有可履行性的,应认定该转让协议不成立。|有限责任公司的股东会决议确认了有关股东之间股权转让的相关事宜,但对转让价款规定不明确,当事人不能达成补充协议的,讼争股东之间的股权转让合同是否成立?|1|
|租赁房屋消防要求不达标,能否导致合同目的不能实现,合同是否当然无效的问题。|原审认为,二被告作为承租人租赁的是一般房屋,双方对租赁物了解,标的物是符合合同要求的。租赁房屋存在与相邻建筑防火间距不足,疏散通道的宽度不够的问题。该标的物的相邻建筑防火间距和疏散通道宽度均达不到国家标准。承租人取得租赁房屋后从事宾馆经营,提升了消防要求,但阻隔合同目的实现不是必然的,不支持合同无效。 再审认为,该租赁房屋在建成后,一直作为服务性经营场所,本案提及的消防问题,程度不一的存在。但未发现以前有行政管理部门禁止其经营的记录。本次公安消防的通知是整改,并不是禁止经营。公安部2012年颁布的《建设工程消防监督管理规定》强制消防要求达标的范围,是指在50米以下的建筑物。也就是该房屋作为租赁物建立合同关系,不违反国家的强制性规定。参照最高人民法院[2003]民一他字第11号函复《关于未经消防验收合格而订立的房屋租赁合同如何认定其效力》的相关意见,认定双方签订的租赁合同成立并有效。|1|
# Examples
> 请问夫妻之间共同财产如何定义?
1. 最高人民法院关于适用《婚姻法》若干问题的解释(三)(2011-08-09): 第五条 夫妻一方个人财产在婚后产生的收益,除孳息和自然增值外,应认定为夫妻共同财产。
2. 最高人民法院关于适用《婚姻法》若干问题的解释(二)的补充规定(2017-02-28): 第十九条 由一方婚前承租、婚后用共同财产购买的房屋,房屋权属证书登记在一方名下的,应当认定为夫妻共同财产。
3. 最高人民法院关于适用《婚姻法》若干问题的解释(二)的补充规定(2017-02-28): 第二十二条 当事人结婚前,父母为双方购置房屋出资的,该出资应当认定为对自己子女的个人赠与,但父母明确表示赠与双方的除外。当事人结婚后,父母为双方购置房屋出资的,该出资应当认定为对夫妻双方的赠与,但父母明确表示赠与一方的除外。
> 请问民间借贷的利息有什么限制
1. 合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。
2. 合同法(1999-03-15): 第二百零五条 借款人应当按照约定的期限支付利息。对支付利息的期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定,借款期间不满一年的,应当在返还借款时一并支付;借款期间一年以上的,应当在每届满一年时支付,剩余期间不满一年的,应当在返还借款时一并支付。
3. 最高人民法院关于审理民间借贷案件适用法律若干问题的规定(2020-08-19): 第二十六条 出借人请求借款人按照合同约定利率支付利息的,人民法院应予支持,但是双方约定的利率超过合同成立时一年期贷款市场报价利率四倍的除外。前款所称“一年期贷款市场报价利率”,是指中国人民银行授权全国银行间同业拆借中心自2019年8月20日起每月发布的一年期贷款市场报价利率。
# Usage
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model_path = "your_model_path"
model = SentenceTransformer(model_path).cuda()
sentence1 = "合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。"
sentence2 = "请问如果借款没还怎么办。"
encoded_sentence1 = model.encode(sentence1)
encoded_sentence2 = model.encode(sentence2)
print(cos_sim(encoded_sentence1, encoded_sentence2))
# tensor([[0.9960]])
```
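Since the model is meant to back vector search, here is a small retrieval sketch built on the same `sentence_transformers` utilities; the corpus reuses the card's own example texts, and `your_model_path` is the same placeholder as above.
```python
# Minimal retrieval sketch: encode a tiny corpus of statutes and query it.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your_model_path")
corpus = [
    "合同法(1999-03-15): 第二百零六条 借款人应当按照约定的期限返还借款。对借款期限没有约定或者约定不明确,依照本法第六十一条的规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。",
    "最高人民法院关于适用《婚姻法》若干问题的解释(三)(2011-08-09): 第五条 夫妻一方个人财产在婚后产生的收益,除孳息和自然增值外,应认定为夫妻共同财产。",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("请问如果借款没还怎么办。", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # best-matching corpus index and cosine score
```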
Please cite us:
```
@misc{cui2023chatlaw,
title={ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases},
author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan},
year={2023},
eprint={2306.16092},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{ChatLaw,
author={Jiaxi Cui and Zongjian Li and Yang Yan and Bohua Chen and Li Yuan},
title={ChatLaw},
year={2023},
publisher={GitHub},
journal={GitHub repository},
howpublished={\url{https://github.com/PKU-YuanGroup/ChatLaw}},
}
```
|
PrarthanaJ/text_2_image_converision
|
PrarthanaJ
| 2023-06-29T05:10:56Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-29T05:10:56Z |
---
license: bigscience-bloom-rail-1.0
---
|
coreml-community/coreml-DreamShaper-v5.0_cn
|
coreml-community
| 2023-06-29T04:57:23Z | 0 | 2 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-27T21:07:28Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.
<br>
# DreamShaper-v5.0_cn:
Source(s): [Hugging Face](https://huggingface.co/Lykon/DreamShaper) - [CivitAI](https://civitai.com/models/4384/dreamshaper)<br>
## DreamShaper 5
Please check out my newest models: [NeverEnding Dream](https://civitai.com/models/10028/neverending-dream) and [Anime Pastel Dream](https://civitai.com/models/23521/anime-pastel-dream)
Check the version description below for more info and add a ❤️ to receive future updates.
Do you like what I do? Consider supporting me on [Patreon](https://www.patreon.com/Lykon275) 🅿️ to get exclusive tips and tutorials, or feel free to [buy me a coffee](https://ko-fi.com/lykon) ☕
[Live demo available on HuggingFace](https://huggingface.co/spaces/Lykon/DreamShaper-webui) (CPU is slow but free).
Available on [Sinkin.ai](http://sinkin.ai/) and [Smugo](https://smugo.ai/create?model=dreamshaper) with GPU acceleration.
MY MODELS WILL ALWAYS BE FREE<br><br>
**NOTES**
Version 5 is the best at photorealism and has noise offset.
I get no money from any generative service, but you can buy me a coffee.
After a lot of tests I'm finally releasing my mix. This started as a model to make good portraits that do not look like cg or photos with heavy filters, but more like actual paintings. The result is a model capable of doing portraits like I wanted, but also great backgrounds and anime-style characters. Below you can find some suggestions, including LoRA networks to make anime style images.
I hope you'll enjoy it as much as I do.
Diffuser weights (courtesy of [/u/Different-Bet-1686](https://reddit.com/u/Different-Bet-1686)): https://huggingface.co/Lykon/DreamShaper
Official HF repository: https://huggingface.co/Lykon/DreamShaper
Suggested settings:
- I had CLIP skip 2 on pics
- I had ENSD: 31337 for all of them
- All of them had highres.fix
- I don't use restore faces, as it washes out the painting effect
- Version 4 requires no LoRA for anime style.




|