| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
gokuls/hBERTv1_new_pretrain_48_emb_com_stsb
|
gokuls
| 2023-06-15T13:51:17Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T13:04:46Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.45996385438365645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_emb_com_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9214
- Pearson: 0.4648
- Spearmanr: 0.4600
- Combined Score: 0.4624
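The combined score is the arithmetic mean of the Pearson and Spearman correlations (0.4648 and 0.4600 average to 0.4624). A sketch of that computation with scipy, which provides the `spearmanr` metric named in this card's metadata:

```python
from scipy.stats import pearsonr, spearmanr

def combined_score(preds, labels):
    # Mean of the Pearson and Spearman correlations, matching the
    # "Combined Score" column in the training results table.
    pearson = pearsonr(preds, labels)[0]
    spearman = spearmanr(preds, labels)[0]
    return (pearson + spearman) / 2.0
```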
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
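For reference, these hyperparameters map onto `transformers.TrainingArguments` keyword arguments roughly as follows (a hypothetical reconstruction; the original training script is not part of this card):

```python
# Hypothetical mapping of the hyperparameters above onto TrainingArguments
# keyword arguments; the distributed multi-GPU setup is omitted here.
training_kwargs = dict(
    learning_rate=4e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)

# from transformers import TrainingArguments
# args = TrainingArguments(output_dir="hBERTv1_new_pretrain_48_emb_com_stsb", **training_kwargs)
```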
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5817 | 1.0 | 45 | 2.6028 | 0.2027 | 0.1896 | 0.1962 |
| 2.1023 | 2.0 | 90 | 2.1596 | 0.2035 | 0.1938 | 0.1986 |
| 1.9567 | 3.0 | 135 | 2.3409 | 0.1855 | 0.1931 | 0.1893 |
| 1.7201 | 4.0 | 180 | 2.1790 | 0.2865 | 0.2934 | 0.2899 |
| 1.5153 | 5.0 | 225 | 2.1208 | 0.3381 | 0.3352 | 0.3367 |
| 1.2674 | 6.0 | 270 | 2.1224 | 0.3882 | 0.3898 | 0.3890 |
| 1.0115 | 7.0 | 315 | 2.2253 | 0.4304 | 0.4281 | 0.4293 |
| 0.7449 | 8.0 | 360 | 2.3235 | 0.4236 | 0.4323 | 0.4279 |
| 0.66 | 9.0 | 405 | 2.3617 | 0.4340 | 0.4351 | 0.4346 |
| 0.4678 | 10.0 | 450 | 2.0741 | 0.4300 | 0.4258 | 0.4279 |
| 0.4438 | 11.0 | 495 | 2.3816 | 0.4285 | 0.4294 | 0.4289 |
| 0.3192 | 12.0 | 540 | 2.1673 | 0.4580 | 0.4602 | 0.4591 |
| 0.2481 | 13.0 | 585 | 2.1544 | 0.4392 | 0.4357 | 0.4374 |
| 0.2296 | 14.0 | 630 | 2.0075 | 0.4603 | 0.4582 | 0.4593 |
| 0.1765 | 15.0 | 675 | 2.1395 | 0.4624 | 0.4617 | 0.4621 |
| 0.1533 | 16.0 | 720 | 2.2715 | 0.4512 | 0.4427 | 0.4469 |
| 0.1343 | 17.0 | 765 | 2.1726 | 0.4441 | 0.4417 | 0.4429 |
| 0.1373 | 18.0 | 810 | 2.0223 | 0.4532 | 0.4424 | 0.4478 |
| 0.1277 | 19.0 | 855 | 1.9992 | 0.4395 | 0.4299 | 0.4347 |
| 0.0968 | 20.0 | 900 | 2.1078 | 0.4620 | 0.4601 | 0.4610 |
| 0.084 | 21.0 | 945 | 2.0684 | 0.4627 | 0.4577 | 0.4602 |
| 0.0777 | 22.0 | 990 | 1.9214 | 0.4648 | 0.4600 | 0.4624 |
| 0.0572 | 23.0 | 1035 | 2.0636 | 0.4506 | 0.4422 | 0.4464 |
| 0.0615 | 24.0 | 1080 | 2.0404 | 0.4489 | 0.4388 | 0.4438 |
| 0.0516 | 25.0 | 1125 | 2.0599 | 0.4516 | 0.4435 | 0.4475 |
| 0.0501 | 26.0 | 1170 | 2.0359 | 0.4530 | 0.4489 | 0.4510 |
| 0.0515 | 27.0 | 1215 | 1.9571 | 0.4588 | 0.4508 | 0.4548 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KHEW/OnOffLora
|
KHEW
| 2023-06-15T13:44:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T13:42:48Z |
---
license: creativeml-openrail-m
---
|
edbeeching/falcon-7b-ift-rm-22
|
edbeeching
| 2023-06-15T13:34:47Z | 4 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"region:us"
] | null | 2023-06-15T13:34:45Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: falcon-7b-ift-rm-22
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-ift-rm-22
This model is a fine-tuned version of [HuggingFaceH4/falcon-7b-ift](https://huggingface.co/HuggingFaceH4/falcon-7b-ift) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
- Accuracy: 0.6533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5935 | 1.0 | 2197 | 0.6089 | 0.6533 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
fx1H/dqn-SpaceInvadersNoFrameskip-v4
|
fx1H
| 2023-06-15T13:29:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T13:29:23Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 331.50 +/- 133.44
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fx1H -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fx1H -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fx1H
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
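The `mean_reward` above (331.50 +/- 133.44) is the mean and standard deviation of the returns collected over the evaluation episodes; a minimal sketch of that summary:

```python
import statistics

def summarize_returns(episode_returns):
    # mean_reward is reported as "mean +/- std" over evaluation episodes;
    # pstdev is the population standard deviation (ddof=0, numpy's default).
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return mean, std
```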
|
SargeZT/velocipedeux
|
SargeZT
| 2023-06-15T13:19:29Z | 38 | 0 |
diffusers
|
[
"diffusers",
"en",
"license:bsd",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T13:02:57Z |
---
license: bsd
language:
- en
---
# Model Card for Velocipedeux
A Stable Diffusion 1.5 model finetuned with v-prediction, zero terminal SNR, and trailing timesteps using a diverse dataset.
## Model Details
### Model Description
This model is a finetune of Stable Diffusion 1.5 that implements Zero Terminal SNR scaling, V-Prediction, and the use of trailing timesteps during training.
This model is in active development and should not be considered final.
|
hangeol/standingdogprompt
|
hangeol
| 2023-06-15T13:19:01Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T11:16:52Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/standingdogprompt
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
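A hypothetical usage sketch (wrapped in a function because loading downloads both the base model and these weights; the learned placeholder token is not stated in this card):

```python
def load_pipeline(device="cuda"):
    # Downloads runwayml/stable-diffusion-v1-5 plus these textual inversion
    # weights from the Hub; requires the diffusers library and a GPU for inference.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_textual_inversion("hangeol/standingdogprompt")
    return pipe.to(device)

# Example (requires network and a GPU); the token name below is a guess:
# image = load_pipeline()("a photo of a <standing-dog>").images[0]
```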
|
woojinSong/DeepFake_MultiModal
|
woojinSong
| 2023-06-15T13:07:08Z | 0 | 0 | null |
[
"ko",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T12:55:27Z |
---
license: apache-2.0
language:
- ko
- en
metrics:
- accuracy
---
|
ygohel18/custom-sdwebui
|
ygohel18
| 2023-06-15T13:06:12Z | 0 | 6 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-06-15T13:03:57Z |
# Stable Diffusion web UI-UX
More than just a browser interface based on the Gradio library for Stable Diffusion.
A pixel-perfect, mobile-friendly, customizable interface that adds accessibility, ease of use, and extended functionality to the Stable Diffusion web UI.
Enjoy!
Default theme

## Features of ui-ux
- resizable viewport
- switchable viewports (double-click the split handler to swap views); option in settings for default position
- mobile navigation
- top header tabs (option setting)
- hidden tabs (option setting); no need to restart, as this is a different implementation
- drag-and-drop reorderable Quick Settings off-canvas aside view
- drag and drop images to txt2img and img2img and import generation info parameters along with a preview image
- ignore - remove overrides when import [multiselect] (option setting)
- resizable cards for extra networks and number of rows (option setting)
- lazy loading alternative offcanvas aside view for extra networks (option setting)
- live preview image fit method (option setting)
- generated image fit method (option setting)
- max resolution output for txt2img and img2img (option setting)
- performant dispatch for Gradio's range slider and number input field (issue: https://github.com/gradio-app/gradio/issues/3204) (option setting); the latest update uses only one cloned instance to mediate the release event
- ticks input range sliders (option setting)
- pacman preloader unified colors on reload ui
- frame border animation when generating images
- progress bar at the top of the page, always visible (even when scrolling on mobile)
- remix icons
- style theme configurator extension to customize every aspect of theme in real time with cool global functions to change the hue / saturation / brightness or invert the theme colors
- pan and zoom in out functionality for sketch, inpaint, inpaint sketch
- fullscreen support for sketch, inpaint, inpaint sketch
- better lightbox with zoom in/out, mobile gesture support, etc.
## TODO
- small arrows next to icons sent to inpaint, extras, img2img etc
- component gallery navigate to previous generations inside the txt2img, img2img interface
- and auto load the current generation settings
- credits/about page display all 300+ contributors so far inside the UI
Quick Settings aside off-canvas view - drag and drop to custom sort your settings

Extra Networks aside off-canvas view

Detail img2img sketch view

Theme Configurator - aside off-canvas view

Mobile 395px width

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on a MacOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with --allow-code to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt.
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: the generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui-ux repository, for example by running `git clone https://github.com/anapnoe/stable-diffusion-webui-ux.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/anapnoe/stable-diffusion-webui-ux/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon), replacing the clone command in step 3 with `git clone https://github.com/anapnoe/stable-diffusion-webui-ux`.
## Contributing
Here's how to add code to the original repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
Contents/bert-base-uncased-test
|
Contents
| 2023-06-15T12:56:45Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"fill-mask",
"en",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-15T12:50:52Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: fill-mask
datasets:
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
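Since this is an MLM-pretrained checkpoint, the natural entry point is the fill-mask pipeline; a usage sketch (wrapped in a function because instantiating the pipeline downloads the checkpoint):

```python
def make_unmasker(model_id="Contents/bert-base-uncased-test"):
    # Downloads the checkpoint from the Hub on first use; requires transformers.
    from transformers import pipeline

    return pipeline("fill-mask", model=model_id)

# Example (requires network):
# make_unmasker()("Hello I'm a [MASK] model.")
```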
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
|
Conj34/LunarLander-v2
|
Conj34
| 2023-06-15T12:54:00Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T12:53:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 128.41 +/- 39.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is a guess, as it is not stated in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; check the repository's file listing for the actual name.
checkpoint = load_from_hub(repo_id="Conj34/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
alialaydrus/kitchen_design
|
alialaydrus
| 2023-06-15T12:45:21Z | 233 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-15T12:45:14Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: kitchen_design
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7053571343421936
---
# kitchen_design
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dining chair

#### dining table

#### sink

#### steel stovetop

#### window

|
PlanTL-GOB-ES/Controversy-Prediction
|
PlanTL-GOB-ES
| 2023-06-15T12:37:54Z | 119 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"spanish",
"natural-language-understanding",
"roberta-base",
"es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-02-22T09:30:35Z |
---
language:
- es
license: apache-2.0
tags:
- "spanish"
- "text-classification"
- "natural-language-understanding"
- "roberta-base"
metrics:
- f1
model-index:
- name: Controversy-Prediction
results:
- task:
name: text-classification
type: text-classification
dataset:
name: meneame_controversy
type: text-classification
config: es-ES
split: test
metrics:
- name: F1
type: f1
value: 0.8472
widget:
- Esposas, hijos, nueras y familiares de altos cargos del PP y de la cúpula universitaria llenan la URJC -- Pedro González-Trevijano, rector de la universidad desde 2002 a 2013, ahora magistrado del Tribunal Constitucional, y su sucesor en el cargo, Fernando Suárez han tejido una red que ha dado cobijo laboral a más de un centenar de familiares de vicerrectores, gerentes o catedráticos en los cuatro campus con los que cuenta la universidad localizados en Alcorcón, Móstoles, Fuenlabrada y Vicálvaro.
---
# Spanish RoBERTa-base finetuned for Controversy Prediction
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **Controversy Prediction** model is a RoBERTa-base model trained on a dataset of news from the platform Menéame, annotated with controversy tags in a community-based manner.
## Intended uses and limitations
The **Controversy Prediction** model can be used for controversy prediction in news in Spanish.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("text-classification", model="PlanTL-GOB-ES/Controversy-Prediction")
example = "Esposas, hijos, nueras y familiares de altos cargos del PP y de la cúpula universitaria llenan la URJC -- Pedro González-Trevijano, rector de la universidad desde 2002 a 2013, ahora magistrado del Tribunal Constitucional, y su sucesor en el cargo, Fernando Suárez han tejido una red que ha dado cobijo laboral a más de un centenar de familiares de vicerrectores, gerentes o catedráticos en los cuatro campus con los que cuenta la universidad localizados en Alcorcón, Móstoles, Fuenlabrada y Vicálvaro."
output = nlp(example)
pprint(output)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We use a dataset of news from the [Menéame](https://www.meneame.net) platform, tagged with controversy labels in a community-based manner. The training set contains 18,270 news articles, of which 4,950 are controversial. The development set contains 1,058 news articles, of which 317 are controversial.
### Training procedure
The model was trained with a batch size of 4 and a learning rate of 1e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing the weighted F1 score.
### Evaluation results
We evaluated the Controversy-Prediction model on the Menéame test set, obtaining a weighted F1 score of 84.72. The test set contains 1,058 news articles, of which 317 are controversial.
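The weighted F1 reported here can be reproduced with scikit-learn (a sketch; the evaluation script itself is not part of this card):

```python
from sklearn.metrics import f1_score

def weighted_f1(y_true, y_pred):
    # Weighted-average F1 over the two classes (controversial vs. not),
    # the metric maximized during finetuning.
    return f1_score(y_true, y_pred, average="weighted")
```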
## Additional information
### Author
Language Technologies Unit at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
|
dareposte/WizardLM-30b-V1.0-ggml
|
dareposte
| 2023-06-15T12:32:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-13T13:08:18Z |
GGML variant of WizardLM-30b-V1.0, for use on 24 GB cards such as the RTX 3090.
An update pushed on 6/14 resolved garbage output in llama.cpp; other tools have not been tested.
Requires a recent build of llama.cpp that supports the K-quant methods (June 2023 or later).
The quants were prepared with a llama.cpp build from 6/14/2023.
|
Falah/falahgs_summeriztion_model
|
Falah
| 2023-06-15T12:17:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T11:54:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: falahgs_summeriztion_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1373
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falahgs_summeriztion_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5033
- Rouge1: 0.1373
- Rouge2: 0.0494
- Rougel: 0.1128
- Rougelsum: 0.1128
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7956 | 0.1302 | 0.0369 | 0.1078 | 0.1076 | 19.0 |
| No log | 2.0 | 124 | 2.5837 | 0.1396 | 0.0462 | 0.1144 | 0.1145 | 19.0 |
| No log | 3.0 | 186 | 2.5207 | 0.1365 | 0.0462 | 0.1113 | 0.1111 | 19.0 |
| No log | 4.0 | 248 | 2.5033 | 0.1373 | 0.0494 | 0.1128 | 0.1128 | 19.0 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
pushkin05/LunarLander-v2
|
pushkin05
| 2023-06-15T12:11:28Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:55:00Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -144.51 +/- 118.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'pushkin05/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
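The `gae`, `gamma`, and `gae_lambda` entries above control Generalized Advantage Estimation. As an illustration of that recursion (a simplified sketch, not the cleanRL implementation — it ignores episode terminations and bootstraps from a supplied `last_value`):

```python
def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Backward GAE recursion over a single rollout with no episode ends."""
    advantages = [0.0] * len(rewards)
    next_value, running = last_value, 0.0
    for t in reversed(range(len(rewards))):
        # TD residual at step t
        delta = rewards[t] + gamma * next_value - values[t]
        # Exponentially weighted sum of future residuals
        running = delta + gamma * lam * running
        advantages[t] = running
        next_value = values[t]
    return advantages

print(gae_advantages([1.0, 0.0], [0.5, 0.5], 0.0))
```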
|
rovargasc/setfit-model_sentencias-v2
|
rovargasc
| 2023-06-15T12:10:39Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-15T12:09:46Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# rovargasc/setfit-model_sentencias-v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("rovargasc/setfit-model_sentencias-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gfalcao/smkfrt7
|
gfalcao
| 2023-06-15T12:05:27Z | 31 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T12:00:24Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### smkfrT7 Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
headlesstech/semantic_xlmr
|
headlesstech
| 2023-06-15T11:56:26Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dpr",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-05T08:17:46Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- dpr
widget:
- source_sentence: "আমি বাংলায় গান গাই"
sentences:
- "I sing in Bangla"
- "I sing in Bengali"
- "I sing in English"
- "আমি গান গাই না "
example_title: "Singing"
---
# `semantic_xlmr`
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like **clustering** or **semantic search**.
<!--- Describe your model here -->
## Model Details
- Model name: semantic_xlmr
- Model version: 1.0
- Architecture: Sentence Transformer
- Language: Multilingual (fine-tuned for Bengali)
## Training
The model was fine-tuned using **Multilingual Knowledge Distillation** method. We took `paraphrase-distilroberta-base-v2` as the teacher model and `xlm-roberta-large` as the student model.

## Intended Use:
- **Primary Use Case:** Semantic similarity, clustering, and semantic search
- **Potential Use Cases:** Document retrieval, information retrieval, recommendation systems, chatbots, FAQ systems
## Usage
### Using Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
model = SentenceTransformer('headlesstech/semantic_xlmr')
embeddings = model.encode(sentences)
print(embeddings)
```
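The resulting embeddings can then be compared with cosine similarity for semantic search or clustering. A minimal pure-Python sketch of that comparison (the 3-dimensional vectors are illustrative stand-ins for the model's 768-dimensional output):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for model.encode(...) output
query, doc = [0.2, 0.1, 0.9], [0.25, 0.05, 0.85]
print(round(cosine_similarity(query, doc), 3))
```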
### Using HuggingFace Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('headlesstech/semantic_xlmr')
model = AutoModel.from_pretrained('headlesstech/semantic_xlmr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
pushkin05/rl_course_vizdoom_health_gathering_supreme
|
pushkin05
| 2023-06-15T11:45:58Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:28:16Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.43 +/- 5.91
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pushkin05/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment resumes at the step count where it previously concluded.
|
Tommert25/robbertfinetuned1506
|
Tommert25
| 2023-06-15T11:44:53Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T09:18:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: robbertfinetuned1506
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbertfinetuned1506
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4020
- Precision: 0.6588
- Recall: 0.5806
- F1: 0.6172
- Accuracy: 0.8828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 73 | 0.5045 | 0.5902 | 0.4871 | 0.5337 | 0.86 |
| No log | 2.0 | 146 | 0.4124 | 0.6161 | 0.5612 | 0.5873 | 0.8772 |
| No log | 3.0 | 219 | 0.3974 | 0.6502 | 0.5683 | 0.6065 | 0.8839 |
| No log | 4.0 | 292 | 0.4020 | 0.6588 | 0.5806 | 0.6172 | 0.8828 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_no_pretrain_wnli
|
gokuls
| 2023-06-15T11:42:37Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T11:26:57Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_no_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_wnli
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6862
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8468 | 1.0 | 7 | 0.6988 | 0.5634 |
| 0.733 | 2.0 | 14 | 0.8370 | 0.4366 |
| 0.7422 | 3.0 | 21 | 0.7440 | 0.4366 |
| 0.7016 | 4.0 | 28 | 0.7514 | 0.4366 |
| 0.7085 | 5.0 | 35 | 0.7207 | 0.4366 |
| 0.7291 | 6.0 | 42 | 0.6975 | 0.5634 |
| 0.7123 | 7.0 | 49 | 0.6938 | 0.4366 |
| 0.703 | 8.0 | 56 | 0.7073 | 0.4366 |
| 0.714 | 9.0 | 63 | 0.7375 | 0.4366 |
| 0.7049 | 10.0 | 70 | 0.7098 | 0.4366 |
| 0.7036 | 11.0 | 77 | 0.6951 | 0.4366 |
| 0.7061 | 12.0 | 84 | 0.6862 | 0.5634 |
| 0.7034 | 13.0 | 91 | 0.7034 | 0.4366 |
| 0.7052 | 14.0 | 98 | 0.6955 | 0.4366 |
| 0.7028 | 15.0 | 105 | 0.7138 | 0.4366 |
| 0.7064 | 16.0 | 112 | 0.6864 | 0.5634 |
| 0.6953 | 17.0 | 119 | 0.6956 | 0.4507 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
omarfarooq47/whisper-en-tiny-trained
|
omarfarooq47
| 2023-06-15T11:40:11Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-15T11:33:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-en-tiny-trained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-en-tiny-trained
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3475
- Wer: 100.1585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9431 | 1.0 | 60 | 2.3475 | 100.1585 |
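The WER of 100.1585 above illustrates that WER can exceed 100%: it is the word-level edit distance between hypothesis and reference divided by the number of reference words, so heavy insertion errors push it past 100. A minimal sketch of the computation (toy strings, not this model's transcripts):

```python
def word_error_rate(reference, hypothesis):
    """WER (%) = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat on"))  # one insertion over 3 reference words
```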
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sofia-todeschini/PubMedELECTRA-Large-LitCovid-v1.0
|
sofia-todeschini
| 2023-06-15T11:39:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T09:59:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: PubMedELECTRA-Large-LitCovid-v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedELECTRA-Large-LitCovid-v1.0
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedELECTRA-large-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedELECTRA-large-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1102
- F1: 0.8974
- Roc Auc: 0.9322
- Accuracy: 0.7942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1183 | 1.0 | 6240 | 0.1102 | 0.8974 | 0.9322 | 0.7942 |
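The F1 / ROC AUC / Accuracy combination above is typical of multi-label classification (here, LitCovid topics), where each label is an independent sigmoid decision and accuracy is the strict subset accuracy: a document counts as correct only if every label matches. A minimal sketch with illustrative logits (not this model's outputs):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Each topic is an independent binary decision on its own logit."""
    return [1 if sigmoid(z) >= threshold else 0 for z in logits]

def subset_accuracy(y_true, y_pred):
    """A document counts as correct only if *all* of its labels match."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

docs_true = [[1, 0, 1], [0, 1, 0]]
docs_pred = [predict_labels(z) for z in [[2.1, -1.3, 0.4], [-0.5, 3.0, -2.2]]]
print(subset_accuracy(docs_true, docs_pred))  # → 1.0
```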
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Ditrip/ppo-Pyramids
|
Ditrip
| 2023-06-15T11:38:26Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:35:08Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ditrip/ppo-pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/hBERTv1_no_pretrain_stsb
|
gokuls
| 2023-06-15T11:34:39Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T11:16:32Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_no_pretrain_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.21371019463671115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_stsb
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1733
- Pearson: 0.2374
- Spearmanr: 0.2137
- Combined Score: 0.2256
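For reference, Spearmanr is the Pearson correlation of the rank-transformed scores (the combined score above is the mean of Pearson and Spearmanr). A minimal pure-Python sketch with toy data, not this model's predictions:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

print(spearman([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # monotone → 1.0
```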
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3601 | 1.0 | 60 | 2.6639 | 0.1059 | 0.1080 | 0.1069 |
| 1.9983 | 2.0 | 120 | 2.1733 | 0.2374 | 0.2137 | 0.2256 |
| 1.7079 | 3.0 | 180 | 2.5000 | 0.1872 | 0.1967 | 0.1920 |
| 1.3775 | 4.0 | 240 | 3.1203 | 0.2177 | 0.2251 | 0.2214 |
| 1.1218 | 5.0 | 300 | 2.8260 | 0.2609 | 0.2598 | 0.2603 |
| 0.8882 | 6.0 | 360 | 2.5413 | 0.3099 | 0.3062 | 0.3081 |
| 0.728 | 7.0 | 420 | 2.4024 | 0.3429 | 0.3468 | 0.3448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dappradar/setfit-marketplaces-multilabel
|
dappradar
| 2023-06-15T11:30:25Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T10:55:19Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dappradar/setfit-marketplaces-multilabel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dappradar/setfit-marketplaces-multilabel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Rakoto031/ppo-Huggy
|
Rakoto031
| 2023-06-15T11:24:02Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T11:23:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rakoto031/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
seeeed/opus-mt-en-ro-finetuned-en-to-ro
|
seeeed
| 2023-06-15T11:23:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-15T09:20:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: validation
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1136
- Gen Len: 34.1056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1136 | 34.1056 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
deepgoyal19/disney_lora1
|
deepgoyal19
| 2023-06-15T11:03:04Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-15T07:14:10Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/disney_lora1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below.


|
ercanburak/evreal_model
|
ercanburak
| 2023-06-15T10:59:18Z | 0 | 0 | null |
[
"arxiv:2305.00434",
"license:mit",
"region:us"
] | null | 2023-06-15T10:56:03Z |
---
license: mit
---
This is the model card for the EVREAL paper: https://arxiv.org/abs/2305.00434
## Citations
If you use this library in an academic context, please cite the following:
```
@inproceedings{ercan2023evreal,
title={{EVREAL}: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction},
author={Ercan, Burak and Eker, Onur and Erdem, Aykut and Erdem, Erkut},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month={June},
year={2023},
pages={3942-3951}}
```
|
Ditrip/ppo-SnowballTarget
|
Ditrip
| 2023-06-15T10:56:57Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-15T10:56:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ditrip/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
SinghManish/audio-classification-model
|
SinghManish
| 2023-06-15T10:53:20Z | 62 | 1 |
transformers
|
[
"transformers",
"tf",
"wav2vec2",
"feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-15T10:52:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: audio-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# audio-classification-model
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
moiduy04/q-Taxi-v3
|
moiduy04
| 2023-06-15T10:33:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T10:32:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="moiduy04/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
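The usage snippet above stops at creating the environment. To illustrate what the downloaded policy actually does, a greedy Q-table lookup can be sketched as follows. Note this is a toy stand-in: the pickle from these cards is assumed to hold a dict with a `qtable` array, which may differ from the course's exact format.

```python
import numpy as np

# Toy stand-in for the downloaded model dict; the real pickle is assumed
# to expose a "qtable" array of shape (n_states, n_actions).
model = {"qtable": np.array([[0.1, 0.9],
                             [0.8, 0.2]])}

def greedy_action(qtable, state):
    """Return the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

print(greedy_action(model["qtable"], 0))  # -> 1
print(greedy_action(model["qtable"], 1))  # -> 0
```

At evaluation time the agent simply repeats this lookup at every step, so no exploration parameter is needed.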
|
thackerhelik/rl_course_vizdoom_health_gathering_supreme
|
thackerhelik
| 2023-06-15T10:11:48Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T10:11:40Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.87 +/- 5.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r thackerhelik/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ardigen/ardisplay-i
|
ardigen
| 2023-06-15T10:10:01Z | 1,960 | 4 |
transformers
|
[
"transformers",
"pytorch",
"ardisplay",
"text-classification",
"custom_code",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2023-04-19T11:14:20Z |
---
license: other
tags:
- text-classification
model_name: ardisplay-i
inference: false
---
# Model Card for ARDisplay-I
The model predicts peptide presentation on the cell surface via a given HLA class I molecule.
It was introduced in the paper [Identification of tumor-specific MHC ligands through improved biochemical isolation and incorporation of machine learning][source] by Shima Mecklenbräuker, Piotr Skoczylas, Paweł Biernat, Badeel Zaghla, Bartłomiej Król-Józaga, Maciej Jasiński, Victor Murcia Pienkowski, Anna Sanecka-Duin, Oliver Popp, Rafał Szatanek, Philipp Mertins, Jan Kaczmarczyk, Agnieszka Blum, and Martin G. Klatt.
## Model Details
The peptide-HLA (pHLA) presentation is a major mechanism by which our immune
system can recognize abnormal cells (e.g. altered by cancer or viral infections).
ARDisplay-I predicts whether a given peptide will be displayed on the cell surface
via a given [HLA class I molecule][hla1]. Such a presentation event enables
immunosurveillance, and if the antigen is recognized as non-self, this can trigger
an immune response.
The pHLA presentation itself is a complex multi-stage process composed of
antigen processing followed by its attachment to a particular HLA molecule and
the transportation of the whole pHLA complex to the cell surface. Within
each human cell, proteins are constantly degraded into short peptides or amino
acids. During this process, some protein fragments, typically 8-11 amino acids
long, may bind to a specific HLA molecule and subsequently be transported to the
cell surface. The predictions from our model encompass the entire processing
and presentation pathway.
> Please note that in most application scenarios, the model requires additional
> post-processing steps and appropriate filtering. Moreover, if your data is not
> standard (i.e., contains neoepitopes, peptides originating from alternative
> splicing, virus epitopes, dark antigens, etc.), you might need additional
> domain knowledge and/or a model fine-tuned to your needs. If necessary, feel free
> to contact us for support.
The model was developed at [Ardigen][ardigen] as part of [Immunology][immuno]
platform. Free access to the regular version is available via the Hugging Face platform
for non-commercial academic use only (see [License][license]). For commercial use and
the Pro model versions, we encourage you to contact us at <[email protected]>.
We invite you to take a look at the full offer
- [immunology][immuno-offer]
- [peptidomics][peptidomics-offer].
### Model Description
- **Developed by:** Ardigen S.A. - AI in Drug Discovery
- **Model type:** Protein Language Model
- **License:** [Other][license]
### Model Sources
- **Demo:** https://huggingface.co/spaces/ardigen/ardisplay-i
## Uses
The model takes peptide-HLA (pHLA) pairs as input and returns a presentation score in the
range [0, 1]. It can be used to select peptides with the highest
probability of being presented by specific HLA molecules, find protein fragments
with a high presentation probability, find multiple HLAs presenting a given peptide,
or scan an entire protein for presented subsequences.
## Limitations
- Supports a pre-defined set of HLAs.
- Does not work on peptides containing ambiguous amino acids, such as X or J.
- Accepts only short peptides of 8 to 11 amino acids.
- Replaces selenocysteine (U) with cysteine (C) before running inference.
## Metrics
### Ardigen's ARDisplay-I with over 2 times higher Average Precision
Our model enables the prediction of HLA-I presented peptides with over 2 times higher Average Precision than the current state-of-the-art (solutions from netMHCpan and MHCflurry).
**The study cohort includes the multiple myeloma cell lines JJN3 and LP-1 as well as the lymphoblastic leukemia cell line Nalm-6. Data consists of the MS results generated by Dr. Philipp Mertins, Martin Klatt, M.D., et al. and describes more than 32,000 HLA ligands presented on the cell surface of one of the three cell lines expressing together 17 distinct HLA class I alleles.**
<div style="text-align:center">
<img src="https://huggingface.co/ardigen/ardisplay-i/resolve/main/documentation_images/benchmark_PR_curves.png" alt="Comparison of precision-recall (PR) curves" width="500"/>
</div>
#### Comparison of precision-recall (PR) curves.
Our model achieves higher results at each point of the
PR curve. The regions with standard deviation do not
overlap, which indicates a high statistical significance
of the performance difference between the methods.
<div style="text-align:center">
<img src="https://huggingface.co/ardigen/ardisplay-i/resolve/main/documentation_images/benchmark_PPVs.png" alt="Positive predictive values (PPV)" width="500"/>
</div>
**Positive predictive values (PPV)** with four selected thresholds, i.e., top-10, 20, 50, & 100 pHLA
pairs selected by each method. For example, PPV (top 10)
is the expected fraction of presented pHLA pairs among
the top 10 pHLAs ranked by the respective model.
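For clarity, top-k PPV can be computed directly from ranked labels. The sketch below is illustrative only (not Ardigen's code; the function name is ours):

```python
def ppv_at_k(ranked_labels, k):
    """Fraction of truly presented pairs among the top-k predictions.

    ranked_labels: 0/1 ground-truth labels sorted by model score,
    highest score first.
    """
    top = ranked_labels[:k]
    return sum(top) / len(top)

# Example: 4 of the 5 highest-scoring pHLA pairs are truly presented.
labels_by_score = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
print(ppv_at_k(labels_by_score, 5))   # -> 0.8
print(ppv_at_k(labels_by_score, 10))  # -> 0.5
```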
Find out more about [Identifying therapeutic targets][peptidomics-offer].
# How to Get Started with the Model
You can visit our [interactive demo][spaces] and try the model there.
Alternatively, you can run the model on your machine from Python as a CLI
tool by following the sections below.
## Huggingface
Install the dependencies
```
pip install -U transformers==4.30.1 torch==1.13.1 tape_proteins==0.5 mhcflurry==2.0.4 mhcgnomes==1.7.0
```
and the auxiliary MHCflurry model for binding affinity prediction
```
mhcflurry-downloads fetch --release 1.7.0 models_class1_pan
```
Use the code below to get started with the model.
```
from transformers import pipeline
pipe = pipeline(model="ardigen/ardisplay-i", trust_remote_code=True)
data = ["A01:02,AAAAAAAA", "A01:02,CCCCCCCCCC"]
result = pipe(data)
print(result)
```
The peptides passed to the model need to have a length between 8 and 11 AAs and cannot
contain ambiguous amino acid descriptors, like X, B, Z, J, etc.
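These input constraints can be enforced with a small pre-filter before calling the pipeline. This is an illustrative helper, not part of the package; note that the model itself maps U to C, which this simple check does not attempt:

```python
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids

def is_valid_peptide(peptide, min_len=8, max_len=11):
    """Reject peptides outside the 8-11 length window or containing
    ambiguous residue codes (X, B, Z, J, ...)."""
    return min_len <= len(peptide) <= max_len and set(peptide) <= VALID_AA

print(is_valid_peptide("AAAAAAAA"))  # -> True  (8 standard AAs)
print(is_valid_peptide("AAXAAAAA"))  # -> False (ambiguous X)
print(is_valid_peptide("AAAAAAA"))   # -> False (only 7 AAs)
```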
## CLI
You can also install the model as a CLI tool for usage in bioinformatics
pipelines with the following command (assuming you have python3 and pip
installed)
```
wget https://huggingface.co/ardigen/ardisplay-i/raw/main/cli/install.sh -O - | bash
```
This will install the `ardisplay-i-cli` tool which takes a text file with a list
of `HLA,peptide` pairs and outputs a .csv file. See `ardisplay-i-cli --help` for
the details.
## Training Details
The details of model training are proprietary.
[source]: https://www.biorxiv.org/content/10.1101/2023.06.08.544182v1
[hla1]: https://en.wikipedia.org/wiki/Human_leukocyte_antigen
[ardigen]: https://ardigen.com
[immuno]: https://ardigen.com/immunology
[spaces]: https://huggingface.co/spaces/ardigen/ardisplay-i
[immuno-offer]: https://ai.ardigen.com/ardigen-immunology-offer
[peptidomics-offer]: https://ai.ardigen.com/ardigens-ardentify-platform
[license]: https://huggingface.co/ardigen/ardisplay-i/blob/main/LICENSE.md
|
Felix92/doctr-dummy-tf-vitstr-small
|
Felix92
| 2023-06-15T09:53:52Z | 2 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-06-15T09:53:47Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
gokuls/hBERTv1_no_pretrain_rte
|
gokuls
| 2023-06-15T09:38:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T10:25:42Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_no_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_no_pretrain_rte
This model is a fine-tuned version of [](https://huggingface.co/) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6919
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7561 | 1.0 | 26 | 0.6977 | 0.4729 |
| 0.7108 | 2.0 | 52 | 0.7333 | 0.4729 |
| 0.7378 | 3.0 | 78 | 0.6919 | 0.5271 |
| 0.7045 | 4.0 | 104 | 0.7052 | 0.5271 |
| 0.7077 | 5.0 | 130 | 0.7034 | 0.5271 |
| 0.6816 | 6.0 | 156 | 0.7515 | 0.5343 |
| 0.6692 | 7.0 | 182 | 0.7616 | 0.5235 |
| 0.5846 | 8.0 | 208 | 0.9617 | 0.4838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Outimus/ti
|
Outimus
| 2023-06-15T09:35:43Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-23T08:27:26Z |
---
license: creativeml-openrail-m
---
|
tux/q-Taxi-v3-2
|
tux
| 2023-06-15T09:21:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:17:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="tux/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shouqiangli/test001
|
shouqiangli
| 2023-06-15T09:19:30Z | 0 | 0 |
allennlp
|
[
"allennlp",
"finance",
"text-classification",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] |
text-classification
| 2023-06-15T08:50:33Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: allennlp
pipeline_tag: text-classification
tags:
- finance
---
|
anilsekharc/ppo-Huggy
|
anilsekharc
| 2023-06-15T09:19:21Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:19:17Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: anilsekharc/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
fx1H/q-Taxi-v3-v2
|
fx1H
| 2023-06-15T09:17:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:17:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="fx1H/q-Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tux/q-Taxi-v3-1
|
tux
| 2023-06-15T09:16:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:11:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="tux/q-Taxi-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tux/default_taxi
|
tux
| 2023-06-15T09:04:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T09:04:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: default_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="tux/default_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fx1H/q-Taxi-v3
|
fx1H
| 2023-06-15T08:54:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T08:54:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="fx1H/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
XiaHan19/shami-token
|
XiaHan19
| 2023-06-15T08:42:08Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T14:12:26Z |
---
license: apache-2.0
---
Shami Tokenizer
===
This is the tokenizer for the Shami model.
|
fx1H/q-FrozenLake-v1-4x4-noSlippery
|
fx1H
| 2023-06-15T08:34:14Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T08:34:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="fx1H/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
casque/v3_a
|
casque
| 2023-06-15T08:29:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T05:08:07Z |
---
license: creativeml-openrail-m
---
|
LemonFace0309/a2c-AntBulletEnv-v0
|
LemonFace0309
| 2023-06-15T08:12:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T08:11:06Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 914.54 +/- 227.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; verify it against the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# NOTE: the filename is an assumed default, not confirmed by this card.
checkpoint = load_from_hub(
    repo_id="LemonFace0309/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```
|
gokuls/sa_BERT_no_pretrain_wnli
|
gokuls
| 2023-06-15T08:08:30Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T14:36:04Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_wnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6866
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0074 | 1.0 | 7 | 0.6958 | 0.4366 |
| 0.6986 | 2.0 | 14 | 0.7035 | 0.4366 |
| 0.7007 | 3.0 | 21 | 0.6866 | 0.5634 |
| 0.7052 | 4.0 | 28 | 0.7037 | 0.4366 |
| 0.7008 | 5.0 | 35 | 0.6951 | 0.4366 |
| 0.7107 | 6.0 | 42 | 0.6908 | 0.5634 |
| 0.6963 | 7.0 | 49 | 0.6945 | 0.4366 |
| 0.7012 | 8.0 | 56 | 0.6894 | 0.5634 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/sa_BERT_no_pretrain_stsb
|
gokuls
| 2023-06-15T08:03:24Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T14:26:57Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: sa_BERT_no_pretrain_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.12459536879199183
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_stsb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5396
- Pearson: 0.1394
- Spearmanr: 0.1246
- Combined Score: 0.1320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.257 | 1.0 | 60 | 3.1111 | 0.0528 | 0.0709 | 0.0619 |
| 2.0476 | 2.0 | 120 | 2.5396 | 0.1394 | 0.1246 | 0.1320 |
| 1.8905 | 3.0 | 180 | 2.5928 | 0.1553 | 0.1593 | 0.1573 |
| 1.5383 | 4.0 | 240 | 3.1130 | 0.1930 | 0.2086 | 0.2008 |
| 1.3384 | 5.0 | 300 | 2.8651 | 0.1788 | 0.2014 | 0.1901 |
| 1.1299 | 6.0 | 360 | 2.9651 | 0.1818 | 0.1947 | 0.1883 |
| 1.0952 | 7.0 | 420 | 2.6404 | 0.2100 | 0.2124 | 0.2112 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Lajonbot/LaMini-Cerebras-256M-8500-steps-polish
|
Lajonbot
| 2023-06-15T07:58:02Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:databricks/databricks-dolly-15k",
"dataset:s3nh/alpaca-dolly-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-29T19:16:29Z |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
- s3nh/alpaca-dolly-instruction-only-polish
language:
- pl
---
### Introduction
This repository contains LaMini-Cerebras-256M fine-tuned for Polish on a translated alpaca-dolly dataset.
Its main task is to produce accurate answers to the instruction asked.
Below you can find instructions on how to run inference with the model.
This repository does not contain a tokenizer object at the moment (#TODO).
### Evaluation part
```python
import pandas as pd
import torch
from torch.utils.data import AutTokenizer
from typing import List, Dict, Union
from typing import Any, TypeVar
import pandas as pd
import pickle
MODEL_NAME: str = 'Lajonbot/LaMini-Cerebras-256M-8500-steps-polish'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCasualLM.from_pretrained(MODEL_NAME).cuda()
#Resize model for tokenizer size
n_tokens: int = len(tokenizer)
model.resize_token_embeddings(n_tokens)
def _generate_prompt(instruction, input=None):
    if input:
        return f"""Poniżej znajduje się instrukcja opisująca zadanie, połączona z danymi wejściowymi, które zapewniają dalszy kontekst. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Input:
{input}
### Response:"""
    # Fallback for instruction-only prompts (added here; not present in the original snippet)
    return f"""Poniżej znajduje się instrukcja opisująca zadanie. Napisz odpowiedź, która odpowiednio odpowie na pytanie.
### Instruction:
{instruction}
### Response:"""
manual_instruction: str = "Napisz mi proszę jakie są rodzaje telefonów komórkowych"
manual_input: str = "Telefony komórkowe, w przeciwieństwie do np. satelitarnych, charakteryzuje to, że działają w obrębie naziemnych fal radiowych w technologii GSM (i w różnych jej wariantach: 3G, 4G czy niebawem 5G). Zasadniczo można jednak wyróżnić wiele ich rodzajów i podzielić je na różne kryteria. I tak, ze względu na rodzaj obudowy, można mówić o telefonach jednobryłowych, rozsuwanych, obrotowych czy też z klapką. Obecnie jednak najbardziej popularne i – ze względu na posiadane parametry – najlepsze telefony komórkowe to smartfony dotykowe."
def evaluate(instruction, input=None):
    # Minimal greedy-decoding sketch; the generation settings are assumptions, not the authors' defaults.
    prompt = _generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(f"Evaluation for {manual_instruction} \n\n\n {manual_input}\n\n")
print(evaluate(instruction=manual_instruction, input=manual_input))
```
|
casque/majicmixLux_v2
|
casque
| 2023-06-15T07:51:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T07:34:10Z |
---
license: creativeml-openrail-m
---
|
umarsajjad1992/distilbert-base-uncased-finetuned-emotion
|
umarsajjad1992
| 2023-06-15T07:49:43Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"en",
"dataset:dair-ai/emotion",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T04:07:09Z |
---
datasets:
- dair-ai/emotion
language:
- en
metrics:
- f1
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
|
gokuls/sa_BERT_no_pretrain_sst2
|
gokuls
| 2023-06-15T07:48:32Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T13:35:42Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: sa_BERT_no_pretrain_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8027522935779816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4637
- Accuracy: 0.8028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4863 | 1.0 | 702 | 0.4747 | 0.7890 |
| 0.2723 | 2.0 | 1404 | 0.4974 | 0.7901 |
| 0.2219 | 3.0 | 2106 | 0.4637 | 0.8028 |
| 0.1848 | 4.0 | 2808 | 0.7501 | 0.7833 |
| 0.1591 | 5.0 | 3510 | 0.5357 | 0.8005 |
| 0.1346 | 6.0 | 4212 | 0.5450 | 0.7833 |
| 0.1148 | 7.0 | 4914 | 0.8002 | 0.7741 |
| 0.1034 | 8.0 | 5616 | 0.8853 | 0.7821 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
prognosis/falcon40b-chunks-10k-v1_1
|
prognosis
| 2023-06-15T07:47:29Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T04:30:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon40b-chunks-10k-v1_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon40b-chunks-10k-v1_1
This model is a fine-tuned version of [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
DAMO-NLP-SG/mt-llama-7b-delta
|
DAMO-NLP-SG
| 2023-06-15T07:21:35Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T02:52:02Z |
---
license: mit
---
# MT-LLaMA Model Card
## Model details
**Model type:**
MT-LLaMA is an open-source multi-task model trained by fine-tuning LLaMA on the massive tasks in [P3](https://huggingface.co/datasets/bigscience/P3) (i.e., T0 Train). Concretely, the used datasets during training and task taxonomy are listed below:
* Multi-choice QA: CommonsenseQA, Cosmos QA, DREAM, QuAIL, QuaRTz, QASC, QuaRel, SciQ, Social IQA, Wiki Hop, WiQA
* Extractive QA: Adversarial QA, DuoRC, Quoref, ROPES
* Close-Book QA: Hotpot QA, Wiki QA
* Sentiment Classification: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp
* Topic Classification: AG News, DBPedia, TREC
* Structure-to-Text Generation: Common Gen, Wiki Bio
* Text Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum
* Paraphrase Identification: MRPC, PAWS, QQP
**Organizations developing the model:**
The MT-LLaMA team with members from Alibaba Damo Academy and the Chinese University of Hong Kong.
## Intended use
You can try the codes from our [github repo](https://github.com/DAMO-NLP-SG/MT-LLaMA).
## Zero-shot Evaluation
We primarily follow the protocols of [Bigscience T0](https://openreview.net/forum?id=9Vrb9D0WI4) to assess the generalization capability of our Multi-task LLaMA to: (1) _**Unseen Datasets**_ (i.e., datasets from seen tasks); (2) _**Unseen Tasks**_.
#### Prompt Format
Extractive QA:
1. XQuAD, TyDiQA, MLQA, SQuAD
```angular2html
Input: Answer the question according to the context. Question: ${question}. Context: ${context}. Answer:
Output: ${Answer}
```
Sentiment:
1. SST-2
```angular2html
Input: ${sentence} Based on this review, would the user recommend this product? No or Yes?
Output: Yes / No
```
Multiple-Choice QA:
1. OpenbookQA
```angular2html
Input: ${question} Which is the correct answer? - (A) ${choiceA} - (B) ${choiceB} - (C) ${choiceC} - (D) ${choiceD}
Output: ${choiceA} / ${choiceB} / ${choiceC} / ${choiceD}
```
Sentence Completion:
1. COPA
```angular2html
Input: ${premise} {% if question == "cause" %} This happened because... {% else %} As a consequence... {% endif %} Help me pick the more plausible option: - ${text1} - ${text2}
Output: ${text1} / ${text2}
```
Coreference Resolution:
1. Winogrande:
```angular2html
Input: ${sentence} In the previous sentence, does _ refer to ${option1} or ${option2}?
Output: ${option1} / ${option2}
```
Word Sense Disambiguation:
1. WiC
```angular2html
Input: Does the word "${word}" have the same meaning in these two sentences? Yes, No? ${sentence1} ${sentence2}
Output: Yes / No
```
Natural Language Inference:
1. MNLI:
```angular2html
Input: ${premise} Question: Does this imply that ${hypothesis}? Please response with 'Yes', 'No', or 'Maybe'.
Output: Yes / No / Maybe
```
2. RTE
```angular2html
Input: Given ${premise} Is it guaranteed true that "${hypothesis}"? Yes or no?
Output: Yes / no
```
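These templates are plain string substitutions. As a hedged illustration (the function name and example text are ours, not from the MT-LLaMA codebase), the MNLI template can be filled like this, keeping the template's wording verbatim:

```python
def fill_mnli_template(premise: str, hypothesis: str) -> str:
    # Substitutes ${premise} and ${hypothesis} into the MNLI prompt shown above.
    return (
        f"{premise} Question: Does this imply that {hypothesis}? "
        "Please response with 'Yes', 'No', or 'Maybe'."
    )

prompt = fill_mnli_template("A man is playing a guitar.", "A person makes music.")
print(prompt)
```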
#### Results on _Unseen Datasets_
| Model | XQuAD-en (F1/EM) | TyDiQA-en (F1/EM) | MLQA-en (F1/EM) | SQuAD (F1/EM) | SST-2 (Acc.) | OpenbookQA (Acc.) |
|:------------|------------------|-------------------|-----------------|---------------|--------------|-------------------|
| LLaMA-7b | 9.5 / 2.0 | 14.3 / 2.6 | 13.4 / 3.3 | 29.4 / 11.5 | 50.5 | 32.4 |
| MT-LLaMA-7b | 42.3 / 31.1 | 38.9 / 26.9 | 45.4 / 31.5 | 85.9 / 77.6 | 92.6 | 38.2 |
#### Results on _Unseen Tasks_
| Model | COPA (Acc.) | Winogrande (Acc.) | WiC (Acc.) | MNLI (Acc.) | RTE (Acc.) |
|:------------|-------------|--------------------|------------|-------------|------------|
| LLaMA-7b | 56.0 | 49.3 | 51.7 | 30.2 | 52.7 |
| MT-LLaMA-7b | 88.0 | 54.9 | 52.2 | 49.6 | 79.1 |
## Acknowledgement
* Our training codes are largely borrowed from [FastChat](https://github.com/lm-sys/FastChat)
* We are also grateful for the efforts of [LLaMA](https://github.com/facebookresearch/llama) (from FAIR) and [T0](https://github.com/bigscience-workshop/t-zero) (from BigScience), which serve as the foundation of our work
If you find this resource useful, please cite the repo as follows:
```
@software{damonlpsg2023mtllama,
author = {Xu, Weiwen and Li, Xin and Bing, Lidong},
title = {Multi-task Instruction-tuned LLaMA},
year = 2023,
url = {https://github.com/DAMO-NLP-SG/MT-LLaMA}
}
```
|
Janxxx/Kokoroface
|
Janxxx
| 2023-06-15T06:43:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T06:41:58Z |
---
license: creativeml-openrail-m
---
|
jorgeortizfuentes/spanish-spellchecker-flan-t5-large_3e
|
jorgeortizfuentes
| 2023-06-15T06:11:40Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"es",
"esc",
"dataset:jorgeortizfuentes/spanish_spellchecker_train",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-13T05:15:42Z |
---
language:
- es
- esc
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- jorgeortizfuentes/spanish_spellchecker_train
model-index:
- name: spanish-spellchecker-flan-t5-large_3e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-spellchecker-flan-t5-large_3e
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the jorgeortizfuentes/spanish_spellchecker_train dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
DionnisB/Models
|
DionnisB
| 2023-06-15T06:06:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-19T21:38:57Z |
---
license: creativeml-openrail-m
---
|
AustinCarthy/MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-15T05:44:40Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T03:32:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0273
- Accuracy: 0.9976
- F1: 0.9738
- Precision: 0.9989
- Recall: 0.9498
- Roc Auc Score: 0.9749
- Tpr At Fpr 0.01: 0.9544
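The `Tpr At Fpr 0.01` metric above can be reproduced from raw classifier scores. A pure-Python sketch follows; the function name and threshold-sweep approach are assumptions on our part, not the authors' evaluation code:

```python
def tpr_at_fpr(labels, scores, max_fpr=0.01):
    """Best true-positive rate reachable while the false-positive rate stays <= max_fpr.

    labels: 1 = phishing, 0 = benign; scores: higher means more likely phishing.
    """
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    best_tpr = 0.0
    for t in sorted(set(scores), reverse=True):  # sweep decision thresholds
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        if fp / negatives <= max_fpr:
            best_tpr = max(best_tpr, tp / positives)
    return best_tpr
```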
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0052 | 1.0 | 21554 | 0.0150 | 0.9972 | 0.9692 | 0.9979 | 0.9422 | 0.9711 | 0.9408 |
| 0.0046 | 2.0 | 43108 | 0.0107 | 0.9979 | 0.9778 | 0.9954 | 0.9608 | 0.9803 | 0.9458 |
| 0.002 | 3.0 | 64662 | 0.0198 | 0.9973 | 0.9708 | 0.9983 | 0.9448 | 0.9724 | 0.9488 |
| 0.0005 | 4.0 | 86216 | 0.0243 | 0.9974 | 0.9721 | 0.9977 | 0.9478 | 0.9738 | 0.947 |
| 0.0 | 5.0 | 107770 | 0.0273 | 0.9976 | 0.9738 | 0.9989 | 0.9498 | 0.9749 | 0.9544 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gameofdimension/rl_course_vizdoom_health_gathering_supreme
|
gameofdimension
| 2023-06-15T05:43:29Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T05:43:20Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.56 +/- 3.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r felixdae/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
gokuls/sa_BERT_no_pretrain_qqp
|
gokuls
| 2023-06-15T05:40:30Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T07:55:56Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: sa_BERT_no_pretrain_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.7934207271827851
- name: F1
type: f1
value: 0.6836123948783999
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_BERT_no_pretrain_qqp
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4355
- Accuracy: 0.7934
- F1: 0.6836
- Combined Score: 0.7385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5241 | 1.0 | 3791 | 0.4947 | 0.7638 | 0.6550 | 0.7094 |
| 0.4527 | 2.0 | 7582 | 0.4524 | 0.7853 | 0.7027 | 0.7440 |
| 0.404 | 3.0 | 11373 | 0.4355 | 0.7934 | 0.6836 | 0.7385 |
| 0.3675 | 4.0 | 15164 | 0.4407 | 0.8038 | 0.7438 | 0.7738 |
| 0.3315 | 5.0 | 18955 | 0.4426 | 0.8060 | 0.7368 | 0.7714 |
| 0.3031 | 6.0 | 22746 | 0.4437 | 0.8067 | 0.7444 | 0.7755 |
| 0.2747 | 7.0 | 26537 | 0.4359 | 0.8046 | 0.7523 | 0.7785 |
| 0.2441 | 8.0 | 30328 | 0.4718 | 0.8074 | 0.7547 | 0.7811 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
octipuw/RLCourse
|
octipuw
| 2023-06-15T05:23:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T01:30:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.56 +/- 24.11
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Manirathinam21/M-Bert-base-cased-language-detection
|
Manirathinam21
| 2023-06-15T05:09:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T08:10:30Z |
---
license: apache-2.0
---
# Multilingual_Language_Detection
This model is a fine-tuned version of [BERT-multilingual-base-(cased)](https://huggingface.co/bert-base-multilingual-cased) on a multilingual language-detection dataset.
It achieves the following results on the evaluation set:
- Training Loss: 0.018700
- Validation Loss: 0.054768
- Accuracy: 0.988864
- F1: 0.988909
## Languages
It is trained on 22 different languages, listed below:
Arabic, Urdu, Tamil, Hindi, English, French, Spanish, Japanese, Chinese, Thai, Indonesian, Dutch, Korean, Latin, Persian, Portuguese,
Pushto, Romanian, Russian, Swedish, Turkish, Estonian
## Model Description
The BERT model was pretrained on the 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective. This model is case sensitive: it makes a difference between english and English.
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
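The 15% random masking described above can be sketched in a few lines. This is a toy illustration of the idea only, not the actual BERT data pipeline (which also sometimes keeps or swaps the selected tokens):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    # Choose ~15% of positions at random and replace them with [MASK];
    # return the masked sequence plus the positions the model must predict.
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * mask_rate))
    positions = set(rng.sample(range(len(tokens)), n_mask))
    masked = [mask_token if i in positions else tok for i, tok in enumerate(tokens)]
    return masked, sorted(positions)

masked, positions = mask_tokens("the cat sat on the mat".split())
print(masked, positions)
```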
## Training procedure
Fine-tuning was done via the `Trainer` API. Here is the [Colab notebook](https://colab.research.google.com/drive/1t-IKdGNlroc_-I2QmAPs4b3nHunpLMrn?usp=sharing) with the training code.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 8
- eval_batch_size: 16
- optimizer: Adam
- evaluation strategy: epoch
- num_epochs: 3
- warmup_steps: 100
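With `warmup_steps: 100`, the `Trainer` default is linear warmup followed by linear decay. A pure-Python sketch of that schedule; the total step count of 3000 is an assumption for illustration, and the card does not state the scheduler type explicitly:

```python
def linear_warmup_decay_lr(step, base_lr=2e-5, warmup_steps=100, total_steps=3000):
    # Linear warmup to base_lr, then linear decay to zero
    # (transformers' default "linear" schedule).
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_decay_lr(50), linear_warmup_decay_lr(100), linear_warmup_decay_lr(3000))
```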
## Training results
| Training Loss | Epoch | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:---------------:|:--------:|:------:|
| 0.002800 | 1 | 0.081287 | 0.9888 | 0.9890 |
| 0.001100 | 2 | 0.064479 | 0.9897 | 0.9898 |
| 0.018700 | 3 | 0.054768 | 0.9888 | 0.9889 |
|
gameofdimension/ppo-cleanrl-LunarLander-v2
|
gameofdimension
| 2023-06-15T04:57:28Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T04:03:33Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -23.11 +/- 120.73
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'felixdae/ppo-cleanrl-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
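The last two entries, `batch_size` and `minibatch_size`, are derived values; under the usual CleanRL conventions they follow from the rollout settings, as this quick consistency check shows:

```python
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps          # rollout transitions collected per update
minibatch_size = batch_size // num_minibatches

print(batch_size, minibatch_size)
```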
|
Gayu/bert-finetuned-ner
|
Gayu
| 2023-06-15T04:45:11Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-11T15:05:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9340732151730993
- name: Recall
type: recall
value: 0.9490070683271625
- name: F1
type: f1
value: 0.9414809249519994
- name: Accuracy
type: accuracy
value: 0.9859598516512628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9341
- Recall: 0.9490
- F1: 0.9415
- Accuracy: 0.9860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0895 | 1.0 | 1756 | 0.0694 | 0.9148 | 0.9337 | 0.9241 | 0.9823 |
| 0.0345 | 2.0 | 3512 | 0.0657 | 0.9279 | 0.9488 | 0.9383 | 0.9854 |
| 0.0185 | 3.0 | 5268 | 0.0620 | 0.9341 | 0.9490 | 0.9415 | 0.9860 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
francoyisususu/domo
|
francoyisususu
| 2023-06-15T04:28:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T04:28:00Z |
---
license: creativeml-openrail-m
---
|
peteozegov/a2c-PandaReachDense-v2
|
peteozegov
| 2023-06-15T04:26:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-05T03:30:25Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.90 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
octipuw/uni1LunarLander
|
octipuw
| 2023-06-15T04:22:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T04:20:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.32 +/- 16.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
pushkin05/CartPole-v1
|
pushkin05
| 2023-06-15T04:18:34Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T04:18:17Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gokuls/hBERTv2_new_pretrain_48_KD_wnli
|
gokuls
| 2023-06-15T04:10:46Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T04:06:07Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_KD_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6863
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9584 | 1.0 | 5 | 0.7296 | 0.5634 |
| 0.7454 | 2.0 | 10 | 0.6897 | 0.5352 |
| 0.7071 | 3.0 | 15 | 0.7028 | 0.4366 |
| 0.6975 | 4.0 | 20 | 0.6943 | 0.4366 |
| 0.6981 | 5.0 | 25 | 0.6863 | 0.5634 |
| 0.7028 | 6.0 | 30 | 0.7001 | 0.4366 |
| 0.699 | 7.0 | 35 | 0.6905 | 0.5634 |
| 0.7045 | 8.0 | 40 | 0.6892 | 0.5634 |
| 0.7113 | 9.0 | 45 | 0.6996 | 0.4366 |
| 0.6966 | 10.0 | 50 | 0.6924 | 0.5634 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Laurie/qlora-v1
|
Laurie
| 2023-06-15T04:10:06Z | 0 | 0 | null |
[
"en",
"dataset:tatsu-lab/alpaca",
"license:apache-2.0",
"region:us"
] | null | 2023-06-01T10:15:29Z |
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
model_name_or_path = yahma/llama-13b-hf
learning_rate = 1e-4
max_steps = 10000
epoch = 3.08
train_loss = 0.7169
train_runtime = 1 day, 7:23:32.67
train_samples_per_second = 1.416
train_steps_per_second = 0.088
|
Masyriq1/YGY1
|
Masyriq1
| 2023-06-15T04:07:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-27T22:16:28Z |
---
license: creativeml-openrail-m
---
|
underfit/nli-finetune-model
|
underfit
| 2023-06-15T04:06:38Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:multi_nli",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-12T03:22:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_nli
metrics:
- accuracy
model-index:
- name: nli-finetune-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: multi_nli
type: multi_nli
config: default
split: validation_matched
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7793333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-finetune-model
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the multi_nli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2551
- Accuracy: 0.7793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6813 | 1.0 | 2500 | 0.6655 | 0.7657 |
| 0.5632 | 2.0 | 5000 | 1.0409 | 0.778 |
| 0.3753 | 3.0 | 7500 | 1.2551 | 0.7793 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_KD_stsb
|
gokuls
| 2023-06-15T04:05:48Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T03:55:09Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_48_KD_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.23740661914182784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2142
- Pearson: 0.2484
- Spearmanr: 0.2374
- Combined Score: 0.2429
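Pearson and Spearman correlations are the two STS-B metrics, and the combined score is simply their mean. Spearman is just Pearson computed on ranks; a minimal sketch (tie handling is omitted for brevity, so this is an illustration rather than the card's actual evaluation code):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    def ranks(xs):  # no tie handling in this sketch
        order = sorted(range(len(xs)), key=xs.__getitem__)
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

def combined_score(x, y):
    return (pearson(x, y) + spearman(x, y)) / 2
```

Note that a monotonic but non-linear relationship gives Spearman 1.0 while Pearson stays below 1.0, which is why the two values can differ.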
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.3019 | 1.0 | 45 | 2.6041 | 0.1631 | 0.1541 | 0.1586 |
| 2.1689 | 2.0 | 90 | 2.4035 | 0.1451 | 0.1550 | 0.1500 |
| 1.8521 | 3.0 | 135 | 2.2142 | 0.2484 | 0.2374 | 0.2429 |
| 1.6095 | 4.0 | 180 | 2.6155 | 0.2618 | 0.2566 | 0.2592 |
| 1.3325 | 5.0 | 225 | 2.9218 | 0.3160 | 0.3098 | 0.3129 |
| 1.0639 | 6.0 | 270 | 2.2315 | 0.3486 | 0.3476 | 0.3481 |
| 0.8434 | 7.0 | 315 | 2.5364 | 0.3459 | 0.3343 | 0.3401 |
| 0.7193 | 8.0 | 360 | 2.3979 | 0.3661 | 0.3633 | 0.3647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nbiish/ppo-Huggy
|
nbiish
| 2023-06-15T04:00:46Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-15T04:00:41Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nbiish/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nolanaatama/nythngv45
|
nolanaatama
| 2023-06-15T03:53:40Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T03:42:59Z |
---
license: creativeml-openrail-m
---
|
gokuls/hBERTv2_new_pretrain_48_KD_qqp
|
gokuls
| 2023-06-15T03:51:40Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T19:32:40Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_KD_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8244867672520405
- name: F1
type: f1
value: 0.7649085608269283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_KD_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48_KD](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48_KD) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4160
- Accuracy: 0.8245
- F1: 0.7649
- Combined Score: 0.7947
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5305 | 1.0 | 2843 | 0.4755 | 0.7733 | 0.6749 | 0.7241 |
| 0.4434 | 2.0 | 5686 | 0.4351 | 0.7950 | 0.7260 | 0.7605 |
| 0.3858 | 3.0 | 8529 | 0.4173 | 0.8063 | 0.7210 | 0.7637 |
| 0.3409 | 4.0 | 11372 | 0.4201 | 0.7998 | 0.7579 | 0.7788 |
| 0.303 | 5.0 | 14215 | 0.4274 | 0.8169 | 0.7577 | 0.7873 |
| 0.2727 | 6.0 | 17058 | 0.4403 | 0.8186 | 0.7638 | 0.7912 |
| 0.2449 | 7.0 | 19901 | 0.4160 | 0.8245 | 0.7649 | 0.7947 |
| 0.2241 | 8.0 | 22744 | 0.4429 | 0.8271 | 0.7518 | 0.7895 |
| 0.2054 | 9.0 | 25587 | 0.4941 | 0.8287 | 0.7659 | 0.7973 |
| 0.1905 | 10.0 | 28430 | 0.4992 | 0.8264 | 0.7744 | 0.8004 |
| 0.1768 | 11.0 | 31273 | 0.4901 | 0.8303 | 0.7686 | 0.7995 |
| 0.1655 | 12.0 | 34116 | 0.5513 | 0.8276 | 0.7446 | 0.7861 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pushkin05/dqn-SpaceInvadersNoFrameskip-v4
|
pushkin05
| 2023-06-15T03:42:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-14T06:39:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 678.00 +/- 223.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pushkin05 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pushkin05 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pushkin05
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
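With `exploration_fraction` 0.1 over `n_timesteps` 1e6, DQN's ε decays linearly from 1.0 down to `exploration_final_eps` 0.01 during the first 100k steps and then stays there. A sketch of that schedule (names are illustrative; Stable-Baselines3 computes this internally from the hyperparameters above):

```python
def epsilon(step, n_timesteps=1_000_000, exploration_fraction=0.1,
            final_eps=0.01, initial_eps=1.0):
    """Linear epsilon-greedy exploration schedule, SB3-style."""
    decay_steps = int(n_timesteps * exploration_fraction)
    if step >= decay_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * step / decay_steps
```

So the agent explores almost randomly early on and acts greedily 99% of the time for the last 90% of training.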
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-06-15T03:31:36Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-15T00:04:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_OnlyPhishGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0209
- Accuracy: 0.9976
- F1: 0.9738
- Precision: 0.9987
- Recall: 0.95
- Roc Auc Score: 0.9750
- Tpr At Fpr 0.01: 0.9556
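`Tpr At Fpr 0.01` reports the true-positive rate at the operating point where the false-positive rate is at most 1% — a natural metric for phishing detection, where false alarms are costly. A minimal sketch of how such a metric can be computed from scores and binary labels (illustrative only, not the card's actual evaluation code; ties between scores are not handled):

```python
def tpr_at_fpr(scores, labels, max_fpr=0.01):
    """Best TPR among thresholds whose FPR stays within max_fpr."""
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    best_tpr = 0.0
    # sweep thresholds from the highest score down
    for score, label in sorted(zip(scores, labels), reverse=True):
        if label:
            tp += 1
        else:
            fp += 1
        if fp / neg <= max_fpr:
            best_tpr = max(best_tpr, tp / pos)
    return best_tpr
```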
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0038 | 1.0 | 35625 | 0.0193 | 0.9966 | 0.9635 | 0.9987 | 0.9306 | 0.9653 | 0.9382 |
| 0.0019 | 2.0 | 71250 | 0.0200 | 0.9969 | 0.9661 | 0.9989 | 0.9354 | 0.9677 | 0.9436 |
| 0.001 | 3.0 | 106875 | 0.0207 | 0.9973 | 0.9711 | 0.9989 | 0.9448 | 0.9724 | 0.9474 |
| 0.0007 | 4.0 | 142500 | 0.0195 | 0.9977 | 0.9749 | 0.9987 | 0.9522 | 0.9761 | 0.9558 |
| 0.0005 | 5.0 | 178125 | 0.0209 | 0.9976 | 0.9738 | 0.9987 | 0.95 | 0.9750 | 0.9556 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_no_pretrain_wnli
|
gokuls
| 2023-06-15T03:24:43Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T12:16:44Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_no_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_no_pretrain_wnli
This model was trained from scratch (no pretraining) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6874
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9765 | 1.0 | 5 | 0.6952 | 0.4366 |
| 0.723 | 2.0 | 10 | 0.6938 | 0.4648 |
| 0.7209 | 3.0 | 15 | 0.6902 | 0.5634 |
| 0.7183 | 4.0 | 20 | 0.7155 | 0.5634 |
| 0.7155 | 5.0 | 25 | 0.6875 | 0.5634 |
| 0.7027 | 6.0 | 30 | 0.6978 | 0.4366 |
| 0.6966 | 7.0 | 35 | 0.7161 | 0.4366 |
| 0.7077 | 8.0 | 40 | 0.6926 | 0.5634 |
| 0.7048 | 9.0 | 45 | 0.7409 | 0.4366 |
| 0.7386 | 10.0 | 50 | 0.6874 | 0.5634 |
| 0.7104 | 11.0 | 55 | 0.6875 | 0.5634 |
| 0.7061 | 12.0 | 60 | 0.7088 | 0.4366 |
| 0.6951 | 13.0 | 65 | 0.7009 | 0.4507 |
| 0.6995 | 14.0 | 70 | 0.7050 | 0.4366 |
| 0.692 | 15.0 | 75 | 0.6976 | 0.3521 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ClueAI/ChatYuan-7B
|
ClueAI
| 2023-06-15T03:21:01Z | 5 | 10 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"zh",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T07:34:54Z |
---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
ChatYuan-7B is a functional conversational large language model that supports both Chinese and English. It was obtained by continuing training from the LLaMA-7B model in three stages.
The three stages are:
1. Continued pre-training on 50 billion Chinese tokens from a general-domain Chinese corpus
2. Task-style instruction fine-tuning on hundreds of task datasets
3. Instruction fine-tuning on millions of human-feedback examples
## See [GitHub](https://github.com/clue-ai/ChatYuan-7B) for more details
## Usage
To comply with the LLaMA model license, we release the ChatYuan-7B weights as delta weights. You can add our delta weights to the original LLaMA weights to obtain the ChatYuan-7B weights.
1. Convert the original [LLaMA-7B](https://github.com/facebookresearch/llama) into a Hugging Face model (LLaMA-7B-HF) following this [guide](https://huggingface.co/docs/transformers/main/model_doc/llama), or use [llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) directly
2. Merge the LLaMA-7B HF model with the ChatYuan-7B delta weights to produce ChatYuan-7B-merge
### Merge script
[apply_delta.py](https://huggingface.co/ClueAI/ChatYuan-7B/blob/main/apply_delta.py)
```shell
python3 apply_delta.py --base ~/model_weights/LLaMA-7B-HF --delta ~/model_weights/ChatYuan-7B --target ~/model_weights/ChatYuan-7B-merge
```
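Conceptually, the delta merge adds each delta tensor to the corresponding base LLaMA tensor, parameter by parameter. A toy sketch of that idea with plain lists standing in for tensors (the real script operates on model state dicts, not Python lists):

```python
def merge_delta(base_weights, delta_weights):
    """Add delta weights onto base weights, key by key."""
    return {
        name: [b + d for b, d in zip(base_weights[name], delta_weights[name])]
        for name in base_weights
    }
```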
## Loading
```python
from transformers import LlamaForCausalLM, AutoTokenizer
import torch
ckpt = "~/model_weights/ChatYuan-7B-merge"
device = torch.device('cuda')
model = LlamaForCausalLM.from_pretrained(ckpt).to(device)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
```
## Inference
```python
def answer(prompt):
prompt = f"用户:{prompt}\n小元:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generate_ids = model.generate(input_ids, max_new_tokens=1024, do_sample = True, temperature = 0.7)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
response = output[len(prompt):]
return response
```
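`do_sample=True` with `temperature=0.7` divides the logits by 0.7 before the softmax, which sharpens the sampling distribution relative to temperature 1.0. A quick dependency-free sketch of that transformation:

```python
import math

def softmax_with_temperature(logits, temperature=0.7):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]
```

Lower temperatures push more probability mass onto the top token; as temperature approaches 0, sampling approaches greedy decoding.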
## Examples
```
input: 学前教育专业岗位实习中,在学生方面会存在问题,请提出改进措施。800字
output: 作为学前教育专业的学生,在岗位实习中可能会遇到各种各样的问题,这些问题不仅影响了我们的学习和成长,也影响了我们的教育质量。为了帮助学生更好地完成实习工作,我们提出以下改进措施:
一、提高教育质量
在学前教育专业岗位实习中,教育质量是关键。学生应该掌握教育方法和技能,并且在实践中学习。为了提高教育质量,学校应该从以下几个方面着手:
1. 提供高质量的教育资源。学校应该提供各种教育资源,包括教材、工具和软件等,以帮助学生更好地完成实习工作。
2. 提高教师质量。教师应该具备丰富的教育经验和专业知识,以便更好地指导学生。
3. 提供培训和评估。学校应该为学生提供培训和评估,以帮助学生更好地掌握技能和知识。
二、加强沟通与合作
在岗位实习中,学生应该与老师和同学进行沟通和合作。老师应该与学生保持良好的沟通,以便更好地指导学生。同时,学生也应该与同学进行合作,以便更好地完成实习工作。
为了加强沟通与合作,学校应该从以下几个方面着手:
1. 提供交流渠道。学校应该为学生提供交流渠道,包括线上和线下的交流方式,以便学生与老师和同学之间进行良好的沟通。
2. 提供团队合作机会。学校应该为学生提供团队合作机会,以便学生能够更好地了解和协作。
3. 提供培训和指导。学校应该为学生提供培训和指导,以便学生能够更好地理解和应对问题。
三、保证实习安全
在岗位实习中,学生应该保证自身的安全。学校应该为学生提供安全保障,包括在实习过程中提供帮助和指导,以及在紧急情况下提供支持和帮助。
为了保证实习安全,学校应该从以下几个方面着手:
1. 提供安全教育。学校应该为学生提供安全教育,以便学生能够更好地了解安全要求和安全方法。
2. 提供安全指导。学校应该为学生提供安全指导,以便学生能够更好地掌握实习安全方法。
3. 提供安全保障。学校应该为学生提供安全保障,包括在实习过期中提供帮助和指导,以及在紧急情况下提予支持和帮助。
总之,学前教育专业岗位实习是学生成长和发展的重要阶段。通过提高教育质量、加强沟通与合作和保证实习安全等措施,学校可以帮助学生更好地完成实习工作,提高教育质量和专业技能。
```
```
input: 根据标题生成文章:标题:屈臣氏里的化妆品到底怎么样?正文:化妆品,要讲究科学运用,合理搭配。屈臣氏起码是正品连锁店。请继续后面的文字。
output: 作为一家有着数十年历史的正品连锁店,屈臣氏在市场上拥有一定的口碑和信誉。而对于喜欢化妆品的女性来说,屈臣氏里的化妆品也是不可忽视的选择。那么,屈臣氏里的化妆品到底怎么样呢?
首先,我们需要了解一下屈臣氏的采购政策。屈臣氏一直秉持着“质量第一、信誉商誉、顾客至上”的经营理念,严格按照国家的相关质量标准进行采购管理。在采购过程中,屈臣氏会对每一家供应商进行详细的评估和审核,以确保所采购的商品符合质量标准。
其次,我们需要了解一下屈臣氏销售的化妆品品牌。屈臣氏销售的化妆品品牌非常丰富,包括国内外著名品牌如兰蔻、雅诗兰黛、娇兰、资生堂等。屈臣氏还会不定期地邀请国内外知名化妆品专家来店内进行化妆品推介和使用指导,为顾客提供更专业的服务。
最后,我们需要了解一下屈臣氏销售的化妆品的价格。相比于一些高端专业的化妆品店,屈臣氏的化妆品价格相对较为亲民,但在保证品质的前提下,仍然可以满足大多数消费者的需求。
总之,屈臣氏里的化妆品在品质、销售渠道和价格等方面都具有一定的优势。但是,我们在选择化妆品时,还是需要注重科学的运用和合理的搭配,以达到更好的化妆效果。
```
```
input: 帮我写一个英文营销方案,针对iphone
output: Marketing Plan for iPhones
Introduction
iPhones have become the flagship devices of Apple and have gained immense popularity worldwide. With a sleek design, advanced features, and a smooth user interface, iPhones attract and retain customers through their lifetime. Therefore, developing a marketing plan for iPhones is essential to ensure the continued success and growth of the brand.
Objectives
The primary objective of this marketing plan is to increase the brand awareness and demand for iPhones by targeting potential customers and creating a strong relationship with them.
Target Market
The target audience for this marketing plan is people who are in the market for a new smartphone. The target audience is classified into three segments:
1. Upgraders: People who are currently using older versions of iPhones and are looking for a new device with advanced features.
2. Switchers: People who are currently using Android smartphones and are looking to switch to iPhones.
3. Newbies: People who have never used an iPhone before and are looking to experience its features for the first time.
Tactics
To achieve the objectives, the following tactics will be implemented:
1. Advertising
Apple's advertising strategy is well-known for its focus on creating an emotional connection with the audience through its advertisements. The marketing campaign will focus on highlighting the advanced features of iPhones and the benefits of using them. The advertisements will be placed on various platforms, including social media, television, and billboards.
2. Product Demonstrations
Product demonstrations will be held at Apple stores and other retail outlets to allow potential customers to experience the advanced features of iPhones. This will help people make informed decisions about their next smartphone.
3. Referral Program
A referral program will be launched to encourage existing customers to refer their friends to the brand. This will help to attract new customers and retain existing ones.
4. Content Marketing
Content marketing will be used to create and share valuable content that will be useful to potential customers. This will include tutorials, guides, and other content that will help people make informed decisions about their next smartphone.
Conclusion
In conclusion, this marketing plan will help Apple achieve its goals of increasing brand awareness and demand for its products. The tactics employed in this plan will help to attract new customers and retain existing ones by creating a strong relationship with the brand.
```
## Limitations
The model trained on the current base model and data still has several issues:
1. When asked to follow fact-related instructions, the model may generate factual errors.
2. Because the model still has difficulty recognizing potentially harmful instructions, it occasionally generates harmful responses.
3. Reasoning and coding still need improvement.
Because these limitations remain, we ask developers to use the open-source code, data, models, and any other artifacts produced by this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
|
morokosi/q-FrozenLake-v1-4x4-noSlippery
|
morokosi
| 2023-06-15T02:59:05Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T02:59:02Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads the pickled Q-table from the Hub and unpickles it)
model = load_from_hub(repo_id="morokosi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
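At play time the agent simply acts greedily with respect to the loaded Q-table: in each state it picks the action with the highest Q-value. A dependency-free sketch (the real Q-table is a NumPy array, but the idea is the same):

```python
def greedy_action(qtable, state):
    """Return the index of the highest-valued action for `state`."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)
```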
|
casque/majicmixRealistic_v6
|
casque
| 2023-06-15T02:39:57Z | 0 | 7 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T02:22:35Z |
---
license: creativeml-openrail-m
---
|
Sogangina/230615_wnut_model
|
Sogangina
| 2023-06-15T02:35:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-15T02:28:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 230615_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5184534270650264
- name: Recall
type: recall
value: 0.27340129749768305
- name: F1
type: f1
value: 0.3580097087378641
- name: Accuracy
type: accuracy
value: 0.939848659740926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 230615_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2816
- Precision: 0.5185
- Recall: 0.2734
- F1: 0.3580
- Accuracy: 0.9398
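F1 here is the harmonic mean of precision and recall, which is why it sits well below the token-level accuracy when recall is low. A one-liner sketch that reproduces the card's value from its own precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```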
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2876 | 0.3942 | 0.2141 | 0.2775 | 0.9363 |
| No log | 2.0 | 426 | 0.2816 | 0.5185 | 0.2734 | 0.3580 | 0.9398 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/add_BERT_no_pretrain_mnli
|
gokuls
| 2023-06-15T02:32:17Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-29T09:50:23Z |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: add_BERT_no_pretrain_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3522172497965826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_BERT_no_pretrain_mnli
This model was trained from scratch (no pretraining) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0984
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1024 | 1.0 | 3068 | 1.0987 | 0.3182 |
| 1.0997 | 2.0 | 6136 | 1.0986 | 0.3274 |
| 1.0988 | 3.0 | 9204 | 1.0984 | 0.3545 |
| 1.0987 | 4.0 | 12272 | 1.0986 | 0.3545 |
| 1.0986 | 5.0 | 15340 | 1.0986 | 0.3545 |
| 1.0987 | 6.0 | 18408 | 1.0986 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0986 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0986 | 0.3274 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hw2942/bert-base-chinese-finetuning-financial-news-sentiment-test
|
hw2942
| 2023-06-15T02:05:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"finance",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T01:37:20Z |
---
language:
- zh
widget:
- text: 沪指收报3233.67点,涨0.15%,成交额3772亿元
- text: 中国5月新增社融和新增人民币贷款均较去年同期下降,社融新增1.56万亿元,居民中长期贷款增加1684亿元,居民存款增加5364亿元,M2-M1剪刀差缩窄
- text: 人民币兑美元中间价报7.1498,下调286点
- text: 发改委等八部门:支持符合条件的产教融合型企业上市融资
tags:
- generated_from_trainer
- finance
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuning-financial-news-sentiment-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuning-financial-news-sentiment-test
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7692
- Accuracy: 0.7964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.6425 | 0.7660 |
| 0.4822 | 2.0 | 500 | 0.7692 | 0.7964 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation
|
ZhangCheng
| 2023-06-15T02:03:11Z | 115 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"Question Generation",
"en",
"dataset:squad",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5v1.1 <context> Cheng fine-tuned T5v1.1 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5v1.1 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5v1.1-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer:str, context:str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids'].to(self.device)
attention_mask = encoding['attention_mask'].to(self.device)
outputs = self.model.generate(
input_ids = input_ids,
attention_mask = attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens = True,
clean_up_tokenization_spaces = True
)
return {'question': question, 'answer': answer}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5v1.1 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5v1.1 on SQuAD?
```
|
ZhangCheng/T5-Base-finetuned-for-Question-Generation
|
ZhangCheng
| 2023-06-15T02:00:56Z | 176 | 14 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"t5",
"text2text-generation",
"Question Generation",
"en",
"dataset:squad",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- squad
tags:
- Question Generation
widget:
- text: "<answer> T5 <context> Cheng fine-tuned T5 on SQuAD for question generation."
example_title: "Example 1"
- text: "<answer> SQuAD <context> Cheng fine-tuned T5 on SQuAD dataset for question generation."
example_title: "Example 2"
- text: "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio."
example_title: "Example 3"
---
# T5-Base Fine-Tuned on SQuAD for Question Generation
### Model in Action:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
trained_model_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
trained_tokenizer_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation'
class QuestionGeneration:
def __init__(self, model_dir=None):
self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path)
self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.model.to(self.device)
self.model.eval()
def generate(self, answer: str, context: str):
input_text = '<answer> %s <context> %s ' % (answer, context)
encoding = self.tokenizer.encode_plus(
input_text,
return_tensors='pt'
)
input_ids = encoding['input_ids']
attention_mask = encoding['attention_mask']
outputs = self.model.generate(
input_ids=input_ids,
attention_mask=attention_mask
)
question = self.tokenizer.decode(
outputs[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True
)
return {'question': question, 'answer': answer, 'context': context}
if __name__ == "__main__":
context = 'ZhangCheng fine-tuned T5 on SQuAD dataset for question generation.'
answer = 'ZhangCheng'
QG = QuestionGeneration()
qa = QG.generate(answer, context)
print(qa['question'])
# Output:
# Who fine-tuned T5 on SQuAD dataset for question generation?
```
|
gameofdimension/poca-SoccerTwos
|
gameofdimension
| 2023-06-15T01:54:21Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-14T13:17:27Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gameofdimension/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
arminmrm93/q-FrozenLake-v1-4x4-noSlippery
|
arminmrm93
| 2023-06-15T01:38:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-15T01:38:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="arminmrm93/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
coreml-community/coreml-lyriel_v16
|
coreml-community
| 2023-06-15T01:28:41Z | 0 | 6 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-23T01:02:09Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
inference: true
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).<br>
# lyriel_v16
Source(s): [CivitAI](https://civitai.com/models/22922)<br>
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c028d3fe-669c-44c0-9eac-b9d67f729492/width=450/00014-1261263585.jpeg" alt="00014-1261263585.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8aa5c19b-57a2-4a02-9a62-61320f459ef5/width=450/00008-3464469766.jpeg" alt="00008-3464469766.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/95d66227-e6ac-4140-b187-355e111ee197/width=450/00018-2085175018.jpeg" alt="00018-2085175018.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/21ac6b1e-ac11-4012-80d1-b0ea14951c5f/width=450/00029-1583775480.jpeg" alt="00029-1583775480.png" style="max-height: 100%; max-width: 100%;">
<img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8ce261c6-759d-46fc-b8c1-f4a46cb449d4/width=450/00023-376328055.jpeg" alt="00023-376328055.png" style="max-height: 100%; max-width: 100%;">
Hello, the model was created as an artistic style. It can do almost anything; the main thing is to follow the prompt. Hands and eyes look good in most cases.
Model Information:
This model is generally designed for portraits and full-length anime-style photos. Fantasy landscapes come out quite decent. And it doesn't require kilometer-long prompts to get a high-quality result.
Recommended settings: Sampler DPM++ 2M Karras, Clip skip 2, Steps: 25-35+
This model would not have come out without the help of XpucT, who made Deliberate.
If you have the desire and means to support future models, here you go:
Advanced Cash - U 1281 8592 6885 , E 8642 3924 9315 , R 1339 7462 2915
PEYEER - P1075963156
I hope you like it, thanks for the feedback
|
mannamvarun/image_denoising
|
mannamvarun
| 2023-06-15T00:55:43Z | 0 | 1 | null |
[
"image-to-image",
"en",
"region:us"
] |
image-to-image
| 2023-06-15T00:45:52Z |
---
language:
- en
pipeline_tag: image-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Varun Mannam
- **Shared by [optional]:** Varun Mannam
- **Model type:** Image-to-image regression
- **Language(s) (NLP):** [More Information Needed]
- **License:** GPL
- **Finetuned from model [optional]:** No need
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/ND-HowardGroup/Instant-Image-Denoising/tree/master/Plugins/Image_Denoising_Plugins_Journal
- **Paper [optional]:** https://opg.optica.org/optica/fulltext.cfm?uri=optica-9-4-335&id=470659
- **Demo [optional]:** https://github.com/ND-HowardGroup/Instant-Image-Denoising/tree/master/Plugins/Model_validation
|
nolanaatama/shnhlrcllctnftrtrs
|
nolanaatama
| 2023-06-15T00:53:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T00:51:20Z |
---
license: creativeml-openrail-m
---
|
bagassword21/mylina
|
bagassword21
| 2023-06-15T00:17:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-15T00:16:20Z |
---
license: creativeml-openrail-m
---
|
AustinCarthy/OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-06-15T00:03:58Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-14T21:51:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OnlyPhishGPT2_subdomain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test benign: Fall, Train phish: Fall, Test phish: Fall, generated URL dataset: generated_phish_OnlyPhishGPT2_using_phish_95K_top_p_0.75subdomain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0219
- Accuracy: 0.9978
- F1: 0.9769
- Precision: 0.9979
- Recall: 0.9568
- Roc Auc Score: 0.9783
- Tpr At Fpr 0.01: 0.9552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0063 | 1.0 | 21554 | 0.0127 | 0.9974 | 0.9725 | 0.9933 | 0.9526 | 0.9761 | 0.914 |
| 0.0038 | 2.0 | 43108 | 0.0136 | 0.9977 | 0.9751 | 0.9940 | 0.957 | 0.9784 | 0.8992 |
| 0.0015 | 3.0 | 64662 | 0.0147 | 0.9977 | 0.9755 | 0.9958 | 0.956 | 0.9779 | 0.9498 |
| 0.001 | 4.0 | 86216 | 0.0123 | 0.9981 | 0.9798 | 0.9979 | 0.9624 | 0.9812 | 0.9622 |
| 0.0 | 5.0 | 107770 | 0.0219 | 0.9978 | 0.9769 | 0.9979 | 0.9568 | 0.9783 | 0.9552 |
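The "Tpr At Fpr 0.01" column above can be read as the true-positive rate achieved at the score threshold where at most 1% of benign examples are flagged. A rough sketch of one way to compute such a figure from raw classifier scores — the exact definition used by this trainer is an assumption:

```python
def tpr_at_fpr(pos_scores, neg_scores, max_fpr=0.01):
    """Highest true-positive rate reachable while at most `max_fpr`
    of the negative (benign) scores fall above the threshold."""
    neg_sorted = sorted(neg_scores, reverse=True)
    # how many negatives we may tolerate above the threshold
    k = int(max_fpr * len(neg_sorted))
    # set the threshold at the first negative score we must exclude
    threshold = neg_sorted[k] if k < len(neg_sorted) else float("-inf")
    tp = sum(1 for s in pos_scores if s > threshold)
    return tp / len(pos_scores)
```

For example, with 100 benign scores and `max_fpr=0.01`, exactly one benign example may score above the chosen threshold.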
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
escaa/my_awesome_model
|
escaa
| 2023-06-14T23:59:46Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-13T14:19:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: escaa/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# escaa/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0804
- Validation Loss: 0.1653
- Train Accuracy: 0.9463
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2161 | 0.1680 | 0.9432 | 0 |
| 0.1254 | 0.1625 | 0.9455 | 1 |
| 0.0804 | 0.1653 | 0.9463 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sdan/simple-embeddings
|
sdan
| 2023-06-14T23:42:54Z | 5 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"region:us"
] |
feature-extraction
| 2023-06-14T23:40:03Z |
---
library_name: "transformers.js"
---
https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
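For orientation, sentence embeddings from this model family are typically produced by mean-pooling the token embeddings under the attention mask — the same pooling step whether you run the ONNX weights in Transformers.js or elsewhere. A minimal NumPy sketch of that step (names and shapes are assumptions for illustration):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1."""
    mask = attention_mask[:, None].astype(float)
    # zero out padding tokens, then average over the real tokens only
    summed = (token_embeddings * mask).sum(axis=0)
    counts = np.clip(mask.sum(), 1e-9, None)
    return summed / counts
```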
|
peteozegov/q-Taxi-v3
|
peteozegov
| 2023-06-14T23:36:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-30T22:06:48Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 8.40 +/- 2.54
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="peteozegov/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|