modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Mofe/speech-sprint-test | Mofe | 2022-02-08T18:32:00Z | 5 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6065
- Wer: 1.5484
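The checkpoint can be loaded with the standard `transformers` ASR pipeline. The sketch below is illustrative only: the audio path is a placeholder, and given the WER above 1.0 the transcriptions are not expected to be useful.
```python
from transformers import pipeline

# Hedged usage sketch: load this checkpoint with the generic ASR pipeline.
# "sample.wav" is a placeholder path, not a file shipped with this repository.
asr = pipeline("automatic-speech-recognition", model="Mofe/speech-sprint-test")
print(asr("sample.wav"))  # returns a dict like {"text": "..."}
```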
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
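For reference, these settings map roughly onto `transformers.TrainingArguments` as sketched below; the `output_dir` and any omitted defaults are assumptions, not values taken from the original run.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters (sketch only).
training_args = TrainingArguments(
    output_dir="speech-sprint-test",  # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10,
    fp16=True,  # "Native AMP"
)
```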
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug | espnet | 2022-02-08T18:13:51Z | 2 | 1 | espnet | ["espnet", "audio", "speech-translation", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- speech-translation
language: noinfo
datasets:
- iwslt22_dialect
license: cc-by-4.0
---
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug`
This model was trained by Brian Yan using the iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
```
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 12:54:12 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|pen2_st_model_valid.acc.ave|13.9|44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp_len = 36614 ref_len = 42181)|
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: true
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text.tc.en
- text
- text
- - dump/raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text.tc.en
- text
- text
- - dump/raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug | espnet | 2022-02-08T16:35:06Z | 2 | 1 | espnet | ["espnet", "audio", "automatic-speech-recognition", "dataset:iwslt22_dialect", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- iwslt22_dialect
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug`
This model was trained by Brian Yan using the iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Feb 2 05:32:30 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `99581e0f5af3ad68851d556645e7292771436df9`
- Commit date: `Sat Jan 29 11:32:38 2022 -0500`
## asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|27370|54.7|39.5|5.8|8.8|54.2|87.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|145852|84.1|7.1|8.8|11.5|27.4|87.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|64424|63.8|22.8|13.4|12.2|48.3|87.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55101
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
arredondos/my_sentence_transformer | arredondos | 2022-02-08T13:10:36Z | 6 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair from the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
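A minimal sketch of this in-batch objective, assuming normalized embeddings for the two sides of each pair; the scaling factor is an assumption, not a value from this card.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """Cross-entropy over cosine similarities, where pair (i, i) is the true pair."""
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.t() * scale          # cosine similarity of every (i, j) pair
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```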
#### Hyper parameters
We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We used the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
tesemnikov-av/rubert-ner-toxicity | tesemnikov-av | 2022-02-08T12:52:32Z | 80 | 2 | transformers | ["transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
widget:
- text: "Ну ты и придурок!!"
---
NER toxicity model
Fine-tuned from the [cointegrated/rubert-tiny-toxicity](https://huggingface.co/cointegrated/rubert-tiny-toxicity) model on data from [toxic_dataset_ner](https://huggingface.co/datasets/tesemnikov-av/toxic_dataset_ner).
Language: Russian (RU)
```python
!pip install transformers > /dev/null
from transformers import (
AutoModelForTokenClassification,
AutoTokenizer,
pipeline
)
model = AutoModelForTokenClassification.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
tokenizer = AutoTokenizer.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
pipe = pipeline(model=model, tokenizer=tokenizer, task='ner', aggregation_strategy='average')
text = "Они охриневшие там все придурки!!"
print(text)
print(pipe(text))
```
|
imfiba1991/gpt2-wikitext2 | imfiba1991 | 2022-02-08T10:53:31Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2082
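Assuming the reported loss is the mean per-token cross-entropy (the usual `Trainer` behaviour for causal language modeling), the corresponding validation perplexity can be derived as follows:
```python
import math

# exp(mean cross-entropy) = perplexity; for a loss of 7.2082 this is ≈ 1350.
print(math.exp(7.2082))
```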
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 8.1476 |
| No log | 2.0 | 26 | 7.4435 |
| No log | 3.0 | 39 | 7.2082 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
edugp/wav2vec2-xls-r-300m-cv8-es | edugp | 2022-02-08T08:57:24Z | 14 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-cv8-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-cv8-es
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2115
- eval_wer: 0.1931
- eval_runtime: 859.964
- eval_samples_per_second: 17.954
- eval_steps_per_second: 2.244
- epoch: 6.97
- step: 50000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hgharibi/wav2vec2-xls-r-300m-fa-colab | hgharibi | 2022-02-08T05:54:06Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-fa-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-fa-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4404
- Wer: 0.4402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.083 | 0.75 | 300 | 3.0037 | 1.0 |
| 1.5795 | 1.5 | 600 | 0.9167 | 0.7638 |
| 0.658 | 2.25 | 900 | 0.5737 | 0.5595 |
| 0.4213 | 3.0 | 1200 | 0.4404 | 0.4402 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final | LegolasTheElf | 2022-02-08T04:27:18Z | 6 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "Openslr Multilingual", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- Openslr Multilingual
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: Wav2Vec2_xls_r_300m_hi_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hyesunyun/NonsenseUpdateDiffStringBart
|
hyesunyun
| 2022-02-08T04:10:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"diff generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- diff generation
datasets:
- nonsense corpus
metrics:
- rouge
---
Hello! This is the pretrained BART model. It was pretrained on a nonsense summary corpus, with the output expressed as a diff.
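A hedged usage sketch, assuming the checkpoint exposes a standard BART sequence-to-sequence interface; the input string is only an illustration.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "hyesunyun/NonsenseUpdateDiffStringBart"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; the model is expected to emit a diff-style summary.
inputs = tokenizer("An example document whose update should be summarized.", return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```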
|
jgammack/SAE-distilbert-base-uncased-squad
|
jgammack
| 2022-02-08T04:03:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: SAE-distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/SAE-distilbert-base-uncased](https://huggingface.co/jgammack/SAE-distilbert-base-uncased) on the squad dataset.
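A minimal extractive question-answering sketch for this checkpoint; the question/context pair is an arbitrary example.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/SAE-distilbert-base-uncased-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="SAE-distilbert-base-uncased was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```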
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gagan3012/ViTGPT2I2A
|
gagan3012
| 2022-02-08T03:27:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-captioning",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-captioning
- generated_from_trainer
model-index:
- name: ViTGPT2I2A
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2I2A
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the vizwiz dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0708
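A hedged captioning sketch, assuming the repository bundles a ViT feature extractor and a GPT-2 tokenizer alongside the VisionEncoderDecoder weights; the image path is a placeholder.
```python
from PIL import Image
from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel

model_id = "gagan3012/ViTGPT2I2A"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "photo.jpg" is a placeholder for any RGB image.
image = Image.open("photo.jpg").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

caption_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```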
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1528 | 0.17 | 1000 | 0.0869 |
| 0.0899 | 0.34 | 2000 | 0.0817 |
| 0.084 | 0.51 | 3000 | 0.0790 |
| 0.0814 | 0.68 | 4000 | 0.0773 |
| 0.0803 | 0.85 | 5000 | 0.0757 |
| 0.077 | 1.02 | 6000 | 0.0745 |
| 0.0739 | 1.19 | 7000 | 0.0740 |
| 0.0719 | 1.37 | 8000 | 0.0737 |
| 0.0717 | 1.54 | 9000 | 0.0730 |
| 0.0731 | 1.71 | 10000 | 0.0727 |
| 0.0708 | 1.88 | 11000 | 0.0720 |
| 0.0697 | 2.05 | 12000 | 0.0717 |
| 0.0655 | 2.22 | 13000 | 0.0719 |
| 0.0653 | 2.39 | 14000 | 0.0719 |
| 0.0657 | 2.56 | 15000 | 0.0712 |
| 0.0663 | 2.73 | 16000 | 0.0710 |
| 0.0654 | 2.9 | 17000 | 0.0708 |
| 0.0645 | 3.07 | 18000 | 0.0716 |
| 0.0616 | 3.24 | 19000 | 0.0712 |
| 0.0607 | 3.41 | 20000 | 0.0712 |
| 0.0611 | 3.58 | 21000 | 0.0711 |
| 0.0615 | 3.76 | 22000 | 0.0711 |
| 0.0614 | 3.93 | 23000 | 0.0710 |
| 0.0594 | 4.1 | 24000 | 0.0716 |
| 0.0587 | 4.27 | 25000 | 0.0715 |
| 0.0574 | 4.44 | 26000 | 0.0715 |
| 0.0579 | 4.61 | 27000 | 0.0715 |
| 0.0581 | 4.78 | 28000 | 0.0715 |
| 0.0579 | 4.95 | 29000 | 0.0715 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
softcatala/wav2vec2-large-100k-voxpopuli-catala
|
softcatala
| 2022-02-08T02:20:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"speech-to-text",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- speech-to-text
license: apache-2.0
model-index:
- name: Catalan VoxPopuli Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 5.98
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.14
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 12.02
---
# Wav2Vec2-Large-100k-VoxPopuli-Català
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used and can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will produce a biased WER, as 1144 audio files of that set were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
jgammack/MTL-distilbert-base-uncased
|
jgammack
| 2022-02-07T23:23:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0874
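A short fill-mask sketch; the masked sentence is an arbitrary example using DistilBERT's `[MASK]` token.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jgammack/MTL-distilbert-base-uncased")

# Arbitrary example sentence with DistilBERT's [MASK] token.
for prediction in fill_mask("The measurement was taken inside the [MASK] chamber."):
    print(prediction["token_str"], round(prediction["score"], 3))
```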
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5593 | 1.0 | 99 | 2.3163 |
| 2.4346 | 2.0 | 198 | 2.2918 |
| 2.3377 | 3.0 | 297 | 2.2345 |
| 2.2953 | 4.0 | 396 | 2.1463 |
| 2.2296 | 5.0 | 495 | 2.1761 |
| 2.2235 | 6.0 | 594 | 2.0721 |
| 2.1878 | 7.0 | 693 | 2.1460 |
| 2.1569 | 8.0 | 792 | 2.0856 |
| 2.1455 | 9.0 | 891 | 2.1039 |
| 2.1391 | 10.0 | 990 | 2.1112 |
| 2.1056 | 11.0 | 1089 | 2.0694 |
| 2.1076 | 12.0 | 1188 | 2.0501 |
| 2.0919 | 13.0 | 1287 | 2.0484 |
| 2.0669 | 14.0 | 1386 | 2.0342 |
| 2.0595 | 15.0 | 1485 | 2.0802 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/MTL-bert-base-uncased
|
jgammack
| 2022-02-07T23:09:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4409 | 1.0 | 99 | 2.1982 |
| 2.2905 | 2.0 | 198 | 2.1643 |
| 2.1974 | 3.0 | 297 | 2.1168 |
| 2.15 | 4.0 | 396 | 2.0023 |
| 2.0823 | 5.0 | 495 | 2.0199 |
| 2.0752 | 6.0 | 594 | 1.9061 |
| 2.0408 | 7.0 | 693 | 1.9770 |
| 1.9984 | 8.0 | 792 | 1.9322 |
| 1.9933 | 9.0 | 891 | 1.9167 |
| 1.9806 | 10.0 | 990 | 1.9652 |
| 1.9436 | 11.0 | 1089 | 1.9308 |
| 1.9491 | 12.0 | 1188 | 1.9064 |
| 1.929 | 13.0 | 1287 | 1.8831 |
| 1.9096 | 14.0 | 1386 | 1.8927 |
| 1.9032 | 15.0 | 1485 | 1.9117 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/MTL-roberta-base
|
jgammack
| 2022-02-07T22:45:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MTL-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8338 | 1.0 | 98 | 1.6750 |
| 1.7732 | 2.0 | 196 | 1.6229 |
| 1.7208 | 3.0 | 294 | 1.6131 |
| 1.6917 | 4.0 | 392 | 1.5936 |
| 1.6579 | 5.0 | 490 | 1.6183 |
| 1.6246 | 6.0 | 588 | 1.6015 |
| 1.6215 | 7.0 | 686 | 1.5248 |
| 1.5743 | 8.0 | 784 | 1.5454 |
| 1.5621 | 9.0 | 882 | 1.5925 |
| 1.5652 | 10.0 | 980 | 1.5213 |
| 1.5615 | 11.0 | 1078 | 1.4845 |
| 1.5349 | 12.0 | 1176 | 1.5443 |
| 1.5165 | 13.0 | 1274 | 1.5304 |
| 1.5164 | 14.0 | 1372 | 1.4773 |
| 1.5293 | 15.0 | 1470 | 1.5537 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-roberta-base
|
jgammack
| 2022-02-07T22:14:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SAE-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9847 | 1.0 | 79 | 1.8238 |
| 1.9142 | 2.0 | 158 | 1.8299 |
| 1.8613 | 3.0 | 237 | 1.7636 |
| 1.8384 | 4.0 | 316 | 1.8048 |
| 1.8193 | 5.0 | 395 | 1.7734 |
| 1.7985 | 6.0 | 474 | 1.7271 |
| 1.7758 | 7.0 | 553 | 1.8525 |
| 1.7611 | 8.0 | 632 | 1.7716 |
| 1.7599 | 9.0 | 711 | 1.7913 |
| 1.7118 | 10.0 | 790 | 1.7578 |
| 1.7003 | 11.0 | 869 | 1.7598 |
| 1.7072 | 12.0 | 948 | 1.6942 |
| 1.6511 | 13.0 | 1027 | 1.6955 |
| 1.6802 | 14.0 | 1106 | 1.7837 |
| 1.7048 | 15.0 | 1185 | 1.7377 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
robot-test/old-clip-tokenizer
|
robot-test
| 2022-02-07T21:44:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
Old version of the CLIP fast tokenizer.
See [this issue](https://github.com/huggingface/transformers/issues/12648) on transformers.
|
nateraw/codecarbon-text-classification
|
nateraw
| 2022-02-07T20:30:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: codecarbon-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codecarbon-text-classification
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jiobiala24/wav2vec2-base-checkpoint-11.1
|
jiobiala24
| 2022-02-07T19:33:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-11.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-11.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-10](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-10) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0173
- Wer: 0.3350
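A hedged sketch of how the reported WER could be reproduced for a single clip with `jiwer`; the audio path and reference transcript are placeholders.
```python
from jiwer import wer
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiobiala24/wav2vec2-base-checkpoint-11.1")

# Placeholder clip and reference transcript.
hypothesis = asr("sample_en.wav")["text"]
reference = "THE EXPECTED TRANSCRIPT OF THE CLIP"
print("WER:", wer(reference, hypothesis))
```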
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2788 | 1.52 | 1000 | 0.5776 | 0.3410 |
| 0.2277 | 3.04 | 2000 | 0.6148 | 0.3465 |
| 0.1772 | 4.56 | 3000 | 0.6497 | 0.3497 |
| 0.1528 | 6.08 | 4000 | 0.6786 | 0.3430 |
| 0.1285 | 7.6 | 5000 | 0.6779 | 0.3489 |
| 0.1104 | 9.12 | 6000 | 0.7417 | 0.3528 |
| 0.0965 | 10.64 | 7000 | 0.7956 | 0.3477 |
| 0.0914 | 12.16 | 8000 | 0.7994 | 0.3570 |
| 0.082 | 13.68 | 9000 | 0.8690 | 0.3510 |
| 0.0788 | 15.2 | 10000 | 0.8569 | 0.3526 |
| 0.0727 | 16.72 | 11000 | 0.8885 | 0.3440 |
| 0.0656 | 18.24 | 12000 | 0.9586 | 0.3476 |
| 0.0608 | 19.76 | 13000 | 0.9317 | 0.3495 |
| 0.0588 | 21.28 | 14000 | 0.9809 | 0.3449 |
| 0.0547 | 22.8 | 15000 | 0.9552 | 0.3421 |
| 0.0519 | 24.32 | 16000 | 0.9782 | 0.3380 |
| 0.0474 | 25.84 | 17000 | 0.9923 | 0.3386 |
| 0.046 | 27.36 | 18000 | 0.9984 | 0.3347 |
| 0.045 | 28.88 | 19000 | 1.0173 | 0.3350 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
elozano/tweet_offensive_eval
|
elozano
| 2022-02-07T17:59:03Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
|
elozano/tweet_sentiment_eval
|
elozano
| 2022-02-07T17:50:59Z | 11 | 4 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "I love summer!"
example_title: "Positive"
- text: "Does anyone want to play?"
example_title: "Neutral"
- text: "This movie is just awful! 😫"
example_title: "Negative"
---
|
sukhendrasingh/finetuning-sentiment-model-3000-samples
|
sukhendrasingh
| 2022-02-07T17:20:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.879746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3323
- Accuracy: 0.8733
- F1: 0.8797
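A minimal inference sketch; note that the label names in the output depend on this checkpoint's config (they may appear as `LABEL_0`/`LABEL_1`).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sukhendrasingh/finetuning-sentiment-model-3000-samples",
)

# Arbitrary example review; labels come from the checkpoint's config.
print(classifier("This film was a pleasant surprise from start to finish."))
```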
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/cu_coquin
|
huggingtweets
| 2022-02-07T16:16:12Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cu_coquin/1644250567283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442129295477035013/15LSPrJo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Manu’</div>
<div style="text-align: center; font-size: 14px;">@cu_coquin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Manu’.
| Data | Manu’ |
| --- | --- |
| Tweets downloaded | 1982 |
| Retweets | 63 |
| Short tweets | 291 |
| Tweets kept | 1628 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jyazmuh8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cu_coquin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cu_coquin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shahukareem/wav2vec2-xls-r-300m-dv
|
shahukareem
| 2022-02-07T15:55:39Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 24.72
- name: Test CER
type: cer
value: 4.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Wer: 0.2451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9623 | 0.66 | 400 | 3.3010 | 1.0 |
| 3.2238 | 1.33 | 800 | 2.8950 | 1.0 |
| 1.1988 | 1.99 | 1200 | 0.5277 | 0.6681 |
| 0.6084 | 2.65 | 1600 | 0.4113 | 0.5831 |
| 0.4973 | 3.32 | 2000 | 0.3538 | 0.5333 |
| 0.4476 | 3.98 | 2400 | 0.3201 | 0.5081 |
| 0.3999 | 4.64 | 2800 | 0.2917 | 0.4759 |
| 0.3779 | 5.31 | 3200 | 0.2788 | 0.4672 |
| 0.3457 | 5.97 | 3600 | 0.2667 | 0.4557 |
| 0.3222 | 6.63 | 4000 | 0.2549 | 0.4452 |
| 0.3129 | 7.3 | 4400 | 0.2491 | 0.4266 |
| 0.2927 | 7.96 | 4800 | 0.2488 | 0.4246 |
| 0.2786 | 8.62 | 5200 | 0.2429 | 0.4145 |
| 0.2756 | 9.29 | 5600 | 0.2453 | 0.4150 |
| 0.258 | 9.95 | 6000 | 0.2282 | 0.4109 |
| 0.251 | 10.61 | 6400 | 0.2307 | 0.4012 |
| 0.2397 | 11.28 | 6800 | 0.2275 | 0.4 |
| 0.2312 | 11.94 | 7200 | 0.2244 | 0.3889 |
| 0.2323 | 12.6 | 7600 | 0.2247 | 0.3983 |
| 0.216 | 13.27 | 8000 | 0.2301 | 0.3863 |
| 0.2169 | 13.93 | 8400 | 0.2224 | 0.3782 |
| 0.2089 | 14.59 | 8800 | 0.2276 | 0.3771 |
| 0.2042 | 15.26 | 9200 | 0.2286 | 0.3784 |
| 0.1953 | 15.92 | 9600 | 0.2235 | 0.3822 |
| 0.1876 | 16.58 | 10000 | 0.2267 | 0.3674 |
| 0.186 | 17.25 | 10400 | 0.2295 | 0.3676 |
| 0.1847 | 17.91 | 10800 | 0.2244 | 0.3608 |
| 0.178 | 18.57 | 11200 | 0.2229 | 0.3526 |
| 0.1751 | 19.24 | 11600 | 0.2219 | 0.3483 |
| 0.17 | 19.9 | 12000 | 0.2241 | 0.3503 |
| 0.1641 | 20.56 | 12400 | 0.2187 | 0.3403 |
| 0.1629 | 21.23 | 12800 | 0.2135 | 0.3433 |
| 0.1568 | 21.89 | 13200 | 0.2117 | 0.3358 |
| 0.1585 | 22.55 | 13600 | 0.2151 | 0.3332 |
| 0.1512 | 23.22 | 14000 | 0.2097 | 0.3344 |
| 0.1427 | 23.88 | 14400 | 0.2119 | 0.3255 |
| 0.1458 | 24.54 | 14800 | 0.2209 | 0.3213 |
| 0.1413 | 25.21 | 15200 | 0.2228 | 0.3202 |
| 0.1363 | 25.87 | 15600 | 0.2071 | 0.3207 |
| 0.1302 | 26.53 | 16000 | 0.2094 | 0.3138 |
| 0.1283 | 27.2 | 16400 | 0.2193 | 0.3132 |
| 0.1278 | 27.86 | 16800 | 0.2197 | 0.3103 |
| 0.1271 | 28.52 | 17200 | 0.2133 | 0.3009 |
| 0.1243 | 29.19 | 17600 | 0.2202 | 0.3026 |
| 0.1182 | 29.85 | 18000 | 0.2092 | 0.3046 |
| 0.1171 | 30.51 | 18400 | 0.2142 | 0.2947 |
| 0.1156 | 31.18 | 18800 | 0.2219 | 0.2926 |
| 0.1129 | 31.84 | 19200 | 0.2194 | 0.2848 |
| 0.1099 | 32.5 | 19600 | 0.2218 | 0.2869 |
| 0.1045 | 33.17 | 20000 | 0.2183 | 0.2803 |
| 0.1057 | 33.83 | 20400 | 0.2242 | 0.2896 |
| 0.1056 | 34.49 | 20800 | 0.2189 | 0.2838 |
| 0.1039 | 35.16 | 21200 | 0.2256 | 0.2819 |
| 0.1007 | 35.82 | 21600 | 0.2196 | 0.2743 |
| 0.1012 | 36.48 | 22000 | 0.2218 | 0.2752 |
| 0.098 | 37.15 | 22400 | 0.2181 | 0.2721 |
| 0.0963 | 37.81 | 22800 | 0.2162 | 0.2691 |
| 0.0943 | 38.47 | 23200 | 0.2148 | 0.2686 |
| 0.0959 | 39.14 | 23600 | 0.2194 | 0.2658 |
| 0.0904 | 39.8 | 24000 | 0.2170 | 0.2641 |
| 0.0898 | 40.46 | 24400 | 0.2129 | 0.2585 |
| 0.0886 | 41.13 | 24800 | 0.2199 | 0.2606 |
| 0.088 | 41.79 | 25200 | 0.2155 | 0.2595 |
| 0.0863 | 42.45 | 25600 | 0.2169 | 0.2564 |
| 0.0876 | 43.12 | 26000 | 0.2178 | 0.2529 |
| 0.0827 | 43.78 | 26400 | 0.2171 | 0.2559 |
| 0.087 | 44.44 | 26800 | 0.2192 | 0.2530 |
| 0.0818 | 45.11 | 27200 | 0.2180 | 0.2496 |
| 0.0811 | 45.77 | 27600 | 0.2207 | 0.2502 |
| 0.0828 | 46.43 | 28000 | 0.2186 | 0.2502 |
| 0.0796 | 47.1 | 28400 | 0.2203 | 0.2468 |
| 0.0804 | 47.76 | 28800 | 0.2201 | 0.2453 |
| 0.0791 | 48.42 | 29200 | 0.2204 | 0.2477 |
| 0.0777 | 49.09 | 29600 | 0.2197 | 0.2466 |
| 0.0775 | 49.75 | 30000 | 0.2206 | 0.2451 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
willemjan/indo2
|
willemjan
| 2022-02-07T09:17:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|
Llamacha/QuBERTa
|
Llamacha
| 2022-02-07T09:14:51Z | 52 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Llamacha",
"qu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- qu
tags:
- Llamacha
---
# QuBERTa
QuBERTa is a RoBERTa-based language model for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka).
The model uses a byte-level BPE tokenizer with a vocabulary of 52,000 subword tokens.
## Usage
Once the weights and the tokenizer have been downloaded, they must be placed together in a single folder, in this case named `QuBERTa`.
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="./QuBERTa",
tokenizer="./QuBERTa"
)
```
We run a quick test, which is still being improved:
```python
fill_mask("allinllachu <mask> allinlla huk wasipita.")
```
[{'score': 0.23992203176021576,
'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',
'token': 334,
'token_str': ' nisqaqa'},
{'score': 0.061005301773548126,
'sequence': 'allinllachu, allinlla huk wasipita.',
'token': 16,
'token_str': ','},
{'score': 0.028720015659928322,
'sequence': "allinllachu' allinlla huk wasipita.",
'token': 11,
'token_str': "'"},
{'score': 0.012927944771945477,
'sequence': 'allinllachu kay allinlla huk wasipita.',
'token': 377,
'token_str': ' kay'},
{'score': 0.01230092253535986,
'sequence': 'allinllachu. allinlla huk wasipita.',
'token': 18,
'token_str': '.'}]
|
willemjan/indo1
|
willemjan
| 2022-02-07T09:14:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|
ayameRushia/wav2vec2-large-xls-r-300m-ar
|
ayameRushia
| 2022-02-07T09:03:17Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4819
- Wer: 0.4244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 11.0435 | 0.67 | 400 | 4.3104 | 1.0 |
| 3.4451 | 1.34 | 800 | 3.1566 | 1.0 |
| 3.1399 | 2.01 | 1200 | 3.0532 | 0.9990 |
| 2.8538 | 2.68 | 1600 | 1.6994 | 0.9238 |
| 1.7195 | 3.35 | 2000 | 0.8867 | 0.6727 |
| 1.326 | 4.02 | 2400 | 0.6603 | 0.5834 |
| 1.1561 | 4.69 | 2800 | 0.5809 | 0.5479 |
| 1.0764 | 5.36 | 3200 | 0.5943 | 0.5495 |
| 1.0144 | 6.03 | 3600 | 0.5344 | 0.5251 |
| 0.965 | 6.7 | 4000 | 0.4844 | 0.4936 |
| 0.927 | 7.37 | 4400 | 0.5048 | 0.5019 |
| 0.8985 | 8.04 | 4800 | 0.5809 | 0.5267 |
| 0.8684 | 8.71 | 5200 | 0.4740 | 0.4753 |
| 0.8581 | 9.38 | 5600 | 0.4813 | 0.4834 |
| 0.8334 | 10.05 | 6000 | 0.4515 | 0.4545 |
| 0.8134 | 10.72 | 6400 | 0.4370 | 0.4543 |
| 0.8002 | 11.39 | 6800 | 0.4225 | 0.4384 |
| 0.7884 | 12.06 | 7200 | 0.4593 | 0.4565 |
| 0.7675 | 12.73 | 7600 | 0.4752 | 0.4680 |
| 0.7607 | 13.4 | 8000 | 0.4950 | 0.4771 |
| 0.7475 | 14.07 | 8400 | 0.4373 | 0.4391 |
| 0.7397 | 14.74 | 8800 | 0.4506 | 0.4541 |
| 0.7289 | 15.41 | 9200 | 0.4840 | 0.4691 |
| 0.722 | 16.08 | 9600 | 0.4701 | 0.4571 |
| 0.7067 | 16.75 | 10000 | 0.4561 | 0.4461 |
| 0.7033 | 17.42 | 10400 | 0.4384 | 0.4347 |
| 0.6915 | 18.09 | 10800 | 0.4424 | 0.4290 |
| 0.6854 | 18.76 | 11200 | 0.4635 | 0.4360 |
| 0.6813 | 19.43 | 11600 | 0.4280 | 0.4147 |
| 0.6776 | 20.1 | 12000 | 0.4610 | 0.4344 |
| 0.67 | 20.77 | 12400 | 0.4540 | 0.4367 |
| 0.6653 | 21.44 | 12800 | 0.4509 | 0.4234 |
| 0.6609 | 22.11 | 13200 | 0.4874 | 0.4444 |
| 0.6541 | 22.78 | 13600 | 0.4542 | 0.4230 |
| 0.6528 | 23.45 | 14000 | 0.4732 | 0.4373 |
| 0.6463 | 24.12 | 14400 | 0.4483 | 0.4188 |
| 0.6399 | 24.79 | 14800 | 0.4731 | 0.4341 |
| 0.6353 | 25.46 | 15200 | 0.5031 | 0.4412 |
| 0.6358 | 26.13 | 15600 | 0.4986 | 0.4397 |
| 0.6317 | 26.8 | 16000 | 0.5000 | 0.4360 |
| 0.6262 | 27.47 | 16400 | 0.4958 | 0.4318 |
| 0.6317 | 28.14 | 16800 | 0.4738 | 0.4234 |
| 0.6205 | 28.81 | 17200 | 0.4853 | 0.4262 |
| 0.6205 | 29.48 | 17600 | 0.4819 | 0.4244 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bespin-global/klue-sentence-roberta-base-kornlu
|
bespin-global
| 2022-02-07T07:14:21Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:kor_nlu",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- kor_nlu
license: cc-by-nc-4.0
---
# bespin-global/klue-sentence-roberta-kornlu
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bespin-global/klue-sentence-roberta-kornlu')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bespin-global/klue-sentence-roberta-kornlu')
model = AutoModel.from_pretrained('bespin-global/klue-sentence-roberta-kornlu')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bespin-global/klue-sentence-roberta-base-kornlu)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
|
bespin-global/klue-sentence-roberta-base
|
bespin-global
| 2022-02-07T07:14:05Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:klue",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- klue
license: cc-by-nc-4.0
---
# bespin-global/klue-sentence-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bespin-global/klue-sentence-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bespin-global/klue-sentence-roberta-base')
model = AutoModel.from_pretrained('bespin-global/klue-sentence-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bespin-global/klue-sentence-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 219,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
|
gagan3012/ViTGPT2_vizwiz
|
gagan3012
| 2022-02-07T05:54:26Z | 31 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
- image-to-text
model-index:
- name: ViTGPT2_vizwiz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2_vizwiz
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1207 | 0.07 | 1000 | 0.0906 |
| 0.0916 | 0.14 | 2000 | 0.0861 |
| 0.0879 | 0.2 | 3000 | 0.0840 |
| 0.0856 | 0.27 | 4000 | 0.0822 |
| 0.0834 | 0.34 | 5000 | 0.0806 |
| 0.0817 | 0.41 | 6000 | 0.0795 |
| 0.0812 | 0.48 | 7000 | 0.0785 |
| 0.0808 | 0.55 | 8000 | 0.0779 |
| 0.0796 | 0.61 | 9000 | 0.0771 |
| 0.0786 | 0.68 | 10000 | 0.0767 |
| 0.0774 | 0.75 | 11000 | 0.0762 |
| 0.0772 | 0.82 | 12000 | 0.0758 |
| 0.0756 | 0.89 | 13000 | 0.0754 |
| 0.0759 | 0.96 | 14000 | 0.0750 |
| 0.0756 | 1.02 | 15000 | 0.0748 |
| 0.0726 | 1.09 | 16000 | 0.0745 |
| 0.0727 | 1.16 | 17000 | 0.0745 |
| 0.0715 | 1.23 | 18000 | 0.0742 |
| 0.0726 | 1.3 | 19000 | 0.0741 |
| 0.072 | 1.37 | 20000 | 0.0738 |
| 0.0723 | 1.43 | 21000 | 0.0735 |
| 0.0715 | 1.5 | 22000 | 0.0734 |
| 0.0724 | 1.57 | 23000 | 0.0732 |
| 0.0723 | 1.64 | 24000 | 0.0730 |
| 0.0718 | 1.71 | 25000 | 0.0729 |
| 0.07 | 1.78 | 26000 | 0.0728 |
| 0.0702 | 1.84 | 27000 | 0.0726 |
| 0.0704 | 1.91 | 28000 | 0.0725 |
| 0.0703 | 1.98 | 29000 | 0.0725 |
| 0.0686 | 2.05 | 30000 | 0.0726 |
| 0.0687 | 2.12 | 31000 | 0.0726 |
| 0.0688 | 2.19 | 32000 | 0.0724 |
| 0.0677 | 2.25 | 33000 | 0.0724 |
| 0.0665 | 2.32 | 34000 | 0.0725 |
| 0.0684 | 2.39 | 35000 | 0.0723 |
| 0.0678 | 2.46 | 36000 | 0.0722 |
| 0.0686 | 2.53 | 37000 | 0.0722 |
| 0.067 | 2.59 | 38000 | 0.0721 |
| 0.0669 | 2.66 | 39000 | 0.0721 |
| 0.0673 | 2.73 | 40000 | 0.0721 |
| 0.0673 | 2.8 | 41000 | 0.0720 |
| 0.0662 | 2.87 | 42000 | 0.0720 |
| 0.0681 | 2.94 | 43000 | 0.0719 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jerrychatz/wav2vec2-large-xls-r-300m-greek
|
jerrychatz
| 2022-02-07T03:06:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-greek
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-greek
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4823
- Wer: 0.3338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0106 | 1.72 | 200 | 0.5519 | 0.3537 |
| 0.0249 | 3.45 | 400 | 0.5174 | 0.3465 |
| 0.0206 | 5.17 | 600 | 0.4721 | 0.3323 |
| 0.0221 | 6.89 | 800 | 0.4652 | 0.3373 |
| 0.0204 | 8.62 | 1000 | 0.4883 | 0.3389 |
| 0.0192 | 10.34 | 1200 | 0.4785 | 0.3389 |
| 0.0186 | 12.07 | 1400 | 0.4789 | 0.3378 |
| 0.0172 | 13.79 | 1600 | 0.4915 | 0.3347 |
| 0.0184 | 15.52 | 1800 | 0.4759 | 0.3440 |
| 0.0168 | 17.24 | 2000 | 0.4891 | 0.3371 |
| 0.0155 | 18.96 | 2200 | 0.4928 | 0.3394 |
| 0.0146 | 20.69 | 2400 | 0.4834 | 0.3357 |
| 0.0146 | 22.41 | 2600 | 0.4814 | 0.3362 |
| 0.0151 | 24.14 | 2800 | 0.4791 | 0.3345 |
| 0.0136 | 25.86 | 3000 | 0.4825 | 0.3356 |
| 0.0136 | 27.58 | 3200 | 0.4850 | 0.3351 |
| 0.0127 | 29.31 | 3400 | 0.4823 | 0.3338 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lvargas/distilbert-base-uncased-finetuned-emotion2
|
lvargas
| 2022-02-07T01:36:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.903
- name: F1
type: f1
value: 0.9003235459489749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Accuracy: 0.903
- F1: 0.9003
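A minimal sketch that scores one input against every emotion label; the label names returned depend on this checkpoint's config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lvargas/distilbert-base-uncased-finetuned-emotion2",
    return_all_scores=True,
)

# Arbitrary example sentence; one score per emotion label is returned.
print(classifier("I can't believe I finally got the job!"))
```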
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 |
| 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
BigSalmon/Points2
|
BigSalmon
| 2022-02-07T00:27:54Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Converting Points or Headlines to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
```
```
Essay Intro (Sega Centers Classics): unyielding in its insistence on consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. this is a task that not even the most devoted fan could have foreseen.
***
Essay Intro (Blizzard Shows Video Games Are An Art): universally adored, video games have come to be revered not only as interactive diversions, but as artworks. a firm believer in this doctrine, blizzard actively works to further the craft of storytelling in their respective titles.
***
Essay Intro (What Happened To Linux): chancing upon a linux user is a rare occurrence in the present day. once a mainstay, the brand has come to only be seen in the hands of the most ardent of its followers.
```
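A minimal generation sketch with the `transformers` library, assuming the standard GPT-2 interface; the sampling parameters are illustrative only:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points2")

# Prompt in the "points -> Text:" format shown above
prompt = """###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```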
|
fractalego/personal-speech-to-text-model
|
fractalego
| 2022-02-06T22:32:50Z | 52 | 6 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
# Personal speech to text model
Speech-to-text models often do not understand my accent, so I fine-tuned this one from "facebook/wav2vec2-large-robust-ft-swbd-300h" using about 1000 recordings of my voice.
Do not download unless you have exactly my accent.
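A minimal usage sketch with the `transformers` ASR pipeline; the audio file name is hypothetical and should point to a 16 kHz mono recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="fractalego/personal-speech-to-text-model")
print(asr("recording.wav"))  # hypothetical 16 kHz mono WAV file
```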
|
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW
|
StevenLimcorn
| 2022-02-06T21:57:14Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- zh-TW
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.8594
- Cer: 0.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 |
| 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 |
| 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 |
| 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 |
| 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 |
| 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 |
| 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 |
| 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 |
| 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 |
| 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 |
| 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 |
| 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 |
| 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 |
| 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 |
| 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 |
| 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 |
| 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 |
| 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 |
| 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 |
| 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 |
| 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 |
| 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 |
| 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 |
| 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 |
| 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 |
| 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 |
| 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 |
| 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 |
| 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 |
| 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 |
| 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 |
| 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 |
| 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 |
| 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 |
| 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 |
| 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 |
| 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 |
| 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 |
| 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm
|
anuragshas
| 2022-02-06T16:11:16Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6693
- Wer: 0.5921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.9504 | 18.18 | 400 | 4.6730 | 1.0 |
| 3.3766 | 36.36 | 800 | 3.3464 | 1.0 |
| 3.1128 | 54.55 | 1200 | 3.0177 | 0.9980 |
| 1.7966 | 72.73 | 1600 | 0.8733 | 0.8039 |
| 1.4085 | 90.91 | 2000 | 0.5555 | 0.6458 |
| 1.1731 | 109.09 | 2400 | 0.4930 | 0.6438 |
| 1.0271 | 127.27 | 2800 | 0.4780 | 0.6093 |
| 0.9045 | 145.45 | 3200 | 0.4647 | 0.6578 |
| 0.807 | 163.64 | 3600 | 0.4505 | 0.5925 |
| 0.741 | 181.82 | 4000 | 0.4746 | 0.6025 |
| 0.6706 | 200.0 | 4400 | 0.5004 | 0.5844 |
| 0.6186 | 218.18 | 4800 | 0.4984 | 0.5997 |
| 0.5508 | 236.36 | 5200 | 0.5298 | 0.5636 |
| 0.5123 | 254.55 | 5600 | 0.5410 | 0.5110 |
| 0.4623 | 272.73 | 6000 | 0.5591 | 0.5383 |
| 0.4281 | 290.91 | 6400 | 0.5775 | 0.5600 |
| 0.4045 | 309.09 | 6800 | 0.5924 | 0.5580 |
| 0.3651 | 327.27 | 7200 | 0.5671 | 0.5684 |
| 0.343 | 345.45 | 7600 | 0.6083 | 0.5945 |
| 0.3085 | 363.64 | 8000 | 0.6243 | 0.5728 |
| 0.2941 | 381.82 | 8400 | 0.6245 | 0.5580 |
| 0.2735 | 400.0 | 8800 | 0.6458 | 0.5804 |
| 0.262 | 418.18 | 9200 | 0.6566 | 0.5824 |
| 0.2578 | 436.36 | 9600 | 0.6558 | 0.5965 |
| 0.2388 | 454.55 | 10000 | 0.6598 | 0.5993 |
| 0.2328 | 472.73 | 10400 | 0.6700 | 0.6041 |
| 0.2286 | 490.91 | 10800 | 0.6684 | 0.5957 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
asalics/distilbert-base-uncased-finetuned-emotion
|
asalics
| 2022-02-06T14:29:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9244145121183605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 |
| 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Mahalakshmi/wav2vec2-xls-r-300m-demo-colab
|
Mahalakshmi
| 2022-02-06T13:51:42Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jejomi/xls-r-ta
|
jejomi
| 2022-02-06T11:34:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TA dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Jeevesh8/feather_berts1
|
Jeevesh8
| 2022-02-06T04:52:40Z | 0 | 0 | null |
[
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04Z |
The second 50 [Feather BERTs](https://arxiv.org/abs/1911.02969), compressed in groups of 10.
Clone this repository, decompress the archives, and pass the path of the Feather BERT you want to use to ``.from_pretrained()``.
To download the first 50 Feather BERTs, see [here](https://huggingface.co/Jeevesh8/feather_berts/).
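A minimal loading sketch; the local folder name below is hypothetical (it depends on how the archives decompress), and `AutoModel` is used as a safe default since the exact head class depends on how the checkpoints were saved.
```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical local path to one decompressed Feather BERT checkpoint
checkpoint_path = "./feather_berts1/feather_bert_51"

model = AutoModel.from_pretrained(checkpoint_path)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # Feather BERTs are BERT-base variants
```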
|
DrishtiSharma/wav2vec2-xls-r-pa-IN-a1
|
DrishtiSharma
| 2022-02-05T21:58:25Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1508
- Wer: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 |
| 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 |
| 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 |
| 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 |
| 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 |
| 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 |
| 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 |
| 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 |
| 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 |
| 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
pritamdeka/PubMedBert-fulltext-cord19
|
pritamdeka
| 2022-02-05T20:56:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:pritamdeka/cord-19-fulltext",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-fulltext
metrics:
- accuracy
model-index:
- name: pubmedbert-fulltext-cord19
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: pritamdeka/cord-19-fulltext
type: pritamdeka/cord-19-fulltext
args: fulltext
metrics:
- name: Accuracy
type: accuracy
value: 0.7175316733550737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-fulltext-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the pritamdeka/cord-19-fulltext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2667
- Accuracy: 0.7175
## Model description
The model was trained with a maximum of 300K training samples and 25K evaluation samples due to GPU limitations.
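A minimal fill-mask sketch; the example sentence is illustrative only:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pritamdeka/PubMedBert-fulltext-cord19")
print(fill_mask("Coronaviruses are enveloped [MASK] viruses."))
```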
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7985 | 0.27 | 5000 | 1.2710 | 0.7176 |
| 1.7542 | 0.53 | 10000 | 1.3359 | 0.7070 |
| 1.7462 | 0.8 | 15000 | 1.3489 | 0.7034 |
| 1.8371 | 1.07 | 20000 | 1.4361 | 0.6891 |
| 1.7102 | 1.33 | 25000 | 1.3502 | 0.7039 |
| 1.6596 | 1.6 | 30000 | 1.3341 | 0.7065 |
| 1.6265 | 1.87 | 35000 | 1.3228 | 0.7087 |
| 1.605 | 2.13 | 40000 | 1.3079 | 0.7099 |
| 1.5731 | 2.4 | 45000 | 1.2986 | 0.7121 |
| 1.5602 | 2.67 | 50000 | 1.2929 | 0.7136 |
| 1.5447 | 2.93 | 55000 | 1.2875 | 0.7143 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/bouncemanautumn
|
huggingtweets
| 2022-02-05T20:35:09Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bouncemanautumn/1644093304436/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466500150759763979/_SP07dAh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">autumn wants to hold ty’s hand</div>
<div style="text-align: center; font-size: 14px;">@bouncemanautumn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from autumn wants to hold ty’s hand.
| Data | autumn wants to hold ty’s hand |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 195 |
| Short tweets | 434 |
| Tweets kept | 2616 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16mq5may/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bouncemanautumn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bouncemanautumn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
infinitejoy/wav2vec2-large-xls-r-300m-odia-cv8
|
infinitejoy
| 2022-02-05T18:24:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"or",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-odia-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-odia-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8176
- Wer: 0.5818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3957 | 20.83 | 500 | 1.0925 | 0.8111 |
| 1.0351 | 41.67 | 1000 | 0.7837 | 0.6574 |
| 0.7396 | 62.5 | 1500 | 0.7674 | 0.6083 |
| 0.5385 | 83.33 | 2000 | 0.8015 | 0.5812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
transformersbook/xlm-roberta-base-finetuned-panx-de
|
transformersbook
| 2022-02-05T17:07:41Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8645910410381922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1388
- F1: 0.8646
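A minimal NER sketch with the token-classification pipeline; the sentence is an illustrative German example:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="transformersbook/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```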
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2652 | 1.0 | 525 | 0.1602 | 0.8230 |
| 0.1314 | 2.0 | 1050 | 0.1372 | 0.8527 |
| 0.0806 | 3.0 | 1575 | 0.1388 | 0.8646 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/xlm-roberta-base-finetuned-panx-it
|
transformersbook
| 2022-02-05T17:07:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8215158924205379
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2445
- F1: 0.8215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 |
| 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 |
| 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/codeparrot-small-vocabulary
|
transformersbook
| 2022-02-05T17:00:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# CodeParrot
This is a small version of the CodeParrot tokenizer trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The tokenizer is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
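A minimal tokenization sketch, assuming the tokenizer loads through the standard `AutoTokenizer` interface; the code snippet is illustrative:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("transformersbook/codeparrot-small-vocabulary")

code = "def add(a, b):\n    return a + b"
print(tokenizer.tokenize(code))  # inspect how Python code is split into tokens
```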
|
transformersbook/distilbert-base-uncased-distilled-clinc
|
transformersbook
| 2022-02-05T16:47:39Z | 199 | 3 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9393548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) fine-tuned with knowledge distillation on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9031 | 1.0 | 318 | 0.5745 | 0.7365 |
| 0.4481 | 2.0 | 636 | 0.2856 | 0.8748 |
| 0.2528 | 3.0 | 954 | 0.1798 | 0.9187 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9294 |
| 0.1416 | 5.0 | 1590 | 0.1211 | 0.9348 |
| 0.1243 | 6.0 | 1908 | 0.1116 | 0.9348 |
| 0.1133 | 7.0 | 2226 | 0.1062 | 0.9377 |
| 0.1075 | 8.0 | 2544 | 0.1035 | 0.9387 |
| 0.1039 | 9.0 | 2862 | 0.1014 | 0.9381 |
| 0.1018 | 10.0 | 3180 | 0.1005 | 0.9394 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
transformersbook/distilbert-base-uncased-finetuned-clinc
|
transformersbook
| 2022-02-05T16:46:21Z | 100 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2923 | 1.0 | 318 | 3.2893 | 0.7423 |
| 2.6307 | 2.0 | 636 | 1.8837 | 0.8281 |
| 1.5483 | 3.0 | 954 | 1.1583 | 0.8968 |
| 1.0153 | 4.0 | 1272 | 0.8618 | 0.9094 |
| 0.7958 | 5.0 | 1590 | 0.7773 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
|
transformersbook/bert-base-uncased-finetuned-clinc
|
transformersbook
| 2022-02-05T16:38:54Z | 922 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1909.02027",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Intent Detection with BERT
This model was trained on the [CLINC150](https://arxiv.org/abs/1909.02027) dataset for customer intent detection. The dataset can be found on the [Hub](https://huggingface.co/datasets/clinc_oos). The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
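A minimal usage sketch with the text-classification pipeline; the query is an illustrative example:
```python
from transformers import pipeline

intent_classifier = pipeline("text-classification",
                             model="transformersbook/bert-base-uncased-finetuned-clinc")
print(intent_classifier("Hey, I'd like to rent a vehicle from Nov 1st to Nov 15th in Paris"))
```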
|
transformersbook/codeparrot-small
|
transformersbook
| 2022-02-05T16:28:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# CodeParrot
CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
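A minimal generation sketch with the text-generation pipeline; the prompt and generation length are illustrative:
```python
from transformers import pipeline

generation = pipeline("text-generation", model="transformersbook/codeparrot-small")
print(generation("def fibonacci(n):", max_new_tokens=32)[0]["generated_text"])
```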
|
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-05T11:39:58Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: distilbert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HarrisDePerceptron/xls-r-300m-ur-cv7
|
HarrisDePerceptron
| 2022-02-05T11:21:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2924
- Wer: 0.7201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.2783 | 4.17 | 100 | 4.6409 | 1.0 |
| 3.5578 | 8.33 | 200 | 3.1649 | 1.0 |
| 3.1279 | 12.5 | 300 | 3.0335 | 1.0 |
| 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 |
| 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 |
| 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 |
| 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 |
| 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 |
| 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 |
| 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 |
| 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 |
| 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 |
| 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 |
| 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 |
| 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 |
| 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 |
| 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 |
| 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 |
| 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 |
| 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 |
| 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 |
| 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 |
| 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 |
| 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 |
| 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 |
| 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 |
| 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 |
| 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 |
| 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 |
| 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 |
| 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 |
| 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 |
| 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 |
| 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 |
| 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 |
| 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 |
| 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 |
| 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 |
| 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 |
| 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 |
| 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 |
| 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 |
| 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 |
| 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 |
| 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 |
| 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 |
| 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 |
| 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
omoekan/opus-tatoeba-eng-yor
|
omoekan
| 2022-02-05T10:15:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
## OPUS Tatoeba English-Yoruba
This model was obtained by running the script convert_marian_to_pytorch.py with the flag -m eng-yor. The original models were trained by Jörg Tiedemann using the MarianNMT library. See all available MarianMTModel models on the profile of the Helsinki NLP group.
---
- tags: translation
- source language: English
- target language: Yoruba
- dataset: opus+bt
- model: transformer-align
- pre-processing: normalization + SentencePiece (spm12k,spm12k)
- download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.zip)
- test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.test.txt)
- test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.eval.txt)
- benchmarks:

|test set|BLEU|chr-F|
|:---|:---|:---|
|Tatoeba-test.eng-yor|13.0|0.333|
---
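A minimal translation sketch, assuming the converted checkpoint follows the standard MarianMT interface; the input sentence is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "omoekan/opus-tatoeba-eng-yor"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```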
|
jinlmsft/t5-large-multiwoz
|
jinlmsft
| 2022-02-04T23:08:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-multiwoz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-multiwoz
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0064
- Acc: 1.0
- True Num: 56671
- Num: 56776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num |
|:-------------:|:-----:|:----:|:---------------:|:----:|:--------:|:-----:|
| 0.1261 | 1.13 | 1000 | 0.0933 | 0.98 | 55574 | 56776 |
| 0.0951 | 2.25 | 2000 | 0.0655 | 0.98 | 55867 | 56776 |
| 0.0774 | 3.38 | 3000 | 0.0480 | 0.99 | 56047 | 56776 |
| 0.0584 | 4.51 | 4000 | 0.0334 | 0.99 | 56252 | 56776 |
| 0.042 | 5.64 | 5000 | 0.0222 | 0.99 | 56411 | 56776 |
| 0.0329 | 6.76 | 6000 | 0.0139 | 1.0 | 56502 | 56776 |
| 0.0254 | 7.89 | 7000 | 0.0094 | 1.0 | 56626 | 56776 |
| 0.0214 | 9.02 | 8000 | 0.0070 | 1.0 | 56659 | 56776 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
mrm8488/roberta-base-bne-finetuned-sqac-retriever
|
mrm8488
| 2022-02-04T17:59:07Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 939 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 93,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/loverachelle2
|
huggingtweets
| 2022-02-04T17:51:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/loverachelle2/1643997109994/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1371211513323749377/ABF4NRhC_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LoveRachelle2</div>
<div style="text-align: center; font-size: 14px;">@loverachelle2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LoveRachelle2.
| Data | LoveRachelle2 |
| --- | --- |
| Tweets downloaded | 1440 |
| Retweets | 102 |
| Short tweets | 92 |
| Tweets kept | 1246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1liqzipo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @loverachelle2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/284b8u8q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/loverachelle2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
samx18/demo
|
samx18
| 2022-02-04T17:23:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Dummy
This is a dummy model for testing - do not use
|
dkurt/wav2vec2-base-ft-keyword-spotting-int8
|
dkurt
| 2022-02-04T16:40:37Z | 7 | 2 |
transformers
|
[
"transformers",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
[anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
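A minimal audio-classification sketch for the floating-point baseline using the standard `transformers` pipeline; loading the int8 OpenVINO weights is assumed to require the Optimum OpenVINO loaders from the linked repository instead.
```python
from transformers import pipeline

# Baseline (non-quantized) keyword-spotting model; the int8 variant is assumed
# to need the Optimum OpenVINO model classes rather than this vanilla pipeline.
classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-ft-keyword-spotting")
print(classifier("keyword.wav"))  # hypothetical 16 kHz mono recording
```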
|
Rolv-Arild/xls-r-300m-npsc-4
|
Rolv-Arild
| 2022-02-04T16:36:33Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
|
shreyasgite/wav2vec2-large-xls-r-300m-dm32
|
shreyasgite
| 2022-02-04T14:53:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xls-r-300m-dm32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dm32
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5688
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 22
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.41 | 34 | 0.6769 | 0.6458 |
| No log | 4.83 | 68 | 0.6864 | 0.5208 |
| No log | 7.28 | 102 | 0.6596 | 0.6042 |
| 0.7106 | 9.69 | 136 | 0.6208 | 0.6875 |
| 0.7106 | 12.14 | 170 | 0.6152 | 0.6875 |
| 0.7106 | 14.55 | 204 | 0.6167 | 0.6875 |
| 0.6464 | 16.97 | 238 | 0.5782 | 0.7708 |
| 0.6464 | 19.41 | 272 | 0.6011 | 0.7292 |
| 0.6464 | 21.83 | 306 | 0.5688 | 0.7917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cahya/wav2vec2-base-turkish-cv8
|
cahya
| 2022-02-04T14:30:19Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-1000](https://huggingface.co/./checkpoint-1000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.2836
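Transcription with this checkpoint can be sketched with the standard `transformers` pipeline API (a minimal example; it assumes 16 kHz mono input, and the file path is a placeholder):
```python
from transformers import pipeline

# A minimal sketch: transcribe a 16 kHz Turkish audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="cahya/wav2vec2-base-turkish-cv8")
print(asr("speech_tr.wav")["text"])  # "speech_tr.wav" is a placeholder path
```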
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 |
| 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 |
| 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 |
| 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 |
| 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 |
| 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 |
| 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 |
| 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 |
| 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 |
| 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 |
| 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 |
| 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 |
| 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 |
| 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 |
| 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 |
| 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 |
| 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 |
| 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 |
| 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 |
| 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 |
| 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 |
| 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 |
| 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 |
| 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 |
| 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 |
| 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 |
| 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 |
| 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 |
| 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 |
| 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 |
| 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 |
| 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 |
| 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 |
| 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 |
| 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 |
| 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 |
| 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 |
| 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 |
| 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 |
| 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 |
| 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 |
| 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 |
| 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 |
| 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 |
| 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 |
| 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 |
| 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 |
| 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 |
| 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 |
| 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 |
| 0.2363 | 77.86 | 10200 | 0.3251 | 0.3144 |
| 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 |
| 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 |
| 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 |
| 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 |
| 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 |
| 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 |
| 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 |
| 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 |
| 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 |
| 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 |
| 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 |
| 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 |
| 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 |
| 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Language-Media-Lab/mt5-small-ain-jpn-mt
|
Language-Media-Lab
| 2022-02-04T13:20:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-ain-jpn-mt is a machine translation model initialized from [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
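A minimal usage sketch, assuming the checkpoint follows the standard `transformers` seq2seq interface and accepts plain Ainu text without a task prefix (the input sentence is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# A minimal sketch: translate an Ainu sentence into Japanese with this checkpoint.
model_name = "Language-Media-Lab/mt5-small-ain-jpn-mt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("irankarapte", return_tensors="pt")  # placeholder Ainu input
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```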
|
Language-Media-Lab/byt5-small-ain-jpn-mt
|
Language-Media-Lab
| 2022-02-04T13:03:14Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ain",
"ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- ain
- ja
tags:
- translation
---
Byt5-small-ain-jpn-mt is a machine translation model initialized from [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
|
Plim/xls-r-1b-fr
|
Plim
| 2022-02-04T11:45:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2464
- Wer: 0.2220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 |
| 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 |
| 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 |
| 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 |
| 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 |
| 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 |
| 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 |
| 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 |
| 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 |
| 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 |
| 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 |
| 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 |
| 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 |
| 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 |
| 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Yanzhu/bertweetfr_offensiveness
|
Yanzhu
| 2022-02-04T11:42:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
French RoBERTa-base model fine-tuned for Offensive Language Identification on COVID-19 tweets.
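A minimal usage sketch, assuming the checkpoint is stored in the standard `transformers` sequence-classification format (the label names it returns are not documented here, and the input tweet is a placeholder):
```python
from transformers import pipeline

# A minimal sketch: score a French tweet for offensive language with this checkpoint.
classifier = pipeline("text-classification", model="Yanzhu/bertweetfr_offensiveness")
print(classifier("Exemple de tweet en français."))  # placeholder input
```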
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1
|
Subhashini17
| 2022-02-04T11:14:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ai-forever/bert-base-NER-reptile-5-datasets
|
ai-forever
| 2022-02-04T10:51:07Z | 38 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"PyTorch",
"en",
"dataset:conll2003",
"dataset:wnut_17",
"dataset:jnlpba",
"dataset:conll2012",
"dataset:BTC",
"dataset:dfki-nlp/few-nerd",
"arxiv:2010.02405",
"model-index",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
inference: false
pipeline_tag: false
datasets:
- conll2003
- wnut_17
- jnlpba
- conll2012
- BTC
- dfki-nlp/few-nerd
tags:
- PyTorch
model-index:
- name: "bert-base-NER-reptile-5-datasets"
results:
- task:
name: few-shot-ner
type: named-entity-recognition
dataset:
name: few-nerd-inter
type: named-entity-recognition
metrics:
- name: 5 way 1~2 shot
type: f1
value: 56.12
- name: 5-way 5~10-shot
type: f1
value: 62.7
- name: 10-way 1~2-shot
type: f1
value: 50.3
- name: 10-way 5~10-shot
type: f1
value: 58.82
---
# BERT base uncased model pre-trained on 5 NER datasets
The model was trained by _SberIDP_. The pretraining process and technical details are described [in this article](https://habr.com/ru/company/sberbank/blog/649609/).
* Task: Named Entity Recognition
* Base model: [bert-base-uncased](https://huggingface.co/bert-base-uncased)
* Training Data is 5 datasets: [CoNLL-2003](https://aclanthology.org/W03-0419.pdf), [WNUT17](http://noisy-text.github.io/2017/emerging-rare-entities.html), [JNLPBA](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004), [CoNLL-2012 (OntoNotes)](https://aclanthology.org/W12-4501.pdf), [BTC](https://www.derczynski.com/papers/btc.pdf)
* Testing was made in Few-Shot scenario on [Few-NERD dataset](https://github.com/thunlp/Few-NERD) using the model as a backbone for [StructShot](https://arxiv.org/abs/2010.02405)
The model is pretrained for the NER task using [Reptile](https://openai.com/blog/reptile/) and can be fine-tuned for new entities with only a small number of samples.
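A minimal loading sketch for such few-shot fine-tuning with `transformers` (the `new_labels` set below is a hypothetical example, not part of the released model):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# A minimal sketch: load the pretrained backbone with a fresh head for a new entity set.
model_name = "ai-forever/bert-base-NER-reptile-5-datasets"
new_labels = ["O", "B-PRODUCT", "I-PRODUCT"]  # hypothetical target entities

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(new_labels),
    ignore_mismatched_sizes=True,  # re-initialize the head if the stored one has a different size
)
# `model` can now be fine-tuned on a small number of annotated samples.
```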
|
yohida/yoshida_gpt
|
yohida
| 2022-02-04T10:13:45Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt2",
"text-generation",
"ja",
"japanese",
"gpt",
"lm",
"nlp",
"dataset:cc100",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- gpt
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: "西田幾多郎は、"
---
# japanese-gpt-1b

This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")
if torch.cuda.is_available():
model = model.to("cuda")
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_length=100,
min_length=100,
do_sample=True,
top_k=500,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
bad_words_ids=[[tokenizer.unk_token_id]]
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
~~~~
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
huggingtweets/dril-drilbot_neo-jril_bot
|
huggingtweets
| 2022-02-04T09:52:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dril-drilbot_neo-jril_bot/1643968320729/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468502340634296326/gbl8-ltv_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Jril & wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@dril-drilbot_neo-jril_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Jril & wintbot_neo.
| Data | wint | Jril | wintbot_neo |
| --- | --- | --- | --- |
| Tweets downloaded | 3228 | 113 | 3241 |
| Retweets | 475 | 0 | 315 |
| Short tweets | 305 | 0 | 453 |
| Tweets kept | 2448 | 113 | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27nmrlyy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-drilbot_neo-jril_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-drilbot_neo-jril_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MaggieXM/deberta-base-finetuned-squad
|
MaggieXM
| 2022-02-04T09:41:38Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: deberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.0001
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.0 | 2 | 5.3843 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/dril-heroicvillain95
|
huggingtweets
| 2022-02-04T08:49:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1402535431523217411/h07KN7VS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & casually Jesse</div>
<div style="text-align: center; font-size: 14px;">@dril-heroicvillain95</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & casually Jesse.
| Data | wint | casually Jesse |
| --- | --- | --- |
| Tweets downloaded | 3228 | 2663 |
| Retweets | 475 | 133 |
| Short tweets | 305 | 353 |
| Tweets kept | 2448 | 2177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3u36b2x8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-heroicvillain95's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-heroicvillain95')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Mapcar/pegasus-samsum
|
Mapcar
| 2022-02-04T03:27:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
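Dialogue summarization with this checkpoint can be sketched as follows (a minimal example; the dialogue is a placeholder in the SAMSum style):
```python
from transformers import pipeline

# A minimal sketch: summarize a SAMSum-style chat dialogue with this checkpoint.
summarizer = pipeline("summarization", model="Mapcar/pegasus-samsum")

dialogue = (
    "Hannah: Hey, do you have Betty's number?\n"
    "Amanda: Let me check... sorry, I can't find it.\n"
    "Hannah: Ok, I'll ask Larry then."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```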
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SkelterLabsInc/bert-base-japanese-jaquad
|
SkelterLabsInc
| 2022-02-04T02:39:25Z | 87 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"extractive-qa",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"arxiv:2202.01764",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-3.0
language: ja
tags:
- question-answering
- extractive-qa
pipeline_tag:
- None
datasets:
- SkelterLabsInc/JaQuAD
metrics:
- Exact match
- F1 score
---
# BERT base Japanese - JaQuAD
## Description
A Japanese Question Answering model fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD).
Please refer to [BERT base Japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) for details about the pretrained base model.
The code for fine-tuning is available at [SkelterLabsInc/JaQuAD](https://github.com/SkelterLabsInc/JaQuAD).
## Evaluation results
On the development set.
```shell
{"f1": 77.35, "exact_match": 61.01}
```
On the test set.
```shell
{"f1": 78.92, "exact_match": 63.38}
```
## Usage
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?'
context = 'アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。世界初の実用的電話の発明で知られている。'
model = AutoModelForQuestionAnswering.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
tokenizer = AutoTokenizer.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
inputs = tokenizer(
question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score.
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score.
# 1 is added to `answer_end` because the index pointed by score is inclusive.
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
# answer = 'スコットランド'
```
## License
The fine-tuned model is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
## Citation
```bibtex
@misc{so2022jaquad,
title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}},
author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho},
year={2022},
eprint={2202.01764},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ghofrani/common7
|
ghofrani
| 2022-02-04T01:32:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fa",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fa
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: common7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# common7
This model is a fine-tuned version of [common7/checkpoint-18500](https://huggingface.co/common7/checkpoint-18500) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3448
- Wer: 0.3478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 150.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.957 | 3.29 | 500 | 2.9503 | 1.0 |
| 1.7225 | 6.58 | 1000 | 0.8860 | 0.7703 |
| 1.4907 | 9.86 | 1500 | 0.6555 | 0.6673 |
| 1.4177 | 13.16 | 2000 | 0.5784 | 0.6076 |
| 1.3425 | 16.45 | 2500 | 0.5379 | 0.5718 |
| 1.33 | 19.73 | 3000 | 0.4962 | 0.5245 |
| 1.4378 | 23.03 | 3500 | 0.4699 | 0.5098 |
| 1.1894 | 26.31 | 4000 | 0.4527 | 0.4848 |
| 1.1844 | 29.6 | 4500 | 0.4309 | 0.4651 |
| 1.1795 | 32.89 | 5000 | 0.4131 | 0.4524 |
| 1.1471 | 36.18 | 5500 | 0.4052 | 0.4435 |
| 1.1337 | 39.47 | 6000 | 0.3927 | 0.4363 |
| 1.1896 | 42.76 | 6500 | 0.3811 | 0.4254 |
| 1.1847 | 46.05 | 7000 | 0.3855 | 0.4129 |
| 0.9954 | 49.34 | 7500 | 0.3729 | 0.3981 |
| 1.0293 | 52.63 | 8000 | 0.3637 | 0.4014 |
| 1.0224 | 55.92 | 8500 | 0.3578 | 0.3885 |
| 1.012 | 59.21 | 9000 | 0.3629 | 0.3930 |
| 1.0772 | 62.5 | 9500 | 0.3635 | 0.3906 |
| 1.0344 | 65.79 | 10000 | 0.3469 | 0.3771 |
| 0.9457 | 69.08 | 10500 | 0.3435 | 0.3735 |
| 0.9307 | 72.37 | 11000 | 0.3519 | 0.3762 |
| 0.9523 | 75.65 | 11500 | 0.3443 | 0.3666 |
| 0.9523 | 78.94 | 12000 | 0.3502 | 0.3757 |
| 0.9475 | 82.24 | 12500 | 0.3509 | 0.3643 |
| 0.9971 | 85.52 | 13000 | 0.3502 | 0.3626 |
| 0.9058 | 88.81 | 13500 | 0.3472 | 0.3605 |
| 0.8922 | 92.1 | 14000 | 0.3530 | 0.3618 |
| 0.9 | 95.39 | 14500 | 0.3500 | 0.3574 |
| 0.9051 | 98.68 | 15000 | 0.3456 | 0.3535 |
| 0.9304 | 101.97 | 15500 | 0.3438 | 0.3578 |
| 0.9433 | 105.26 | 16000 | 0.3396 | 0.3530 |
| 0.8988 | 108.55 | 16500 | 0.3436 | 0.3539 |
| 0.8789 | 111.84 | 17000 | 0.3426 | 0.3516 |
| 0.8667 | 115.13 | 17500 | 0.3438 | 0.3506 |
| 0.8895 | 118.42 | 18000 | 0.3434 | 0.3503 |
| 0.8888 | 121.71 | 18500 | 0.3425 | 0.3494 |
| 0.9453 | 125.0 | 19000 | 0.3415 | 0.3480 |
| 0.9267 | 128.29 | 19500 | 0.3477 | 0.3503 |
| 0.8315 | 131.58 | 20000 | 0.3476 | 0.3505 |
| 0.8542 | 134.86 | 20500 | 0.3475 | 0.3506 |
| 0.8478 | 138.16 | 21000 | 0.3430 | 0.3481 |
| 0.8643 | 141.45 | 21500 | 0.3451 | 0.3485 |
| 0.8705 | 144.73 | 22000 | 0.3444 | 0.3474 |
| 0.9869 | 148.03 | 22500 | 0.3441 | 0.3493 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
am-shb/bert-base-multilingual-cased-finetuned
|
am-shb
| 2022-02-03T21:59:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: '57426955'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57426955
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4779
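A minimal usage sketch for the fine-tuned masked-language model (assuming the standard `[MASK]` token of multilingual BERT; the example sentence is a placeholder):
```python
from transformers import pipeline

# A minimal sketch: query the fine-tuned multilingual masked-language model.
fill_mask = pipeline("fill-mask", model="am-shb/bert-base-multilingual-cased-finetuned")
for prediction in fill_mask("Paris is the capital of [MASK]."):
    print(prediction["token_str"], prediction["score"])
```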
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
hyesunyun/NonsenseUpdateDiffIntBart
|
hyesunyun
| 2022-02-03T17:14:33Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"diff generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- diff generation
datasets:
- nonsense corpus
metrics:
- rouge
---
Hello! This is the pretrained BART model. The dataset used for pretraining is a nonsense summary corpus in which the output is represented as a diff.
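A minimal usage sketch, assuming the checkpoint follows the standard `transformers` seq2seq interface; the expected input format and the diff output convention are not documented here, so the input below is only a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# A minimal sketch: generate a diff-style summary with this pretrained BART checkpoint.
model_name = "hyesunyun/NonsenseUpdateDiffIntBart"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("placeholder source document", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```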
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3
|
tomascufaro
| 2022-02-03T15:57:54Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"robust-speech-event",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- "es"
- "robust-speech-event"
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small-v3
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Wer: 0.1980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2372 | 0.26 | 400 | 0.3011 | 0.2660 |
| 0.3413 | 0.53 | 800 | 0.3559 | 0.3228 |
| 0.3598 | 0.79 | 1200 | 0.3753 | 0.3400 |
| 0.3529 | 1.05 | 1600 | 0.3385 | 0.2979 |
| 0.3133 | 1.32 | 2000 | 0.3559 | 0.3056 |
| 0.3158 | 1.58 | 2400 | 0.3364 | 0.2994 |
| 0.3092 | 1.85 | 2800 | 0.3210 | 0.2876 |
| 0.2919 | 2.11 | 3200 | 0.3460 | 0.3010 |
| 0.2666 | 2.37 | 3600 | 0.3543 | 0.3036 |
| 0.2819 | 2.64 | 4000 | 0.3477 | 0.2959 |
| 0.283 | 2.9 | 4400 | 0.3492 | 0.2968 |
| 0.2484 | 3.16 | 4800 | 0.3647 | 0.2993 |
| 0.2371 | 3.43 | 5200 | 0.3601 | 0.2942 |
| 0.2382 | 3.69 | 5600 | 0.3656 | 0.3019 |
| 0.2425 | 3.96 | 6000 | 0.3379 | 0.2873 |
| 0.2092 | 4.22 | 6400 | 0.3385 | 0.2736 |
| 0.2171 | 4.48 | 6800 | 0.3503 | 0.2889 |
| 0.2185 | 4.75 | 7200 | 0.3289 | 0.2727 |
| 0.2236 | 5.01 | 7600 | 0.3447 | 0.2771 |
| 0.1882 | 5.27 | 8000 | 0.3586 | 0.2860 |
| 0.1986 | 5.54 | 8400 | 0.3404 | 0.2829 |
| 0.2055 | 5.8 | 8800 | 0.3561 | 0.2869 |
| 0.196 | 6.06 | 9200 | 0.3633 | 0.2811 |
| 0.1748 | 6.33 | 9600 | 0.3703 | 0.2818 |
| 0.1758 | 6.59 | 10000 | 0.3525 | 0.2816 |
| 0.1819 | 6.86 | 10400 | 0.3581 | 0.2765 |
| 0.1715 | 7.12 | 10800 | 0.3480 | 0.2628 |
| 0.1606 | 7.38 | 11200 | 0.3490 | 0.2703 |
| 0.1632 | 7.65 | 11600 | 0.3461 | 0.2706 |
| 0.1638 | 7.91 | 12000 | 0.3458 | 0.2673 |
| 0.1552 | 8.17 | 12400 | 0.3646 | 0.2732 |
| 0.154 | 8.44 | 12800 | 0.3706 | 0.2726 |
| 0.1512 | 8.7 | 13200 | 0.3609 | 0.2683 |
| 0.149 | 8.97 | 13600 | 0.3610 | 0.2668 |
| 0.1357 | 9.23 | 14000 | 0.3693 | 0.2740 |
| 0.1375 | 9.49 | 14400 | 0.3677 | 0.2625 |
| 0.1391 | 9.76 | 14800 | 0.3795 | 0.2762 |
| 0.1378 | 10.02 | 15200 | 0.3541 | 0.2592 |
| 0.1197 | 10.28 | 15600 | 0.3562 | 0.2507 |
| 0.1259 | 10.55 | 16000 | 0.3612 | 0.2584 |
| 0.1266 | 10.81 | 16400 | 0.3470 | 0.2527 |
| 0.1199 | 11.07 | 16800 | 0.3721 | 0.2571 |
| 0.1157 | 11.34 | 17200 | 0.3734 | 0.2571 |
| 0.1107 | 11.6 | 17600 | 0.3730 | 0.2589 |
| 0.1148 | 11.87 | 18000 | 0.3648 | 0.2536 |
| 0.1095 | 12.13 | 18400 | 0.3746 | 0.2521 |
| 0.1047 | 12.39 | 18800 | 0.3566 | 0.2530 |
| 0.1043 | 12.66 | 19200 | 0.3794 | 0.2545 |
| 0.1066 | 12.92 | 19600 | 0.3548 | 0.2439 |
| 0.0974 | 13.18 | 20000 | 0.3702 | 0.2461 |
| 0.0978 | 13.45 | 20400 | 0.3721 | 0.2492 |
| 0.095 | 13.71 | 20800 | 0.3599 | 0.2467 |
| 0.0963 | 13.97 | 21200 | 0.3650 | 0.2402 |
| 0.0902 | 14.24 | 21600 | 0.3689 | 0.2459 |
| 0.0898 | 14.5 | 22000 | 0.3832 | 0.2452 |
| 0.0865 | 14.77 | 22400 | 0.3982 | 0.2436 |
| 0.0911 | 15.03 | 22800 | 0.3785 | 0.2398 |
| 0.0793 | 15.29 | 23200 | 0.3731 | 0.2396 |
| 0.0806 | 15.56 | 23600 | 0.3626 | 0.2372 |
| 0.0789 | 15.82 | 24000 | 0.3707 | 0.2356 |
| 0.0779 | 16.08 | 24400 | 0.3850 | 0.2368 |
| 0.078 | 16.35 | 24800 | 0.3831 | 0.2363 |
| 0.0732 | 16.61 | 25200 | 0.3947 | 0.2287 |
| 0.0733 | 16.88 | 25600 | 0.3928 | 0.2374 |
| 0.0721 | 17.14 | 26000 | 0.3943 | 0.2324 |
| 0.0676 | 17.4 | 26400 | 0.3793 | 0.2311 |
| 0.0682 | 17.67 | 26800 | 0.3958 | 0.2257 |
| 0.0714 | 17.93 | 27200 | 0.3890 | 0.2322 |
| 0.0673 | 18.19 | 27600 | 0.3872 | 0.2229 |
| 0.0613 | 18.46 | 28000 | 0.3828 | 0.2226 |
| 0.0621 | 18.72 | 28400 | 0.3812 | 0.2214 |
| 0.0622 | 18.98 | 28800 | 0.3919 | 0.2212 |
| 0.0576 | 19.25 | 29200 | 0.4000 | 0.2205 |
| 0.0581 | 19.51 | 29600 | 0.3953 | 0.2203 |
| 0.0573 | 19.78 | 30000 | 0.3947 | 0.2190 |
| 0.0576 | 20.04 | 30400 | 0.3909 | 0.2156 |
| 0.0551 | 20.3 | 30800 | 0.4178 | 0.2153 |
| 0.0525 | 20.57 | 31200 | 0.3935 | 0.2152 |
| 0.0522 | 20.83 | 31600 | 0.4054 | 0.2151 |
| 0.0519 | 21.09 | 32000 | 0.3877 | 0.2135 |
| 0.0479 | 21.36 | 32400 | 0.4119 | 0.2107 |
| 0.0472 | 21.62 | 32800 | 0.3967 | 0.2091 |
| 0.048 | 21.89 | 33200 | 0.3812 | 0.2057 |
| 0.0458 | 22.15 | 33600 | 0.3931 | 0.2043 |
| 0.0459 | 22.41 | 34000 | 0.3937 | 0.2049 |
| 0.0448 | 22.68 | 34400 | 0.3900 | 0.2056 |
| 0.0432 | 22.94 | 34800 | 0.4050 | 0.2049 |
| 0.0425 | 23.2 | 35200 | 0.3985 | 0.2014 |
| 0.0415 | 23.47 | 35600 | 0.3976 | 0.2013 |
| 0.0403 | 23.73 | 36000 | 0.4031 | 0.2018 |
| 0.04 | 23.99 | 36400 | 0.3996 | 0.2000 |
| 0.039 | 24.26 | 36800 | 0.3977 | 0.1993 |
| 0.0406 | 24.52 | 37200 | 0.3967 | 0.2000 |
| 0.0391 | 24.79 | 37600 | 0.3986 | 0.1980 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Ayham/bert_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-03T13:33:41Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm
|
anuragshas
| 2022-02-03T12:28:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6864
- Wer: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3322 | 14.81 | 400 | 3.7450 | 1.0 |
| 3.2662 | 29.63 | 800 | 3.2571 | 0.9996 |
| 1.6408 | 44.44 | 1200 | 0.9098 | 0.8162 |
| 1.2289 | 59.26 | 1600 | 0.6757 | 0.7099 |
| 1.0551 | 74.07 | 2000 | 0.6417 | 0.7044 |
| 0.966 | 88.89 | 2400 | 0.6365 | 0.6789 |
| 0.8713 | 103.7 | 2800 | 0.6617 | 0.6954 |
| 0.8055 | 118.52 | 3200 | 0.6371 | 0.6762 |
| 0.7489 | 133.33 | 3600 | 0.6798 | 0.6911 |
| 0.7073 | 148.15 | 4000 | 0.6567 | 0.6731 |
| 0.6609 | 162.96 | 4400 | 0.6742 | 0.6840 |
| 0.6435 | 177.78 | 4800 | 0.6862 | 0.6633 |
| 0.6282 | 192.59 | 5200 | 0.6865 | 0.6731 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
Baybars/wav2vec2-xls-r-1b-turkish
|
Baybars
| 2022-02-03T10:09:31Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-10500](https://huggingface.co/./checkpoint-10500) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7540
- Wer: 0.4647
- Cer: 0.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.999,0.9999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:------:|:---------------:|:------:|
| 1.0779 | 4.59 | 500 | 0.2354 | 0.8260 | 0.7395 |
| 0.7573 | 9.17 | 1000 | 0.2100 | 0.7544 | 0.6960 |
| 0.8225 | 13.76 | 1500 | 0.2021 | 0.6867 | 0.6672 |
| 0.621 | 18.35 | 2000 | 0.1874 | 0.6824 | 0.6209 |
| 0.6362 | 22.94 | 2500 | 0.1904 | 0.6712 | 0.6286 |
| 0.624 | 27.52 | 3000 | 0.1820 | 0.6940 | 0.6116 |
| 0.4781 | 32.11 | 3500 | 0.1735 | 0.6966 | 0.5989 |
| 0.5685 | 36.7 | 4000 | 0.1769 | 0.6742 | 0.5971 |
| 0.4384 | 41.28 | 4500 | 0.1767 | 0.6904 | 0.5999 |
| 0.5509 | 45.87 | 5000 | 0.1692 | 0.6734 | 0.5641 |
| 0.3665 | 50.46 | 5500 | 0.1680 | 0.7018 | 0.5662 |
| 0.3914 | 55.05 | 6000 | 0.1631 | 0.7121 | 0.5552 |
| 0.2467 | 59.63 | 6500 | 0.1563 | 0.6657 | 0.5374 |
| 0.2576 | 64.22 | 7000 | 0.1554 | 0.6920 | 0.5316 |
| 0.2711 | 68.81 | 7500 | 0.1495 | 0.6900 | 0.5176 |
| 0.2626 | 73.39 | 8000 | 0.1454 | 0.6843 | 0.5043 |
| 0.1377 | 77.98 | 8500 | 0.1470 | 0.7383 | 0.5101 |
| 0.2005 | 82.57 | 9000 | 0.1430 | 0.7228 | 0.5045 |
| 0.1355 | 87.16 | 9500 | 0.1375 | 0.7231 | 0.4869 |
| 0.0431 | 91.74 | 10000 | 0.1350 | 0.7397 | 0.4749 |
| 0.0586 | 96.33 | 10500 | 0.1339 | 0.7360 | 0.4754 |
| 0.0896 | 100.92 | 11000 | 0.1398 | 0.7187 | 0.4885 |
| 0.183 | 105.5 | 11500 | 0.1392 | 0.7310 | 0.4838 |
| 0.0963 | 110.09 | 12000 | 0.1362 | 0.7643 | 0.4759 |
| 0.0437 | 114.68 | 12500 | 0.1328 | 0.7525 | 0.4641 |
| 0.1122 | 119.27 | 13000 | 0.1317 | 0.7535 | 0.4651 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Atiqah/Atiqah
|
Atiqah
| 2022-02-03T07:04:44Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: artistic-2.0
---
|
pritoms/distilroberta-base-YTTranscript23
|
pritoms
| 2022-02-03T05:52:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-YTTranscript23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-YTTranscript23
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 2.9007 |
| No log | 2.0 | 140 | 2.9651 |
| No log | 3.0 | 210 | 2.9374 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
testimonial/wav2vec2-base-timit-demo-colab
|
testimonial
| 2022-02-03T03:07:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Wer: 0.3417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4156 | 4.0 | 500 | 1.2721 | 0.8882 |
| 0.6145 | 8.0 | 1000 | 0.4712 | 0.4510 |
| 0.229 | 12.0 | 1500 | 0.4459 | 0.3847 |
| 0.1312 | 16.0 | 2000 | 0.4739 | 0.3786 |
| 0.0897 | 20.0 | 2500 | 0.4483 | 0.3562 |
| 0.0608 | 24.0 | 3000 | 0.4450 | 0.3502 |
| 0.0456 | 28.0 | 3500 | 0.4688 | 0.3417 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Ayham/albert_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-02T23:15:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
dropout05/t5-tiny
|
dropout05
| 2022-02-02T19:11:43Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
jonfd/convbert-base-igc-is
|
jonfd
| 2022-02-02T17:10:34Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ConvBERT-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
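A minimal feature-extraction sketch with `transformers` (PyTorch shown; TensorFlow weights are also available):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# A minimal sketch: extract contextual embeddings for an Icelandic sentence.
tokenizer = AutoTokenizer.from_pretrained("jonfd/convbert-base-igc-is")
model = AutoModel.from_pretrained("jonfd/convbert-base-igc-is")

inputs = tokenizer("Halló, hvað segir þú gott?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```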
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
cahya/wav2vec2-base-turkish-artificial
|
cahya
| 2022-02-02T15:44:36Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish with Artificial Voices by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 57.60
---
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.60 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition).
|
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2
|
arjuntheprogrammer
| 2022-02-02T15:16:39Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-multilingual-cased-sentiment-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.7614
- name: F1
type: f1
value: 0.7614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
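The card omits a usage example; a minimal sketch with the `text-classification` pipeline (the English input is illustrative, though the base model is multilingual) is:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)
# Returns the predicted sentiment label and its score
print(classifier("This product exceeded my expectations."))
```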
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-OH
|
NbAiLab
| 2022-02-02T06:10:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-300M-NPSC-OH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
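No usage example is included in the card; the sketch below assumes a local 16 kHz mono recording (the filename is a placeholder) and uses the standard ASR pipeline.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NbAiLab/wav2vec2-xlsr-300M-NPSC-OH")
# The model was trained on 16 kHz audio; "npsc_sample.wav" is a placeholder path.
print(asr("npsc_sample.wav")["text"])
```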
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
tal-yifat/bert-injury-classifier
|
tal-yifat
| 2022-02-02T04:35:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-injury-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-injury-classifier
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.5298
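No usage example or label documentation is provided; the sketch below simply loads the checkpoint with the `text-classification` pipeline. The label names it returns come from the (undocumented) training configuration.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tal-yifat/bert-injury-classifier")
# Label names depend on the training setup, which is not documented in this card.
print(classifier("The worker slipped on a wet floor and injured his ankle."))
```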
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6676 | 1.0 | 19026 | 0.6635 | 0.6216 |
| 0.6915 | 2.0 | 38052 | 0.6915 | 0.5298 |
| 0.6924 | 3.0 | 57078 | 0.6915 | 0.5298 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
CalvinHuang/mt5-small-finetuned-amazon-en-es
|
CalvinHuang
| 2022-02-02T03:50:37Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.2936
- Rouge2: 8.0678
- Rougel: 16.8129
- Rougelsum: 16.9991
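The card does not show how to run the model; a minimal sketch with the `summarization` pipeline (the review text is illustrative) is:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="CalvinHuang/mt5-small-finetuned-amazon-en-es")
review = (
    "I bought this keyboard a month ago and the keys already feel mushy. "
    "The backlight is nice, but I expected better build quality for the price."
)
# Generate a short, review-style summary
print(summarizer(review, max_length=30)[0]["summary_text"])
```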
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
pdroberts/distilbert-base-uncased-finetuned-emotion
|
pdroberts
| 2022-02-01T23:48:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
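No evaluation results or usage example are reported; for reference, a minimal inference sketch with the `text-classification` pipeline is:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pdroberts/distilbert-base-uncased-finetuned-emotion",
)
# Returns the top predicted emotion label and its score
print(classifier("I can't wait to see my friends this weekend!"))
```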
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/cosmonolan
|
huggingtweets
| 2022-02-01T21:59:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cosmonolan/1643752768713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436186613667622920/PQrOPSrV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nolan Koblischke</div>
<div style="text-align: center; font-size: 14px;">@cosmonolan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nolan Koblischke.
| Data | Nolan Koblischke |
| --- | --- |
| Tweets downloaded | 154 |
| Retweets | 5 |
| Short tweets | 6 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13msto5g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cosmonolan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25mhxfie) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25mhxfie/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cosmonolan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mattmcclean/distilbert-base-uncased-finetuned-emotion
|
mattmcclean
| 2022-02-01T19:48:01Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9252235175634111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2173
- Accuracy: 0.925
- F1: 0.9252
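For reference, a minimal inference sketch is shown below; it assumes a recent `transformers` release where the text-classification pipeline accepts `top_k`.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mattmcclean/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all emotion labels (requires a recent transformers release)
)
print(classifier("I'm so relieved the exam is finally over."))
```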
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.825 | 1.0 | 250 | 0.2925 | 0.915 | 0.9134 |
| 0.2444 | 2.0 | 500 | 0.2173 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
conjuring92/distilroberta-base-finetuned-toxic
|
conjuring92
| 2022-02-01T18:24:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-toxic
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2768
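A usage example is not provided; since this is a masked language model, a minimal sketch with the `fill-mask` pipeline (note the RoBERTa-style `<mask>` token) is:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="conjuring92/distilroberta-base-finetuned-toxic")
# Print the top predicted tokens for the masked position with their scores
for prediction in fill_mask("The referee made a <mask> call in the final minute."):
    print(prediction["token_str"], round(prediction["score"], 3))
```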
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5338 | 1.0 | 313 | 2.3127 |
| 2.4482 | 2.0 | 626 | 2.2985 |
| 2.4312 | 3.0 | 939 | 2.2411 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0
- Datasets 1.18.1
- Tokenizers 0.10.3
|
naleraphael/rasr_sample
|
naleraphael
| 2022-02-01T18:18:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: rasr_sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rasr_sample
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3147
- Wer: 0.2676
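No usage example is included; the sketch below assumes a local 16 kHz Swedish recording (the filename is a placeholder) and enables chunking so longer files can be transcribed.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="naleraphael/rasr_sample",
    chunk_length_s=30,  # transcribe long recordings in 30-second chunks
)
print(asr("swedish_sample.wav")["text"])
```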
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3332 | 1.45 | 500 | 3.3031 | 1.0 |
| 2.9272 | 2.91 | 1000 | 2.9353 | 0.9970 |
| 2.0736 | 4.36 | 1500 | 1.1565 | 0.8714 |
| 1.7339 | 5.81 | 2000 | 0.7156 | 0.6688 |
| 1.5989 | 7.27 | 2500 | 0.5791 | 0.5519 |
| 1.4916 | 8.72 | 3000 | 0.5038 | 0.5169 |
| 1.4562 | 10.17 | 3500 | 0.4861 | 0.4805 |
| 1.3893 | 11.63 | 4000 | 0.4584 | 0.4761 |
| 1.3797 | 13.08 | 4500 | 0.4298 | 0.4686 |
| 1.3508 | 14.53 | 5000 | 0.4138 | 0.3744 |
| 1.3165 | 15.99 | 5500 | 0.4015 | 0.3578 |
| 1.281 | 17.44 | 6000 | 0.3883 | 0.3472 |
| 1.2682 | 18.89 | 6500 | 0.3904 | 0.3434 |
| 1.2477 | 20.35 | 7000 | 0.3726 | 0.3321 |
| 1.2364 | 21.8 | 7500 | 0.3685 | 0.3281 |
| 1.2041 | 23.26 | 8000 | 0.3597 | 0.3194 |
| 1.1901 | 24.71 | 8500 | 0.3542 | 0.3203 |
| 1.1903 | 26.16 | 9000 | 0.3500 | 0.3138 |
| 1.1677 | 27.61 | 9500 | 0.3458 | 0.3067 |
| 1.1718 | 29.07 | 10000 | 0.3595 | 0.3112 |
| 1.1562 | 30.52 | 10500 | 0.3433 | 0.3022 |
| 1.1392 | 31.97 | 11000 | 0.3440 | 0.2936 |
| 1.1258 | 33.43 | 11500 | 0.3396 | 0.2950 |
| 1.1067 | 34.88 | 12000 | 0.3379 | 0.2939 |
| 1.0953 | 36.34 | 12500 | 0.3370 | 0.2868 |
| 1.0835 | 37.79 | 13000 | 0.3317 | 0.2860 |
| 1.0772 | 39.24 | 13500 | 0.3302 | 0.2854 |
| 1.0853 | 40.7 | 14000 | 0.3265 | 0.2783 |
| 1.0689 | 42.15 | 14500 | 0.3306 | 0.2770 |
| 1.0394 | 43.6 | 15000 | 0.3233 | 0.2757 |
| 1.0581 | 45.06 | 15500 | 0.3199 | 0.2713 |
| 1.0362 | 46.51 | 16000 | 0.3154 | 0.2683 |
| 1.0406 | 47.96 | 16500 | 0.3176 | 0.2688 |
| 1.0082 | 49.42 | 17000 | 0.3149 | 0.2679 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
patrickvonplaten/wav2vec2-common_voice-tamil
|
patrickvonplaten
| 2022-02-01T14:17:40Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tamil
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1172
- Wer: 1.0070
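No usage example is given, and note that the reported WER of roughly 1.0 means transcriptions from this checkpoint will be largely unusable. If you still want to run it, a minimal sketch (placeholder filename, 16 kHz audio assumed) is:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-common_voice-tamil")
print(asr("tamil_sample.wav")["text"])
```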
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.84 | 100 | 4.0148 | 1.0 |
| No log | 1.69 | 200 | 3.1738 | 1.0 |
| No log | 2.54 | 300 | 2.5980 | 1.0236 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|