MaxLSB committed · Commit d8955a4 · verified · 1 Parent(s): 9a3c6c1

Update README.md

Files changed (1)
1. README.md +3 -6
README.md CHANGED
@@ -6,7 +6,9 @@ language:
 - en
 pipeline_tag: automatic-speech-recognition
 ---
-<hr>
+
+# SplitFormer
+
 <div align="center" style="line-height: 1;">
   <a href="https://github.com/augustgw/early-exit-transformer" target="_blank" style="margin: 2px;">
     <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Splitformer-181717?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
@@ -19,11 +21,6 @@ pipeline_tag: automatic-speech-recognition
   </a>
 </div>
 
----
-
-# SplitFormer
-
-## Overview
 Splitformer is a Conformer-based ASR model (36.7M parameters) trained from scratch on 1000 hours of the LibriSpeech dataset with an early-exit objective.
 
 This architecture introduces parallel downsampling layers before the first and last exits to improve performance with minimal extra overhead, while retaining inference speed.
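The "early-exit objective" mentioned in the README is, in early-exit ASR encoders of this kind, commonly implemented by attaching a CTC head to several intermediate encoder layers and summing the per-exit losses, so that shallower exits also learn to produce usable transcriptions. Below is a minimal PyTorch sketch of that idea; it is illustrative only, and the names `early_exit_loss` and `exit_log_probs` are hypothetical rather than taken from the linked early-exit-transformer repository.

```python
# Illustrative sketch of an early-exit training objective for a CTC-based
# ASR encoder: each exit produces its own log-probabilities over the
# vocabulary, and the total loss is the sum of the per-exit CTC losses.
# (Hypothetical example, not the Splitformer training code.)
import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

def early_exit_loss(exit_log_probs, targets, input_lengths, target_lengths):
    """exit_log_probs: list of (T, N, C) log-probability tensors, one per exit."""
    total = torch.zeros((), device=exit_log_probs[0].device)
    for log_probs in exit_log_probs:
        total = total + ctc_loss(log_probs, targets, input_lengths, target_lengths)
    return total
```

Under this formulation, every exit of the model, including the two preceded by the parallel downsampling layers described above, would contribute one term to the summed loss.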