awacke1 committed
Commit 01348d9 · 1 Parent(s): 2296bb4

Update README.md

Files changed (1)
  1. README.md +23 -2
README.md CHANGED
@@ -3,10 +3,31 @@ title: GradioFlanT5BloomAndTaskSource
  emoji: 📊
  colorFrom: green
  colorTo: gray
- sdk: docker
+ sdk: gradio
+ sdk_version: 3.18.0
  app_file: app.py
  pinned: false
  license: mit
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+ The Flan-T5 model is a variant of the T5 (Text-to-Text Transfer Transformer) model, a large-scale neural network architecture designed for a wide range of natural language processing tasks, including language translation, question answering, and summarization.
+
+ The Flan-T5 model was trained on a combination of several large-scale datasets, including:
+
+ C4: The Colossal Clean Crawled Corpus - a large dataset of cleaned web pages collected by crawling the internet, containing roughly 750GB of English text (its multilingual counterpart, mC4, covers more than 100 languages).
+
+ Wikipedia: A dataset of text extracted from Wikipedia articles in various languages.
+
+ Common Crawl News: A dataset of news articles collected from various news sources.
+
+ BooksCorpus: A large dataset of text passages extracted from over 11,000 books, containing over 800 million words.
+
+ OpenWebText: A dataset of text scraped from web pages, built as an open re-creation of WebText.
+
+ WebText: A smaller dataset of web pages containing around 40GB of text.
+
+ English-language books and articles from the JSTOR database.
+
+ These datasets were preprocessed and used to train the Flan-T5 model, which was then fine-tuned on a range of downstream tasks, including text classification, sentiment analysis, question answering, summarization, and language translation.
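
This commit switches the Space from the Docker SDK to Gradio 3.18.0, with app.py as the entry point. app.py itself is not part of the diff, so the following is only a minimal sketch of how such a Space could serve Flan-T5 through Gradio — the checkpoint (google/flan-t5-base), the function name, and the UI labels are assumptions, not the Space's actual code.

```python
# Hypothetical minimal app.py for a Gradio Space serving Flan-T5.
# Matches `sdk: gradio` / `app_file: app.py` from the config above;
# the checkpoint and labels below are assumed, not taken from the Space.
import gradio as gr
from transformers import pipeline

# "text2text-generation" is the pipeline task for encoder-decoder
# models such as Flan-T5.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

def respond(prompt: str) -> str:
    # Flan-T5 is text-to-text: the instruction in the prompt selects the
    # task (translation, question answering, summarization, ...).
    return generator(prompt, max_length=128)[0]["generated_text"]

demo = gr.Interface(
    fn=respond,
    inputs=gr.Textbox(label="Instruction", placeholder="Summarize: ..."),
    outputs=gr.Textbox(label="Model output"),
    title="Flan-T5 demo",
)

if __name__ == "__main__":
    demo.launch()
```

Because Flan-T5 is instruction-tuned, a single text box covers the tasks listed above: prefixing the prompt with, say, "Translate English to German:" or "Summarize:" selects the behavior.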