sha (string, 40 chars)|text (string, 1-13.4M chars)|id (string, 2-117 chars)|tags (list, 1-7.91k items)|created_at (string, 25 chars)|metadata (string, 2-875k chars)|last_modified (string, 25 chars)|arxiv (list, 0-25 items)|languages (list, 0-7.91k items)|
---|---|---|---|---|---|---|---|---|
ea9393cfb9539b84936a5900d6bc79ac5f8d4262
|
# indspeech_news_lvcsr
This is the first Indonesian speech dataset for large vocabulary continuous speech recognition (LVCSR) with more than 40 hours of speech and 400 speakers [Sakti et al., 2008]. R&D Division of PT Telekomunikasi Indonesia (TELKOMRisTI) developed the data in 2005-2006, in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan, as the continuation of the Asia-Pacific Telecommunity (APT) project [Sakti et al., 2004]. It has also been successfully used for developing Indonesian LVCSR in the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
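As a concrete starting point, a minimal loading sketch is shown below. The repository id is taken from this row's `id` field; the `trust_remote_code` flag is an assumption that may or may not be required by your `datasets` version.

```python
# Minimal sketch: load this card's dataset with HuggingFace `datasets`
# after running `pip install nusacrowd` as instructed above.
# `trust_remote_code` is an assumption (script-based datasets usually
# need it on recent `datasets` versions); drop it if not supported.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indspeech_news_lvcsr", trust_remote_code=True)
print(dset)  # inspect the available splits and features
```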
## Citation
```
@inproceedings{sakti-tcast-2008,
title = "Development of {I}ndonesian Large Vocabulary Continuous Speech Recognition System within {A-STAR} Project",
author = "Sakti, Sakriani and Kelana, Eka and Riza, Hammam and Sakai, Shinsuke and Markov, Konstantin and Nakamura, Satoshi",
booktitle = "Proc. IJCNLP Workshop on Technologies and Corpora for Asia-Pacific Speech Translation (TCAST)",
year = "2008",
pages = "19--24"
address = "Hyderabad, India"
}
@inproceedings{sakti-icslp-2004,
title = "Indonesian Speech Recognition for Hearing and Speaking Impaired People",
author = "Sakti, Sakriani and Hutagaol, Paulus and Arman, Arry Akhmad and Nakamura, Satoshi",
booktitle = "Proc. International Conference on Spoken Language Processing (INTERSPEECH - ICSLP)",
year = "2004",
pages = "1037--1040"
address = "Jeju Island, Korea"
}
@article{sakti-s2st-csl-2013,
title = "{A-STAR}: Toward Translating Asian Spoken Languages",
author = "Sakti, Sakriani and Paul, Michael and Finch, Andrew and Sakai, Shinsuke and Thang, Tat Vu, and Kimura, Noriyuki and Hori, Chiori and Sumita, Eiichiro and Nakamura, Satoshi and Park, Jun and Wutiwiwatchai, Chai and Xu, Bo and Riza, Hammam and Arora, Karunesh and Luong, Chi Mai and Li, Haizhou",
journal = "Special issue on Speech-to-Speech Translation, Computer Speech and Language Journal",
volume = "27",
number ="2",
pages = "509--527",
year = "2013",
publisher = "Elsevier"
}
```
## License
CC BY-NC-SA 4.0
## Homepage
[https://github.com/s-sakti/data_indsp_news_lvcsr](https://github.com/s-sakti/data_indsp_news_lvcsr)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indspeech_news_lvcsr
|
[
"language:ind",
"speech-recognition",
"region:us"
] |
2023-09-26T10:16:08+00:00
|
{"language": ["ind"], "tags": ["speech-recognition"]}
|
2023-09-26T11:31:52+00:00
|
[] |
[
"ind"
] |
f15555dcb6d63ebfa8b9bce0de04a1f241f85944
|
# kopi_nllb
KoPI (Korpus Perayapan Indonesia)-NLLB is a corpus covering Indonesian-family languages (Acehnese, Balinese, Banjarese, Indonesian, Javanese, Minangkabau, Sundanese) extracted from the NLLB dataset, allenai/nllb.
Each language subset was also filtered with deduplication techniques, namely exact-hash (MD5) deduplication and MinHash LSH near-deduplication; a sketch of these two steps follows.
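The sketch below is illustrative only, not the actual KoPI-NLLB pipeline: exact-hash (MD5) deduplication followed by MinHash LSH near-deduplication with the `datasketch` library. The similarity threshold and whitespace tokenization are assumptions.

```python
# Illustrative sketch (not the actual KoPI-NLLB pipeline):
# exact-hash dedup via MD5, near-dedup via MinHash LSH (pip install datasketch).
# The Jaccard threshold and whitespace tokenization are assumptions.
import hashlib
from datasketch import MinHash, MinHashLSH

def md5_key(text: str) -> str:
    return hashlib.md5(text.strip().lower().encode("utf-8")).hexdigest()

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

def deduplicate(docs):
    seen = set()                                   # MD5 keys of kept documents
    lsh = MinHashLSH(threshold=0.8, num_perm=128)  # assumed similarity threshold
    kept = []
    for i, doc in enumerate(docs):
        key = md5_key(doc)
        if key in seen:
            continue                               # exact duplicate
        mh = minhash(doc)
        if lsh.query(mh):
            continue                               # near-duplicate of a kept doc
        seen.add(key)
        lsh.insert(str(i), mh)
        kept.append(doc)
    return kept
```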
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
Heffernan et al., Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages. arXiv, https://arxiv.org/abs/2205.12654, 2022.
NLLB Team et al., No Language Left Behind: Scaling Human-Centered Machine Translation. arXiv, https://arxiv.org/abs/2207.04672, 2022.
```
## License
ODC_C
## Homepage
[https://huggingface.co/datasets/munggok/KoPI-NLLB](https://huggingface.co/datasets/munggok/KoPI-NLLB)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/kopi_nllb
|
[
"language:ind",
"language:jav",
"language:ace",
"language:ban",
"language:bjn",
"language:min",
"language:sun",
"self-supervised-pretraining",
"arxiv:2205.12654",
"arxiv:2207.04672",
"region:us"
] |
2023-09-26T10:16:12+00:00
|
{"language": ["ind", "jav", "ace", "ban", "bjn", "min", "sun"], "tags": ["self-supervised-pretraining"]}
|
2023-09-26T11:31:56+00:00
|
[
"2205.12654",
"2207.04672"
] |
[
"ind",
"jav",
"ace",
"ban",
"bjn",
"min",
"sun"
] |
9d32dbd2afdb4f09697dd5babc543b98daaafdec
|
# id_panl_bppt
Parallel Text Corpora for Multi-Domain Translation System created by BPPT (Indonesian Agency for the Assessment and
Application of Technology) for PAN Localization Project (A Regional Initiative to Develop Local Language Computing
Capacity in Asia). The dataset contains about 24K sentences in English and Bahasa Indonesia from 4 different topics
(Economy, International Affairs, Science & Technology, and Sports).
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{id_panl_bppt,
author = {PAN Localization - BPPT},
title = {Parallel Text Corpora, English Indonesian},
year = {2009},
url = {http://digilib.bppt.go.id/sampul/p92-budiono.pdf},
}
```
## Homepage
[http://digilib.bppt.go.id/sampul/p92-budiono.pdf](http://digilib.bppt.go.id/sampul/p92-budiono.pdf)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/id_panl_bppt
|
[
"language:ind",
"machine-translation",
"region:us"
] |
2023-09-26T10:16:17+00:00
|
{"language": ["ind"], "tags": ["machine-translation"]}
|
2023-09-26T11:32:02+00:00
|
[] |
[
"ind"
] |
1abb654f7ff5fb4cb5ad63691e6cb5fb634b8aa9
|
# inset_lexicon
InSet is an Indonesian sentiment lexicon built to identify written opinions and categorize them as positive or negative,
which can be used to analyze public sentiment towards a particular topic, event, or product. Composed from a collection
of words taken from Indonesian tweets, InSet was constructed by manually weighting each word and enhanced by adding stemming and synonym sets.
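As a toy illustration of how such a weighted lexicon can be applied, a minimal scoring sketch is shown below; the words and weights are hypothetical placeholders, not actual InSet entries.

```python
# Toy sketch of lexicon-based sentiment scoring with per-word weights.
# The entries below are hypothetical placeholders, not real InSet values.
inset = {"bagus": 4, "senang": 3, "buruk": -4, "kecewa": -3}

def lexicon_score(sentence: str) -> int:
    # Sum the weight of every known word; unknown words contribute 0.
    return sum(inset.get(token, 0) for token in sentence.lower().split())

print(lexicon_score("filmnya bagus tapi akhirnya kecewa"))  # 4 + (-3) = 1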
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{koto2017inset,
author = {Koto, Fajri and Rahmaningtyas, Gemala},
year = {2017},
month = {12},
pages = {},
title = {InSet Lexicon: Evaluation of a Word List for Indonesian Sentiment Analysis in Microblogs},
doi = {10.1109/IALP.2017.8300625}
}
```
## License
Unknown
## Homepage
[https://www.researchgate.net/publication/321757985_InSet_Lexicon_Evaluation_of_a_Word_List_for_Indonesian_Sentiment_Analysis_in_Microblogs](https://www.researchgate.net/publication/321757985_InSet_Lexicon_Evaluation_of_a_Word_List_for_Indonesian_Sentiment_Analysis_in_Microblogs)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/inset_lexicon
|
[
"language:ind",
"license:unknown",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:16:19+00:00
|
{"language": ["ind"], "license": "unknown", "tags": ["sentiment-analysis"]}
|
2023-09-26T11:32:05+00:00
|
[] |
[
"ind"
] |
c78a163be1a9f22c890166cd35a7b7acb6156bda
|
# titml_idn
TITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) was collected to build a pioneering Indonesian Large Vocabulary Continuous Speech Recognition (LVCSR) system. In order to build an LVCSR system, highly accurate acoustic models and large-scale language models are essential. Since no Indonesian speech corpus was available yet, we tried to collect speech data from 20 Indonesian native speakers (11 males and 9 females) to construct a speech corpus for training the acoustic model based on Hidden Markov Models (HMMs). A text corpus collected by ILPS, Informatics Institute, University of Amsterdam, was used to build a 40K-vocabulary dictionary and an n-gram language model.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{lestari2006titmlidn,
title={A large vocabulary continuous speech recognition system for Indonesian language},
author={Lestari, Dessi Puji and Iwano, Koji and Furui, Sadaoki},
booktitle={15th Indonesian Scientific Conference in Japan Proceedings},
pages={17--22},
year={2006}
}
```
## License
For research purposes only. If you use this corpus, you have to cite (Lestari et al., 2006).
## Homepage
[http://research.nii.ac.jp/src/en/TITML-IDN.html](http://research.nii.ac.jp/src/en/TITML-IDN.html)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/titml_idn
|
[
"language:ind",
"speech-recognition",
"region:us"
] |
2023-09-26T10:16:22+00:00
|
{"language": ["ind"], "tags": ["speech-recognition"]}
|
2023-09-26T11:32:09+00:00
|
[] |
[
"ind"
] |
b156530275f4a3933c9e68c442c07c1328f72eff
|
# posp
POSP is a POS tagging dataset containing 8,400 sentences, collected from Indonesian news websites, with 26 POS tag classes.
The POS tag labels follow the Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention.
The POSP dataset is split into 3 sets with 6,720 train, 840 validation, and 840 test sentences.
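A quick sanity check of the split sizes quoted above could look like the sketch below; the repository id is taken from this row's `id` field, and the `trust_remote_code` flag is an assumption.

```python
# Sketch: verify the 6,720 / 840 / 840 split sizes quoted above.
# `trust_remote_code` is an assumption; drop it if your `datasets`
# version does not require or support it.
from datasets import load_dataset

posp = load_dataset("SEACrowd/posp", trust_remote_code=True)
for split_name, split in posp.items():
    print(split_name, len(split))  # expected: train 6720, validation 840, test 840
```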
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{hoesen2018investigating,
title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
author={Devin Hoesen and Ayu Purwarianti},
booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/posp
|
[
"language:ind",
"pos-tagging",
"region:us"
] |
2023-09-26T10:16:27+00:00
|
{"language": ["ind"], "tags": ["pos-tagging"]}
|
2023-09-26T11:32:13+00:00
|
[] |
[
"ind"
] |
eaba0344b80f4f72c5d5e6de0e1a476ad4d58d7c
|
# nusax_senti
NusaX is a high-quality multilingual parallel corpus that covers 12 languages: Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-Senti is a 3-label (positive, neutral, negative) sentiment analysis dataset for the 10 Indonesian local languages plus Indonesian and English.
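Since NusaX-Senti ships one subset per language, a hedged way to discover the per-language configurations (rather than guessing their names) is sketched below; the repository id comes from this row's `id` field.

```python
# Sketch: list the per-language configurations of NusaX-Senti instead of
# guessing their names, then load the first one. Depending on your `datasets`
# version, script-based repos may additionally require trust_remote_code=True.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("SEACrowd/nusax_senti")
print(configs)  # inspect the available per-language subsets
subset = load_dataset("SEACrowd/nusax_senti", configs[0])
print(subset)
```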
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/nusax/tree/main/datasets/sentiment](https://github.com/IndoNLP/nusax/tree/main/datasets/sentiment)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nusax_senti
|
[
"language:ind",
"language:ace",
"language:ban",
"language:bjn",
"language:bbc",
"language:bug",
"language:jav",
"language:mad",
"language:min",
"language:nij",
"language:sun",
"language:eng",
"sentiment-analysis",
"arxiv:2205.15960",
"region:us"
] |
2023-09-26T10:16:31+00:00
|
{"language": ["ind", "ace", "ban", "bjn", "bbc", "bug", "jav", "mad", "min", "nij", "sun", "eng"], "tags": ["sentiment-analysis"]}
|
2023-09-26T11:32:17+00:00
|
[
"2205.15960"
] |
[
"ind",
"ace",
"ban",
"bjn",
"bbc",
"bug",
"jav",
"mad",
"min",
"nij",
"sun",
"eng"
] |
217d1dc5f8c3bde8d2cc56d3f7d344208b6aef2a
|
# barasa
The Barasa dataset is an Indonesian SentiWordNet for sentiment analysis.
For each term, the pair (POS, ID) uniquely identifies a WordNet 3.0 synset, and PosScore and NegScore give the positivity and negativity of the term.
The objectivity score can be calculated as: ObjScore = 1 - (PosScore + NegScore).
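The formula above translates directly into code; a one-line sketch:

```python
# Direct transcription of the formula above: ObjScore = 1 - (PosScore + NegScore).
def obj_score(pos_score: float, neg_score: float) -> float:
    return 1.0 - (pos_score + neg_score)

print(obj_score(0.25, 0.5))  # 0.25: a mildly subjective, mixed-polarity term
```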
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{baccianella-etal-2010-sentiwordnet,
title = "{S}enti{W}ord{N}et 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining",
author = "Baccianella, Stefano and
Esuli, Andrea and
Sebastiani, Fabrizio",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = "2010",
address = "Valletta, Malta",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2010/pdf/769_Paper.pdf",
abstract = "In this work we present SENTIWORDNET 3.0, a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications. SENTIWORDNET 3.0 is an improved version of SENTIWORDNET 1.0, a lexical resource publicly available for research purposes, now currently licensed to more than 300 research groups and used in a variety of research projects worldwide. Both SENTIWORDNET 1.0 and 3.0 are the result of automatically annotating all WORDNET synsets according to their degrees of positivity, negativity, and neutrality. SENTIWORDNET 1.0 and 3.0 differ (a) in the versions of WORDNET which they annotate (WORDNET 2.0 and 3.0, respectively), (b) in the algorithm used for automatically annotating WORDNET, which now includes (additionally to the previous semi-supervised learning step) a random-walk step for refining the scores. We here discuss SENTIWORDNET 3.0, especially focussing on the improvements concerning aspect (b) that it embodies with respect to version 1.0. We also report the results of evaluating SENTIWORDNET 3.0 against a fragment of WORDNET 3.0 manually annotated for positivity, negativity, and neutrality; these results indicate accuracy improvements of about 20{\%} with respect to SENTIWORDNET 1.0.",
}
@misc{moeljadi_2016,
title={Neocl/Barasa: Indonesian SentiWordNet},
url={https://github.com/neocl/barasa},
journal={GitHub},
author={Moeljadi, David},
year={2016}, month={Mar}
}
```
## License
MIT
## Homepage
[https://github.com/neocl/barasa](https://github.com/neocl/barasa)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/barasa
|
[
"language:ind",
"license:mit",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:16:41+00:00
|
{"language": ["ind"], "license": "mit", "tags": ["sentiment-analysis"]}
|
2023-09-26T11:32:24+00:00
|
[] |
[
"ind"
] |
5213f19d2ca0a6a0684b79541c951ecfe1aafbde
|
# nusatranslation_emot
Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.
We introduce novel high-quality human-curated corpora, i.e., NusaMenulis, which cover 12 languages spoken in Indonesia. The resource extends the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).
For the rhetoric mode classification task, we cover 5 rhetoric modes, i.e., narrative, persuasive, argumentative, descriptive, and expository.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@unpublished{anonymous2023nusawrites:,
title={NusaWrites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages},
author={Anonymous},
journal={OpenReview Preprint},
year={2023},
note={anonymous preprint under review}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/nusa-writes](https://github.com/IndoNLP/nusa-writes)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nusatranslation_emot
|
[
"language:abs",
"language:btk",
"language:bew",
"language:bug",
"language:jav",
"language:mad",
"language:mak",
"language:min",
"language:mui",
"language:rej",
"language:sun",
"emotion-classification",
"region:us"
] |
2023-09-26T10:16:49+00:00
|
{"language": ["abs", "btk", "bew", "bug", "jav", "mad", "mak", "min", "mui", "rej", "sun"], "tags": ["emotion-classification"]}
|
2023-09-26T11:32:34+00:00
|
[] |
[
"abs",
"btk",
"bew",
"bug",
"jav",
"mad",
"mak",
"min",
"mui",
"rej",
"sun"
] |
c07c51f1cb349c7f57a0d93224fad4acd7093855
|
# indolem_ud_id_pud
1 of 8 sub-datasets of IndoLEM, a comprehensive dataset encompassing 7 NLP tasks (Koto et al., 2020).
This dataset is part of [Parallel Universal Dependencies (PUD)](http://universaldependencies.org/conll17/) project.
This is based on the first corrected version by Alfina et al. (2019) and contains 1,000 sentences.
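Since PUD treebanks are distributed in CoNLL-U format, a minimal parsing sketch with the `conllu` library is shown below; the short sentence is a hypothetical example, not taken from the corpus.

```python
# Sketch: parse a CoNLL-U sentence like the ones in this treebank
# using the `conllu` library (pip install conllu). The sentence below
# is a hypothetical example, not an actual sentence from the corpus.
from conllu import parse

sample = (
    "# text = Saya makan.\n"
    "1\tSaya\tsaya\tPRON\t_\t_\t2\tnsubj\t_\t_\n"
    "2\tmakan\tmakan\tVERB\t_\t_\t0\troot\t_\t_\n"
    "3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_\n"
    "\n"
)
for token in parse(sample)[0]:
    print(token["form"], token["upos"], token["head"], token["deprel"])
```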
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@conference{2f8c7438a7f44f6b85b773586cff54e8,
title = "A gold standard dependency treebank for Indonesian",
author = "Ika Alfina and Arawinda Dinakaramani and Fanany, {Mohamad Ivan} and Heru Suhartanto",
note = "Publisher Copyright: { extcopyright} 2019 Proceedings of the 33rd Pacific Asia Conference on Language, Information and Computation, PACLIC 2019. All rights reserved.; 33rd Pacific Asia Conference on Language, Information and Computation, PACLIC 2019 ; Conference date: 13-09-2019 Through 15-09-2019",
year = "2019",
month = jan,
day = "1",
language = "English",
pages = "1--9",
}
@article{DBLP:journals/corr/abs-2011-00677,
author = {Fajri Koto and
Afshin Rahimi and
Jey Han Lau and
Timothy Baldwin},
title = {IndoLEM and IndoBERT: {A} Benchmark Dataset and Pre-trained Language
Model for Indonesian {NLP}},
journal = {CoRR},
volume = {abs/2011.00677},
year = {2020},
url = {https://arxiv.org/abs/2011.00677},
eprinttype = {arXiv},
eprint = {2011.00677},
timestamp = {Fri, 06 Nov 2020 15:32:47 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2011-00677.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indolem_ud_id_pud
|
[
"language:ind",
"license:cc-by-4.0",
"dependency-parsing",
"arxiv:2011.00677",
"region:us"
] |
2023-09-26T10:17:04+00:00
|
{"language": ["ind"], "license": "cc-by-4.0", "tags": ["dependency-parsing"]}
|
2023-09-26T11:32:43+00:00
|
[
"2011.00677"
] |
[
"ind"
] |
557f055d9adb271f894b59b22c81195acf4558ba
|
# parallel_id_nyo
Dataset that contains Indonesian - Lampung language pairs.
The original data should contain 3,000 rows; unfortunately,
not all of the instances in the original data are aligned perfectly.
Thus, this dataset only keeps the aligned ones, which amount to 1,727 pairs.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
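For illustration, a minimal sketch of peeking at the corpus through HuggingFace's `load_dataset`; the default configuration is an assumption, and repositories that ship a loading script may additionally need `trust_remote_code=True` on newer `datasets` releases.

```python
# Minimal sketch: inspect SEACrowd/parallel_id_nyo with HuggingFace Datasets.
from datasets import load_dataset

dset = load_dataset("SEACrowd/parallel_id_nyo")   # default config assumed
for name, split in dset.items():
    print(name, split.num_rows)                   # splits and their sizes
print(next(iter(dset.values()))[0])               # one aligned Indonesian-Lampung pair
```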
## Citation
```
@article{Abidin_2021,
doi = {10.1088/1742-6596/1751/1/012036},
url = {https://dx.doi.org/10.1088/1742-6596/1751/1/012036},
year = {2021},
month = {jan},
publisher = {IOP Publishing},
volume = {1751},
number = {1},
pages = {012036},
author = {Z Abidin and Permata and I Ahmad and Rusliyawati},
title = {Effect of mono corpus quantity on statistical machine translation
Indonesian - Lampung dialect of nyo},
journal = {Journal of Physics: Conference Series},
abstract = {Lampung Province is located on the island of Sumatera. For the
immigrants in Lampung, they have difficulty in
communicating with the indigenous people of Lampung. As an alternative, both
immigrants and the indigenous people of Lampung speak Indonesian.
This research aims to build a language model from Indonesian language and a
translation model from the Lampung language dialect of nyo, both models will
be combined in a Moses decoder.
This research focuses on observing the effect of adding mono corpus to the
experimental statistical machine translation of
Indonesian - Lampung dialect of nyo.
This research uses 3000 pair parallel corpus in Indonesia language and
Lampung language dialect of nyo as source language
and uses 3000 mono corpus sentences in Lampung language
dialect of nyo as target language. The results showed that the accuracy
value in bilingual evalution under-study score when using 1000 sentences,
2000 sentences, 3000 sentences mono corpus
show the accuracy value of the bilingual evaluation under-study,
respectively, namely 40.97 %, 41.80 % and 45.26 %.}
}
```
## License
Unknown
## Homepage
[https://drive.google.com/drive/folders/1oNpybrq5OJ_4Ne0HS5w9eHqnZlZASpmC?usp=sharing](https://drive.google.com/drive/folders/1oNpybrq5OJ_4Ne0HS5w9eHqnZlZASpmC?usp=sharing)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/parallel_id_nyo
|
[
"language:ind",
"language:abl",
"license:unknown",
"machine-translation",
"region:us"
] |
2023-09-26T10:17:11+00:00
|
{"language": ["ind", "abl"], "license": "unknown", "tags": ["machine-translation"]}
|
2023-09-26T11:32:49+00:00
|
[] |
[
"ind",
"abl"
] |
TAGS
#language-Indonesian #language-Lampung Nyo #license-unknown #machine-translation #region-us
|
# parallel_id_nyo
Dataset that contains Indonesian - Lampung language pairs.
The original data should contains 3000 rows, unfortunately,
not all of the instances in the original data is aligned perfectly.
Thus, this data only have the aligned ones, which only contain 1727 pairs.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# parallel_id_nyo\n\nDataset that contains Indonesian - Lampung language pairs.\n\n\n\nThe original data should contains 3000 rows, unfortunately,\n\nnot all of the instances in the original data is aligned perfectly.\n\nThus, this data only have the aligned ones, which only contain 1727 pairs.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-Lampung Nyo #license-unknown #machine-translation #region-us \n",
"# parallel_id_nyo\n\nDataset that contains Indonesian - Lampung language pairs.\n\n\n\nThe original data should contains 3000 rows, unfortunately,\n\nnot all of the instances in the original data is aligned perfectly.\n\nThus, this data only have the aligned ones, which only contain 1727 pairs.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
30,
70,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-Lampung Nyo #license-unknown #machine-translation #region-us \n# parallel_id_nyo\n\nDataset that contains Indonesian - Lampung language pairs.\n\n\n\nThe original data should contains 3000 rows, unfortunately,\n\nnot all of the instances in the original data is aligned perfectly.\n\nThus, this data only have the aligned ones, which only contain 1727 pairs.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
b097c588182c36af5479bd7970adde4247736c44
|
# bible_en_id
Bible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
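For concreteness, a minimal loading sketch; the default configuration is an assumption, and `trust_remote_code=True` may be needed for script-based repositories.

```python
# Minimal sketch: load the verse-aligned En-Id bible corpus.
from datasets import load_dataset

dset = load_dataset("SEACrowd/bible_en_id")
print(dset)                              # lists the available splits and sizes
print(next(iter(dset.values()))[0])      # one English-Indonesian verse pair
```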
## Citation
```
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898",
abstract = "Natural language generation (NLG) benchmarks provide an important avenue to measure progress and develop better NLG systems. Unfortunately, the lack of publicly available NLG benchmarks for low-resource languages poses a challenging barrier for building NLG systems that work well for languages with limited amounts of data. Here we introduce IndoNLG, the first benchmark to measure natural language generation (NLG) progress in three low-resource{---}yet widely spoken{---}languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems today. Concretely, IndoNLG covers six tasks: summarization, question answering, chit-chat, and three different pairs of machine translation (MT) tasks. We collate a clean pretraining corpus of Indonesian, Sundanese, and Javanese datasets, Indo4B-Plus, which is used to pretrain our models: IndoBART and IndoGPT. We show that IndoBART and IndoGPT achieve competitive performance on all tasks{---}despite using only one-fifth the parameters of a larger multilingual model, mBART-large (Liu et al., 2020). This finding emphasizes the importance of pretraining on closely related, localized languages to achieve more efficient learning and faster inference at very low-resource languages like Javanese and Sundanese.",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/bible_en_id
|
[
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] |
2023-09-26T10:17:14+00:00
|
{"language": ["ind", "eng"], "tags": ["machine-translation"]}
|
2023-09-26T11:32:53+00:00
|
[] |
[
"ind",
"eng"
] |
TAGS
#language-Indonesian #language-English #machine-translation #region-us
|
# bible_en_id
Bible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# bible_en_id\n\nBible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n",
"# bible_en_id\n\nBible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
20,
150,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n# bible_en_id\n\nBible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
ba1d3cbc4dd93acf67ede52abe0d9cdfb3daf441
|
# wikiann
The wikiann dataset contains NER tags with labels from O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The Indonesian subset is used.
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles
annotated with LOC (location), PER (person), and ORG (organisation)
tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of
Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus
| Language | WikiAnn | ISO 639-3 |
| --- | --- | --- |
| Indonesian | id | ind |
| Javanese | jv | jav |
| Minangkabau | min | min |
| Sundanese | su | sun |
| Acehnese | ace | ace |
| Malay | ms | mly |
| Banyumasan | map-bms | map-bms |
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
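To see the IOB2 label set listed above programmatically, a minimal sketch is shown below; the default configuration is an assumption, and the exact column names come from the repository's loading script.

```python
# Minimal sketch: inspect the NER schema and one example of SEACrowd/wikiann.
from datasets import load_dataset

dset = load_dataset("SEACrowd/wikiann")
split = next(iter(dset.values()))
print(split.features)    # column types, including the IOB2 tag vocabulary
print(split[0])          # one tokenised, NER-tagged sentence
```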
## Citation
```
@inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P17-1178",
doi = "10.18653/v1/P17-1178",
pages = "1946--1958",
abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework
for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able
to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to
an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of
new KB mining methods: generating {``}silver-standard{''} annotations by
transferring annotations from English to other languages through cross-lingual links and KB properties,
refining annotations through self-training and topic selection,
deriving language-specific morphology features from anchor links, and mining word translation pairs from
cross-lingual links. Both name tagging and linking results for 282 languages are promising
on Wikipedia data and non-Wikipedia data.",
}
@inproceedings{rahimi-etal-2019-massively,
title = "Massively Multilingual Transfer for {NER}",
author = "Rahimi, Afshin and
Li, Yuan and
Cohn, Trevor",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1015",
pages = "151--164",
}
```
## License
Apache-2.0 license
## Homepage
[https://github.com/afshinrahimi/mmner](https://github.com/afshinrahimi/mmner)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/wikiann
|
[
"language:ind",
"language:eng",
"language:jav",
"language:min",
"language:sun",
"language:ace",
"language:mly",
"named-entity-recognition",
"region:us"
] |
2023-09-26T10:17:18+00:00
|
{"language": ["ind", "eng", "jav", "min", "sun", "ace", "mly"], "tags": ["named-entity-recognition"]}
|
2023-09-26T11:32:59+00:00
|
[] |
[
"ind",
"eng",
"jav",
"min",
"sun",
"ace",
"mly"
] |
TAGS
#language-Indonesian #language-English #language-Javanese #language-Minangkabau #language-Sundanese #language-Achinese #language-Malay (individual language) #named-entity-recognition #region-us
|
# wikiann
The wikiann dataset contains NER tags with labels from O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The Indonesian subset is used.
WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles
annotated with LOC (location), PER (person), and ORG (organisation)
tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of
Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus
Language WikiAnn ISO 639-3
Indonesian id ind
Javanese jv jav
Minangkabau min min
Sundanese su sun
Acehnese ace ace
Malay ms mly
Banyumasan map-bms map-bms
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Apache-2.0 license
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# wikiann\n\nThe wikiann dataset contains NER tags with labels from O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The Indonesian subset is used.\n\nWikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles\n\n annotated with LOC (location), PER (person), and ORG (organisation)\n\n tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of\n\n Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus\n\n\n\nLanguage\tWikiAnn\tISO 639-3\n\nIndonesian\tid\tind\n\nJavanese\tjv\tjav\n\nMinangkabau\tmin\tmin\n\nSundanese\tsu\tsun\n\nAcehnese\tace\tace\n\nMalay\tms\tmly\n\nBanyumasan\tmap-bms\tmap-bms",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nApache-2.0 license",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-English #language-Javanese #language-Minangkabau #language-Sundanese #language-Achinese #language-Malay (individual language) #named-entity-recognition #region-us \n",
"# wikiann\n\nThe wikiann dataset contains NER tags with labels from O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The Indonesian subset is used.\n\nWikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles\n\n annotated with LOC (location), PER (person), and ORG (organisation)\n\n tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of\n\n Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus\n\n\n\nLanguage\tWikiAnn\tISO 639-3\n\nIndonesian\tid\tind\n\nJavanese\tjv\tjav\n\nMinangkabau\tmin\tmin\n\nSundanese\tsu\tsun\n\nAcehnese\tace\tace\n\nMalay\tms\tmly\n\nBanyumasan\tmap-bms\tmap-bms",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nApache-2.0 license",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
58,
202,
35,
7,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-English #language-Javanese #language-Minangkabau #language-Sundanese #language-Achinese #language-Malay (individual language) #named-entity-recognition #region-us \n# wikiann\n\nThe wikiann dataset contains NER tags with labels from O (0), B-PER (1), I-PER (2), B-ORG (3), I-ORG (4), B-LOC (5), I-LOC (6). The Indonesian subset is used.\n\nWikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles\n\n annotated with LOC (location), PER (person), and ORG (organisation)\n\n tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of\n\n Rahimi et al. (2019), and uses the following subsets from the original WikiANN corpus\n\n\n\nLanguage\tWikiAnn\tISO 639-3\n\nIndonesian\tid\tind\n\nJavanese\tjv\tjav\n\nMinangkabau\tmin\tmin\n\nSundanese\tsu\tsun\n\nAcehnese\tace\tace\n\nMalay\tms\tmly\n\nBanyumasan\tmap-bms\tmap-bms## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nApache-2.0 license## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
d0218b4ce51e265291b37432c7d26876bc95e164
|
# news_en_id
News En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news dataset is collected from multiple sources: Pan Asia Networking Localization (PANL), Bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
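A minimal loading sketch, with the usual caveats: the default configuration is an assumption, and newer `datasets` releases may require `trust_remote_code=True` for repositories that ship a loading script.

```python
# Minimal sketch: peek at the news-domain En-Id parallel corpus.
from datasets import load_dataset

dset = load_dataset("SEACrowd/news_en_id")
for name, split in dset.items():
    print(name, split.num_rows)          # train/validation/test split sizes
print(next(iter(dset.values()))[0])      # one English-Indonesian sentence pair
```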
## Citation
```
@inproceedings{guntara-etal-2020-benchmarking,
title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
author = "Guntara, Tri Wahyu and
Aji, Alham Fikri and
Prasojo, Radityo Eko",
booktitle = "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.bucc-1.6",
pages = "35--43",
language = "English",
ISBN = "979-10-95546-42-9",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/gunnxx/indonesian-mt-data](https://github.com/gunnxx/indonesian-mt-data)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/news_en_id
|
[
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] |
2023-09-26T10:17:22+00:00
|
{"language": ["ind", "eng"], "tags": ["machine-translation"]}
|
2023-09-26T11:33:03+00:00
|
[] |
[
"ind",
"eng"
] |
TAGS
#language-Indonesian #language-English #machine-translation #region-us
|
# news_en_id
News En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news dataset is collected from multiple sources: Pan Asia Networking Localization (PANL), Bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# news_en_id\n\nNews En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news dataset is collected from multiple sources: Pan Asia Networking Localization (PANL), Bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n",
"# news_en_id\n\nNews En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news dataset is collected from multiple sources: Pan Asia Networking Localization (PANL), Bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
20,
138,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n# news_en_id\n\nNews En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the news. The news dataset is collected from multiple sources: Pan Asia Networking Localization (PANL), Bilingual BBC news articles, Berita Jakarta, and GlobalVoices. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
9c3403dedc07197b35f03d26a7960cbc7c8c5335
|
# unimorph_id
The UniMorph project, Indonesian chapter.
Due to the sparsity of the original UniMorph parsing, the raw source is used instead.
The original parsing can be found at https://huggingface.co/datasets/universal_morphologies/blob/2.3.2/universal_morphologies.py
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
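As a rough sketch, the dataset can be inspected as follows; the default configuration and the column layout of each record are assumptions.

```python
# Minimal sketch: look at one UniMorph-style record (lemma, inflected form, features).
from datasets import load_dataset

dset = load_dataset("SEACrowd/unimorph_id")
print(dset)                              # splits and sizes
print(next(iter(dset.values()))[0])      # one morphological inflection record
```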
## Citation
```
@inproceedings{pimentel-ryskina-etal-2021-sigmorphon,
title = "SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages",
author = "Pimentel, Tiago and
Ryskina, Maria and
Mielke, Sabrina J. and
Wu, Shijie and
Chodroff, Eleanor and
Leonard, Brian and
Nicolai, Garrett and
Ghanggo Ate, Yustinus and
Khalifa, Salam and
Habash, Nizar and
El-Khaissi, Charbel and
Goldman, Omer and
Gasser, Michael and
Lane, William and
Coler, Matt and
Oncevay, Arturo and
Montoya Samame, Jaime Rafael and
Silva Villegas, Gema Celeste and
Ek, Adam and
Bernardy, Jean-Philippe and
Shcherbakov, Andrey and
Bayyr-ool, Aziyana and
Sheifer, Karina and
Ganieva, Sofya and
Plugaryov, Matvey and
Klyachko, Elena and
Salehi, Ali and
Krizhanovsky, Andrew and
Krizhanovsky, Natalia and
Vania, Clara and
Ivanova, Sardana and
Salchak, Aelita and
Straughn, Christopher and
Liu, Zoey and
Washington, Jonathan North and
Ataman, Duygu and
Kiera{\'s}, Witold and
Woli{\'n}ski, Marcin and
Suhardijanto, Totok and
Stoehr, Niklas and
Nuriah, Zahroh and
Ratan, Shyam and
Tyers, Francis M. and
Ponti, Edoardo M. and
Aiton, Grant and
Hatcher, Richard J. and
Prud'hommeaux, Emily and
Kumar, Ritesh and
Hulden, Mans and
Barta, Botond and
Lakatos, Dorina and
Szolnok, G{\'a}bor and
{\'A}cs, Judit and
Raj, Mohit and
Yarowsky, David and
Cotterell, Ryan and
Ambridge, Ben and
Vylomova, Ekaterina",
booktitle = "Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.sigmorphon-1.25",
doi = "10.18653/v1/2021.sigmorphon-1.25",
pages = "229--259"
}
```
## License
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
## Homepage
[https://github.com/unimorph/ind](https://github.com/unimorph/ind)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/unimorph_id
|
[
"language:ind",
"morphological-inflection",
"region:us"
] |
2023-09-26T10:17:31+00:00
|
{"language": ["ind"], "tags": ["morphological-inflection"]}
|
2023-09-26T11:33:11+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #morphological-inflection #region-us
|
# unimorph_id
The UniMorph project, Indonesian chapter.
Due to sparsity of UniMorph original parsing, raw source is used instead.
Original parsing can be found on URL
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# unimorph_id\n\nThe UniMorph project, Indonesian chapter.\n\nDue to sparsity of UniMorph original parsing, raw source is used instead.\n\nOriginal parsing can be found on URL",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #morphological-inflection #region-us \n",
"# unimorph_id\n\nThe UniMorph project, Indonesian chapter.\n\nDue to sparsity of UniMorph original parsing, raw source is used instead.\n\nOriginal parsing can be found on URL",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
18,
43,
35,
16,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #morphological-inflection #region-us \n# unimorph_id\n\nThe UniMorph project, Indonesian chapter.\n\nDue to sparsity of UniMorph original parsing, raw source is used instead.\n\nOriginal parsing can be found on URL## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
39c01bd904852b31849c89337217ff8aae19efb8
|
# indspeech_digit_cdsr
INDspeech_DIGIT_CDSR is the first Indonesian speech dataset for connected digit speech recognition (CDSR). The data was developed by TELKOMRisTI (R&D Division, PT Telekomunikasi Indonesia) in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan and Bandung Institute of Technology (ITB) under the Asia-Pacific Telecommunity (APT) project in 2004 [Sakti et al., 2004]. Although it was originally developed for a telecommunication system for hearing and speaking impaired people, it can be used for other applications, i.e., automatic call centers that recognize telephone numbers.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
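A minimal sketch for inspecting the corpus; the default configuration is an assumption, and any audio column is only decoded lazily when accessed (decoding requires an audio backend such as `soundfile`).

```python
# Minimal sketch: inspect the connected-digit speech corpus schema.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indspeech_digit_cdsr")
print(dset)                      # splits and sizes
split = next(iter(dset.values()))
print(split.features)            # schema; audio data is decoded lazily on access
```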
## Citation
```
@inproceedings{sakti-icslp-2004,
title = "Indonesian Speech Recognition for Hearing and Speaking Impaired People",
author = "Sakti, Sakriani and Hutagaol, Paulus and Arman, Arry Akhmad and Nakamura, Satoshi",
booktitle = "Proc. International Conference on Spoken Language Processing (INTERSPEECH - ICSLP)",
year = "2004",
pages = "1037--1040"
address = "Jeju Island, Korea"
}
```
## License
CC-BY-NC-SA-4.0
## Homepage
[https://github.com/s-sakti/data_indsp_digit_cdsr](https://github.com/s-sakti/data_indsp_digit_cdsr)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indspeech_digit_cdsr
|
[
"language:ind",
"speech-recognition",
"region:us"
] |
2023-09-26T10:17:36+00:00
|
{"language": ["ind"], "tags": ["speech-recognition"]}
|
2023-09-26T11:33:16+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #speech-recognition #region-us
|
# indspeech_digit_cdsr
INDspeech_DIGIT_CDSR is the first Indonesian speech dataset for connected digit speech recognition (CDSR). The data was developed by TELKOMRisTI (R&D Division, PT Telekomunikasi Indonesia) in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan and Bandung Institute of Technology (ITB) under the Asia-Pacific Telecommunity (APT) project in 2004 [Sakti et al., 2004]. Although it was originally developed for a telecommunication system for hearing and speaking impaired people, it can be used for other applications, i.e., automatic call centers that recognize telephone numbers.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC-BY-NC-SA-4.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indspeech_digit_cdsr\n\nINDspeech_DIGIT_CDSR is the first Indonesian speech dataset for connected digit speech recognition (CDSR). The data was developed by TELKOMRisTI (R&D Division, PT Telekomunikasi Indonesia) in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan and Bandung Institute of Technology (ITB) under the Asia-Pacific Telecommunity (APT) project in 2004 [Sakti et al., 2004]. Although it was originally developed for a telecommunication system for hearing and speaking impaired people, it can be used for other applications, i.e., automatic call centers that recognize telephone numbers.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-NC-SA-4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #speech-recognition #region-us \n",
"# indspeech_digit_cdsr\n\nINDspeech_DIGIT_CDSR is the first Indonesian speech dataset for connected digit speech recognition (CDSR). The data was developed by TELKOMRisTI (R&D Division, PT Telekomunikasi Indonesia) in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan and Bandung Institute of Technology (ITB) under the Asia-Pacific Telecommunity (APT) project in 2004 [Sakti et al., 2004]. Although it was originally developed for a telecommunication system for hearing and speaking impaired people, it can be used for other applications, i.e., automatic call centers that recognize telephone numbers.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-NC-SA-4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
18,
152,
35,
11,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #speech-recognition #region-us \n# indspeech_digit_cdsr\n\nINDspeech_DIGIT_CDSR is the first Indonesian speech dataset for connected digit speech recognition (CDSR). The data was developed by TELKOMRisTI (R&D Division, PT Telekomunikasi Indonesia) in collaboration with Advanced Telecommunication Research Institute International (ATR) Japan and Bandung Institute of Technology (ITB) under the Asia-Pacific Telecommunity (APT) project in 2004 [Sakti et al., 2004]. Although it was originally developed for a telecommunication system for hearing and speaking impaired people, it can be used for other applications, i.e., automatic call centers that recognize telephone numbers.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC-BY-NC-SA-4.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
3a88ae129d048e482cdf9b4209ef26ea0d85188a
|
# indo_religious_mt_en_id
Indonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, oftentimes citing biblical or quranic anecdotes. An interesting property of the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find that quite a few Indonesian translations of JW300 are missing the sentence-ending dot (.), even though the dot is present in their English counterparts. Some inconsistencies in the transliteration are also found; for example, praying is sometimes written as "salat" or "shalat", and repentance as "tobat" or "taubat".
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
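A minimal loading sketch, with the usual caveats (default configuration assumed; `trust_remote_code=True` may be required for script-based repositories):

```python
# Minimal sketch: peek at the religious-domain En-Id parallel corpus.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indo_religious_mt_en_id")
for name, split in dset.items():
    print(name, split.num_rows)          # splits and their sizes
print(next(iter(dset.values()))[0])      # one English-Indonesian sentence pair
```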
## Citation
```
@inproceedings{guntara-etal-2020-benchmarking,
title = "Benchmarking Multidomain {E}nglish-{I}ndonesian Machine Translation",
author = "Guntara, Tri Wahyu and
Aji, Alham Fikri and
Prasojo, Radityo Eko",
booktitle = "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.bucc-1.6",
pages = "35--43",
abstract = "In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT) which typically requires large training dataset proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them , outperform the baseline Statistical Machine Translation (SMT) models, and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data.",
language = "English",
ISBN = "979-10-95546-42-9",
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/gunnxx/indonesian-mt-data/tree/master/religious](https://github.com/gunnxx/indonesian-mt-data/tree/master/religious)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indo_religious_mt_en_id
|
[
"language:ind",
"language:eng",
"machine-translation",
"region:us"
] |
2023-09-26T10:17:41+00:00
|
{"language": ["ind", "eng"], "tags": ["machine-translation"]}
|
2023-09-26T11:33:20+00:00
|
[] |
[
"ind",
"eng"
] |
TAGS
#language-Indonesian #language-English #machine-translation #region-us
|
# indo_religious_mt_en_id
Indonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, often times citing biblical or quranic anecdotes. An interesting property in the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find quite a handful of Indonesian translations of JW300 are missing the end sentence dot (.), even though the end sentence dot is present in their English counterpart. Some inconsistencies in the transliteration are also found, for example praying is sometimes written as "salat" or "shalat", or repentance as "tobat" or "taubat".
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indo_religious_mt_en_id\n\nIndonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, often times citing biblical or quranic anecdotes. An interesting property in the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find quite a handful of Indonesian translations of JW300 are missing the end sentence dot (.), even though the end sentence dot is present in their English counterpart. Some inconsistencies in the transliteration are also found, for example praying is sometimes written as \"salat\" or \"shalat\", or repentance as \"tobat\" or \"taubat\".",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n",
"# indo_religious_mt_en_id\n\nIndonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, often times citing biblical or quranic anecdotes. An interesting property in the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find quite a handful of Indonesian translations of JW300 are missing the end sentence dot (.), even though the end sentence dot is present in their English counterpart. Some inconsistencies in the transliteration are also found, for example praying is sometimes written as \"salat\" or \"shalat\", or repentance as \"tobat\" or \"taubat\".",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
20,
210,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-English #machine-translation #region-us \n# indo_religious_mt_en_id\n\nIndonesian Religious Domain MT En-Id consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, often times citing biblical or quranic anecdotes. An interesting property in the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find quite a handful of Indonesian translations of JW300 are missing the end sentence dot (.), even though the end sentence dot is present in their English counterpart. Some inconsistencies in the transliteration are also found, for example praying is sometimes written as \"salat\" or \"shalat\", or repentance as \"tobat\" or \"taubat\".## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
49003b4e5130b0f21a45f270f1ed9723cd7bfa1d
|
# nusaparagraph_topic
Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.
We introduce a novel, high-quality, human-curated corpus, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extends the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).
For the topic modeling task, we cover 8 topics, i.e., food \& beverages, sports, leisure, religion, culture \& heritage, a slice of life, technology, and business.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
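The 8 topic labels listed above can also be inspected directly from the loaded dataset, as in the sketch below; the default configuration is an assumption (the benchmark is organised per language, so a specific subset name may be required).

```python
# Minimal sketch: inspect the topic label set and one paragraph example.
from datasets import load_dataset

dset = load_dataset("SEACrowd/nusaparagraph_topic")
split = next(iter(dset.values()))
print(split.features)    # schema, including the topic label names
print(split[0])          # one paragraph with its topic annotation
```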
## Citation
```
@unpublished{anonymous2023nusawrites:,
title={NusaWrites: Constructing High-Quality Corpora for Underrepresented and Extremely Low-Resource Languages},
author={Anonymous},
journal={OpenReview Preprint},
year={2023},
note={anonymous preprint under review}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/nusa-writes](https://github.com/IndoNLP/nusa-writes)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nusaparagraph_topic
|
[
"language:btk",
"language:bew",
"language:bug",
"language:jav",
"language:mad",
"language:mak",
"language:min",
"language:mui",
"language:rej",
"language:sun",
"topic-modeling",
"region:us"
] |
2023-09-26T10:17:48+00:00
|
{"language": ["btk", "bew", "bug", "jav", "mad", "mak", "min", "mui", "rej", "sun"], "tags": ["topic-modeling"]}
|
2023-09-26T11:33:27+00:00
|
[] |
[
"btk",
"bew",
"bug",
"jav",
"mad",
"mak",
"min",
"mui",
"rej",
"sun"
] |
TAGS
#language-btk #language-Betawi #language-Buginese #language-Javanese #language-Madurese #language-Makasar #language-Minangkabau #language-Musi #language-Rejang #language-Sundanese #topic-modeling #region-us
|
# nusaparagraph_topic
Democratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.
We introduce a novel high quality human curated corpora, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extend the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).
For the topic modeling task, we cover 8 topics, i.e., food \& beverages, sports, leisure, religion, culture \& heritage, a slice of life, technology, and business.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# nusaparagraph_topic\n\nDemocratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.\n\nWe introduce a novel high quality human curated corpora, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extend the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).\n\nFor the topic modeling task, we cover 8 topics, i.e., food \\& beverages, sports, leisure, religion, culture \\& heritage, a slice of life, technology, and business.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-btk #language-Betawi #language-Buginese #language-Javanese #language-Madurese #language-Makasar #language-Minangkabau #language-Musi #language-Rejang #language-Sundanese #topic-modeling #region-us \n",
"# nusaparagraph_topic\n\nDemocratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.\n\nWe introduce a novel high quality human curated corpora, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extend the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).\n\nFor the topic modeling task, we cover 8 topics, i.e., food \\& beverages, sports, leisure, religion, culture \\& heritage, a slice of life, technology, and business.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
66,
382,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-btk #language-Betawi #language-Buginese #language-Javanese #language-Madurese #language-Makasar #language-Minangkabau #language-Musi #language-Rejang #language-Sundanese #topic-modeling #region-us \n# nusaparagraph_topic\n\nDemocratizing access to natural language processing (NLP) technology is crucial, especially for underrepresented and extremely low-resource languages. Previous research has focused on developing labeled and unlabeled corpora for these languages through online scraping and document translation. While these methods have proven effective and cost-efficient, we have identified limitations in the resulting corpora, including a lack of lexical diversity and cultural relevance to local communities. To address this gap, we conduct a case study on Indonesian local languages. We compare the effectiveness of online scraping, human translation, and paragraph writing by native speakers in constructing datasets. Our findings demonstrate that datasets generated through paragraph writing by native speakers exhibit superior quality in terms of lexical diversity and cultural content. In addition, we present the NusaWrites benchmark, encompassing 12 underrepresented and extremely low-resource languages spoken by millions of individuals in Indonesia. Our empirical experiment results using existing multilingual large language models conclude the need to extend these models to more underrepresented languages.\n\nWe introduce a novel high quality human curated corpora, i.e., NusaMenulis, which covers 12 languages spoken in Indonesia. The resource extend the coverage of languages to 5 new languages, i.e., Ambon (abs), Bima (bhp), Makassarese (mak), Palembang / Musi (mui), and Rejang (rej).\n\nFor the topic modeling task, we cover 8 topics, i.e., food \\& beverages, sports, leisure, religion, culture \\& heritage, a slice of life, technology, and business.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL"
] |
753eb15978a7c0518b611933e21540d98cc2a0c5
|
# id_clickbait
The CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news
publishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,
Tribunnews, and Wowkeren. This dataset mainly comprises two parts: (i) 46,119 raw articles, and (ii)
15,000 clickbait-annotated sample headlines. Annotation was conducted with 3 annotators examining each headline.
Judgments were based only on the headline, and the majority vote is considered the ground truth. In the annotated
sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait headlines.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
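A minimal loading sketch; the default configuration is an assumption (the raw articles and the annotated headlines may live in separate configurations, so check the repository if the fields below look unexpected).

```python
# Minimal sketch: peek at one annotated clickbait/non-clickbait headline.
from datasets import load_dataset

dset = load_dataset("SEACrowd/id_clickbait")
split = next(iter(dset.values()))
print(split.features)    # schema, including the clickbait label
print(split[0])          # one headline with its annotation
```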
## Citation
```
@article{WILLIAM2020106231,
title = "CLICK-ID: A novel dataset for Indonesian clickbait headlines",
journal = "Data in Brief",
volume = "32",
pages = "106231",
year = "2020",
issn = "2352-3409",
doi = "https://doi.org/10.1016/j.dib.2020.106231",
url = "http://www.sciencedirect.com/science/article/pii/S2352340920311252",
author = "Andika William and Yunita Sari",
keywords = "Indonesian, Natural Language Processing, News articles, Clickbait, Text-classification",
abstract = "News analysis is a popular task in Natural Language Processing (NLP). In particular, the problem of clickbait in news analysis has gained attention in recent years [1, 2]. However, the majority of the tasks has been focused on English news, in which there is already a rich representative resource. For other languages, such as Indonesian, there is still a lack of resource for clickbait tasks. Therefore, we introduce the CLICK-ID dataset of Indonesian news headlines extracted from 12 Indonesian online news publishers. It is comprised of 15,000 annotated headlines with clickbait and non-clickbait labels. Using the CLICK-ID dataset, we then developed an Indonesian clickbait classification model achieving favourable performance. We believe that this corpus will be useful for replicable experiments in clickbait detection or other experiments in NLP areas."
}
```
## License
Creative Commons Attribution 4.0 International
## Homepage
[https://www.sciencedirect.com/science/article/pii/S2352340920311252#!](https://www.sciencedirect.com/science/article/pii/S2352340920311252#!)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/id_clickbait
|
[
"language:ind",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:17:57+00:00
|
{"language": ["ind"], "tags": ["sentiment-analysis"]}
|
2023-09-26T11:33:36+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #sentiment-analysis #region-us
|
# id_clickbait
The CLICK-ID dataset is a collection of Indonesian news headlines collected from 12 local online news
publishers: detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,
Tribunnews, and Wowkeren. The dataset comprises two main parts: (i) 46,119 raw article data, and (ii)
15,000 clickbait-annotated sample headlines. Annotation was conducted with three annotators examining each headline.
Judgments were based only on the headline, and the majority vote is taken as the ground truth. In the annotated
sample, our annotation shows 6,290 clickbait and 8,710 non-clickbait headlines.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# id_clickbait\n\nThe CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news\n\npublishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,\n\nTribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii)\n\n15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotator examining each headline.\n\nJudgment were based only on the headline. The majority then is considered as the ground truth. In the annotated\n\nsample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #sentiment-analysis #region-us \n",
"# id_clickbait\n\nThe CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news\n\npublishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,\n\nTribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii)\n\n15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotator examining each headline.\n\nJudgment were based only on the headline. The majority then is considered as the ground truth. In the annotated\n\nsample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
176,
35,
7,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #sentiment-analysis #region-us \n# id_clickbait\n\nThe CLICK-ID dataset is a collection of Indonesian news headlines that was collected from 12 local online news\n\npublishers; detikNews, Fimela, Kapanlagi, Kompas, Liputan6, Okezone, Posmetro-Medan, Republika, Sindonews, Tempo,\n\nTribunnews, and Wowkeren. This dataset is comprised of mainly two parts; (i) 46,119 raw article data, and (ii)\n\n15,000 clickbait annotated sample headlines. Annotation was conducted with 3 annotator examining each headline.\n\nJudgment were based only on the headline. The majority then is considered as the ground truth. In the annotated\n\nsample, our annotation shows 6,290 clickbait and 8,710 non-clickbait.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
dafe6a4c0e94236e5b98821333042f339c9cc93b
|
# facqa
FacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.
Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the
corresponding short passage. There are six categories of questions: date, location, name,
organization, person, and quantitative.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
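A minimal loading sketch is shown below (repo ID `SEACrowd/facqa` from this card's catalog entry; the exact column names for the question, passage, and answer phrase are assumptions):
```python
# Sketch: load FacQA and inspect its splits and features (question/passage/answer-phrase layout assumed).
from datasets import load_dataset

facqa = load_dataset("SEACrowd/facqa")
print(facqa)  # lists available splits and their feature schema
```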
## Citation
```
@inproceedings{purwarianti2007machine,
title={A Machine Learning Approach for Indonesian Question Answering System},
author={Ayu Purwarianti, Masatoshi Tsuchiya, and Seiichi Nakagawa},
booktitle={Proceedings of Artificial Intelligence and Applications },
pages={573--578},
year={2007}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/facqa
|
[
"language:ind",
"question-answering",
"region:us"
] |
2023-09-26T10:18:01+00:00
|
{"language": ["ind"], "tags": ["question-answering"]}
|
2023-09-26T11:33:40+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #question-answering #region-us
|
# facqa
FacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.
Each row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the
corresponding short passage. There are six categories of questions: date, location, name,
organization, person, and quantitative.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC-BY-SA 4.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# facqa\n\nFacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.\n\nEach row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the\n\ncorresponding short passage. There are six categories of questions: date, location, name,\n\norganization, person, and quantitative.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #question-answering #region-us \n",
"# facqa\n\nFacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.\n\nEach row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the\n\ncorresponding short passage. There are six categories of questions: date, location, name,\n\norganization, person, and quantitative.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
89,
35,
8,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #question-answering #region-us \n# facqa\n\nFacQA: The goal of the FacQA dataset is to find the answer to a question from a provided short passage from a news article.\n\nEach row in the FacQA dataset consists of a question, a short passage, and a label phrase, which can be found inside the\n\ncorresponding short passage. There are six categories of questions: date, location, name,\n\norganization, person, and quantitative.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC-BY-SA 4.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
70a736e55ab6c805e5bdc2460848a95d80e9a8b9
|
# indolem_tweet_ordering
IndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.
This task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (containing 3 to 5 tweets) and assess the predicted ordering in terms of rank correlation (ρ) with the original ordering. The experiment is based on 5-fold cross-validation.
Train: 4327 threads
Development: 760 threads
Test: 1521 threads
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
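As an illustrative sketch (repo ID from this card's catalog entry; how the 5 folds are exposed as splits is an assumption), the data can also be loaded with HuggingFace `datasets`:
```python
# Sketch: load the tweet-ordering data; each example should contain a shuffled thread plus its gold order.
from datasets import load_dataset

tweet_ordering = load_dataset("SEACrowd/indolem_tweet_ordering")
print(tweet_ordering)  # check how the cross-validation folds are exposed as splits
```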
## Citation
```
@article{DBLP:journals/corr/abs-2011-00677,
author = {Fajri Koto and
Afshin Rahimi and
Jey Han Lau and
Timothy Baldwin},
title = {IndoLEM and IndoBERT: {A} Benchmark Dataset and Pre-trained Language
Model for Indonesian {NLP}},
journal = {CoRR},
volume = {abs/2011.00677},
year = {2020},
url = {https://arxiv.org/abs/2011.00677},
eprinttype = {arXiv},
eprint = {2011.00677},
timestamp = {Fri, 06 Nov 2020 15:32:47 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2011-00677.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indolem_tweet_ordering
|
[
"language:ind",
"license:cc-by-4.0",
"sentence-ordering",
"arxiv:2011.00677",
"region:us"
] |
2023-09-26T10:18:05+00:00
|
{"language": ["ind"], "license": "cc-by-4.0", "tags": ["sentence-ordering"]}
|
2023-09-26T11:34:03+00:00
|
[
"2011.00677"
] |
[
"ind"
] |
TAGS
#language-Indonesian #license-cc-by-4.0 #sentence-ordering #arxiv-2011.00677 #region-us
|
# indolem_tweet_ordering
IndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.
This task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (containing 3 to 5 tweets) and assess the predicted ordering in terms of rank correlation (ρ) with the original ordering. The experiment is based on 5-fold cross-validation.
Train: 4327 threads
Development: 760 threads
Test: 1521 threads
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution 4.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indolem_tweet_ordering\n\nIndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises of seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.\n\nThis task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (containing 3 to 5 tweets), and assessing the predicted ordering in terms of rank correlation (p) with the original. The experiment is based on 5-fold cross validation.\n\n\n\nTrain: 4327 threads\n\nDevelopment: 760 threads\n\nTest: 1521 threads",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-cc-by-4.0 #sentence-ordering #arxiv-2011.00677 #region-us \n",
"# indolem_tweet_ordering\n\nIndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises of seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.\n\nThis task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (containing 3 to 5 tweets), and assessing the predicted ordering in terms of rank correlation (p) with the original. The experiment is based on 5-fold cross validation.\n\n\n\nTrain: 4327 threads\n\nDevelopment: 760 threads\n\nTest: 1521 threads",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
34,
161,
35,
6,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-cc-by-4.0 #sentence-ordering #arxiv-2011.00677 #region-us \n# indolem_tweet_ordering\n\nIndoLEM (Indonesian Language Evaluation Montage) is a comprehensive Indonesian benchmark that comprises of seven tasks for the Indonesian language. This benchmark is categorized into three pillars of NLP tasks: morpho-syntax, semantics, and discourse.\n\nThis task is based on the sentence ordering task of Barzilay and Lapata (2008) to assess text relatedness. We construct the data by shuffling Twitter threads (containing 3 to 5 tweets), and assessing the predicted ordering in terms of rank correlation (p) with the original. The experiment is based on 5-fold cross validation.\n\n\n\nTrain: 4327 threads\n\nDevelopment: 760 threads\n\nTest: 1521 threads## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution 4.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
6cce9388ba73129b171b67ee977e2e0fc35944cd
|
# INDspeech_NEWS_TTS
INDspeech_NEWS_TTS is a speech dataset for developing an Indonesian text-to-speech synthesis system. The data was developed by Advanced Telecommunication Research Institute International (ATR) Japan under the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
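A hedged loading sketch follows (repo ID from this card's catalog entry; the audio/transcript column layout is an assumption, and decoding audio typically needs an extra backend such as `soundfile`):
```python
# Sketch: load the TTS corpus and inspect its splits; audio/transcript column names are not guaranteed here.
from datasets import load_dataset

tts = load_dataset("SEACrowd/indspeech_news_tts")
print(tts)
```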
## Citation
```
@inproceedings{sakti-tts-cocosda-2008,
title = "Development of HMM-based Indonesian Speech Synthesis",
author = "Sakti, Sakriani and Maia, Ranniery and Sakai, Shinsuke and Nakamura, Satoshi",
booktitle = "Proc. Oriental COCOSDA",
year = "2008",
pages = "215--220"
address = "Kyoto, Japan"
}
@inproceedings{sakti-tts-malindo-2010,
title = "Quality and Intelligibility Assessment of Indonesian HMM-Based Speech Synthesis System",
author = "Sakti, Sakriani and Sakai, Shinsuke and Isotani, Ryosuke and Kawai, Hisashi and Nakamura, Satoshi",
booktitle = "Proc. MALINDO",
year = "2010",
pages = "51--57"
address = "Jakarta, Indonesia"
}
@article{sakti-s2st-csl-2013,
title = "{A-STAR}: Toward Tranlating Asian Spoken Languages",
author = "Sakti, Sakriani and Paul, Michael and Finch, Andrew and Sakai, Shinsuke and Thang, Tat Vu, and Kimura, Noriyuki
and Hori, Chiori and Sumita, Eiichiro and Nakamura, Satoshi and Park, Jun and Wutiwiwatchai, Chai and Xu, Bo and Riza, Hammam
and Arora, Karunesh and Luong, Chi Mai and Li, Haizhou",
journal = "Special issue on Speech-to-Speech Translation, Computer Speech and Language Journal",
volume = "27",
number ="2",
pages = "509--527",
year = "2013",
publisher = "Elsevier"
}
```
## License
CC-BY-NC-SA 4.0
## Homepage
[https://github.com/s-sakti/data_indsp_news_tts](https://github.com/s-sakti/data_indsp_news_tts)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indspeech_news_tts
|
[
"language:ind",
"text-to-speech",
"region:us"
] |
2023-09-26T10:18:05+00:00
|
{"language": ["ind"], "tags": ["text-to-speech"]}
|
2023-09-26T11:34:11+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #text-to-speech #region-us
|
# INDspeech_NEWS_TTS
INDspeech_NEWS_TTS is a speech dataset for developing an Indonesian text-to-speech synthesis system. The data was developed by Advanced Telecommunication Research Institute International (ATR) Japan under the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC-BY-NC-SA 4.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# INDspeech_NEWS_TTS\n\nINDspeech_NEWS_TTS is a speech dataset for developing an Indonesian text-to-speech synthesis system. The data was developed by Advanced Telecommunication Research Institute International (ATR) Japan under the the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-NC-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #text-to-speech #region-us \n",
"# INDspeech_NEWS_TTS\n\nINDspeech_NEWS_TTS is a speech dataset for developing an Indonesian text-to-speech synthesis system. The data was developed by Advanced Telecommunication Research Institute International (ATR) Japan under the the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-NC-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
18,
78,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #text-to-speech #region-us \n# INDspeech_NEWS_TTS\n\nINDspeech_NEWS_TTS is a speech dataset for developing an Indonesian text-to-speech synthesis system. The data was developed by Advanced Telecommunication Research Institute International (ATR) Japan under the the Asian speech translation advanced research (A-STAR) project [Sakti et al., 2013].## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC-BY-NC-SA 4.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
e8c93d801b00fc6ace7ce3326495881bdbb8c03f
|
# nusax_mt
NusaX is a high-quality multilingual parallel corpus that covers 12 languages: Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages plus Indonesian and English. The data is presented in CSV format with 12 columns, one for each language.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
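The sketch below illustrates loading the corpus and listing its per-language columns (repo ID from this card's catalog entry; column names are assumptions):
```python
# Sketch: load NusaX-MT; each row should expose one column per language (12 in total, per this card).
from datasets import load_dataset

nusax = load_dataset("SEACrowd/nusax_mt")
for split_name, split in nusax.items():
    print(split_name, list(split[0].keys()))  # per-language column names of the first row
    break
```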
## Citation
```
@misc{winata2022nusax,
title={NusaX: Multilingual Parallel Sentiment Dataset for 10 Indonesian Local Languages},
author={Winata, Genta Indra and Aji, Alham Fikri and Cahyawijaya,
Samuel and Mahendra, Rahmad and Koto, Fajri and Romadhony,
Ade and Kurniawan, Kemal and Moeljadi, David and Prasojo,
Radityo Eko and Fung, Pascale and Baldwin, Timothy and Lau,
Jey Han and Sennrich, Rico and Ruder, Sebastian},
year={2022},
eprint={2205.15960},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/nusax/tree/main/datasets/mt](https://github.com/IndoNLP/nusax/tree/main/datasets/mt)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nusax_mt
|
[
"language:ind",
"language:ace",
"language:ban",
"language:bjn",
"language:bbc",
"language:bug",
"language:jav",
"language:mad",
"language:min",
"language:nij",
"language:sun",
"language:eng",
"machine-translation",
"arxiv:2205.15960",
"region:us"
] |
2023-09-26T10:18:05+00:00
|
{"language": ["ind", "ace", "ban", "bjn", "bbc", "bug", "jav", "mad", "min", "nij", "sun", "eng"], "tags": ["machine-translation"]}
|
2023-09-26T11:34:19+00:00
|
[
"2205.15960"
] |
[
"ind",
"ace",
"ban",
"bjn",
"bbc",
"bug",
"jav",
"mad",
"min",
"nij",
"sun",
"eng"
] |
TAGS
#language-Indonesian #language-Achinese #language-Balinese #language-Banjar #language-Batak Toba #language-Buginese #language-Javanese #language-Madurese #language-Minangkabau #language-Ngaju #language-Sundanese #language-English #machine-translation #arxiv-2205.15960 #region-us
|
# nusax_mt
NusaX is a high-quality multilingual parallel corpus that covers 12 languages: Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.
NusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages plus Indonesian and English. The data is presented in CSV format with 12 columns, one for each language.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# nusax_mt\n\nNusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.\n\n\n\nNusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-Achinese #language-Balinese #language-Banjar #language-Batak Toba #language-Buginese #language-Javanese #language-Madurese #language-Minangkabau #language-Ngaju #language-Sundanese #language-English #machine-translation #arxiv-2205.15960 #region-us \n",
"# nusax_mt\n\nNusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.\n\n\n\nNusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
87,
131,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-Achinese #language-Balinese #language-Banjar #language-Batak Toba #language-Buginese #language-Javanese #language-Madurese #language-Minangkabau #language-Ngaju #language-Sundanese #language-English #machine-translation #arxiv-2205.15960 #region-us \n# nusax_mt\n\nNusaX is a high-quality multilingual parallel corpus that covers 12 languages, Indonesian, English, and 10 Indonesian local languages, namely Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Ngaju, Sundanese, and Toba Batak.\n\n\n\nNusaX-MT is a parallel corpus for training and benchmarking machine translation models across 10 Indonesian local languages + Indonesian and English. The data is presented in csv format with 12 columns, one column for each language.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
c7cbdb4005649662032420d9241cb19137ec40e9
|
# xcopa
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around
the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the
creation of XCOPA and the implementation of the baselines are available in the paper.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
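An illustrative loading sketch (repo ID from this card's catalog entry; the loader may additionally require a config/subset name):
```python
# Sketch: load the Indonesian XCOPA data; pass a config name if the repo defines several subsets.
from datasets import load_dataset

xcopa = load_dataset("SEACrowd/xcopa")
print(xcopa)
```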
## Citation
```
@inproceedings{ponti2020xcopa,
title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
  author={Edoardo M. Ponti, Goran Glava\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\'{c} and Anna Korhonen},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2020},
url={https://ducdauge.github.io/files/xcopa.pdf}
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
## License
Unknown
## Homepage
[https://github.com/cambridgeltl/xcopa](https://github.com/cambridgeltl/xcopa)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/xcopa
|
[
"language:ind",
"license:unknown",
"question-answering",
"region:us"
] |
2023-09-26T10:18:06+00:00
|
{"language": ["ind"], "license": "unknown", "tags": ["question-answering"]}
|
2023-09-26T11:34:53+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-unknown #question-answering #region-us
|
# xcopa
XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning
The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across
languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around
the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the
creation of XCOPA and the implementation of the baselines are available in the paper.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# xcopa\n\nXCOPA: A Multilingual Dataset for Causal Commonsense Reasoning\n\nThe Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across\n\nlanguages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around\n\nthe globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the\n\ncreation of XCOPA and the implementation of the baselines are available in the paper.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-unknown #question-answering #region-us \n",
"# xcopa\n\nXCOPA: A Multilingual Dataset for Causal Commonsense Reasoning\n\nThe Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across\n\nlanguages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around\n\nthe globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the\n\ncreation of XCOPA and the implementation of the baselines are available in the paper.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
24,
145,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-unknown #question-answering #region-us \n# xcopa\n\nXCOPA: A Multilingual Dataset for Causal Commonsense Reasoning\n\nThe Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across\n\nlanguages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around\n\nthe globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the\n\ncreation of XCOPA and the implementation of the baselines are available in the paper.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
853f899dad8718f4ef90f45905118113201d1212
|
# nergrit
Nergrit Corpus is a collection of Indonesian datasets for Named Entity Recognition (NER), Statement Extraction,
and Sentiment Analysis, developed by PT Gria Inovasi Teknologi (GRIT).
The Named Entity Recognition part contains 18 entity types, as follows:
'CRD': Cardinal
'DAT': Date
'EVT': Event
'FAC': Facility
'GPE': Geopolitical Entity
'LAW': Law Entity (such as Undang-Undang)
'LOC': Location
'MON': Money
'NOR': Political Organization
'ORD': Ordinal
'ORG': Organization
'PER': Person
'PRC': Percent
'PRD': Product
'QTY': Quantity
'REG': Religion
'TIM': Time
'WOA': Work of Art
'LAN': Language
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
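A minimal sketch for loading and inspecting the corpus (repo ID from this card's catalog entry; token/tag column names are assumptions):
```python
# Sketch: load the Nergrit NER corpus and peek at one token/tag sequence.
from datasets import load_dataset

nergrit = load_dataset("SEACrowd/nergrit")
for split_name, split in nergrit.items():
    print(split_name, split[0])
    break
```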
## Citation
```
@misc{Fahmi_NERGRIT_CORPUS_2019,
author = {Fahmi, Husni and Wibisono, Yudi and Kusumawati, Riyanti},
title = {{NERGRIT CORPUS}},
url = {https://github.com/grit-id/nergrit-corpus},
year = {2019}
}
```
## License
MIT
## Homepage
[https://github.com/grit-id/nergrit-corpus](https://github.com/grit-id/nergrit-corpus)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nergrit
|
[
"language:ind",
"license:mit",
"named-entity-recognition",
"region:us"
] |
2023-09-26T10:18:07+00:00
|
{"language": ["ind"], "license": "mit", "tags": ["named-entity-recognition"]}
|
2023-09-26T11:35:09+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-mit #named-entity-recognition #region-us
|
# nergrit
Nergrit Corpus is a collection of Indonesian datasets for Named Entity Recognition (NER), Statement Extraction,
and Sentiment Analysis, developed by PT Gria Inovasi Teknologi (GRIT).
The Named Entity Recognition part contains 18 entity types, as follows:
'CRD': Cardinal
'DAT': Date
'EVT': Event
'FAC': Facility
'GPE': Geopolitical Entity
'LAW': Law Entity (such as Undang-Undang)
'LOC': Location
'MON': Money
'NOR': Political Organization
'ORD': Ordinal
'ORG': Organization
'PER': Person
'PRC': Percent
'PRD': Product
'QTY': Quantity
'REG': Religion
'TIM': Time
'WOA': Work of Art
'LAN': Language
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
MIT
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# nergrit\n\nNergrit Corpus is a dataset collection of Indonesian Named Entity Recognition (NER), Statement Extraction,\n\nand Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).\n\nThe Named Entity Recognition contains 18 entities as follow:\n\n 'CRD': Cardinal\n\n 'DAT': Date\n\n 'EVT': Event\n\n 'FAC': Facility\n\n 'GPE': Geopolitical Entity\n\n 'LAW': Law Entity (such as Undang-Undang)\n\n 'LOC': Location\n\n 'MON': Money\n\n 'NOR': Political Organization\n\n 'ORD': Ordinal\n\n 'ORG': Organization\n\n 'PER': Person\n\n 'PRC': Percent\n\n 'PRD': Product\n\n 'QTY': Quantity\n\n 'REG': Religion\n\n 'TIM': Time\n\n 'WOA': Work of Art\n\n 'LAN': Language",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nMIT",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-mit #named-entity-recognition #region-us \n",
"# nergrit\n\nNergrit Corpus is a dataset collection of Indonesian Named Entity Recognition (NER), Statement Extraction,\n\nand Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).\n\nThe Named Entity Recognition contains 18 entities as follow:\n\n 'CRD': Cardinal\n\n 'DAT': Date\n\n 'EVT': Event\n\n 'FAC': Facility\n\n 'GPE': Geopolitical Entity\n\n 'LAW': Law Entity (such as Undang-Undang)\n\n 'LOC': Location\n\n 'MON': Money\n\n 'NOR': Political Organization\n\n 'ORD': Ordinal\n\n 'ORG': Organization\n\n 'PER': Person\n\n 'PRC': Percent\n\n 'PRD': Product\n\n 'QTY': Quantity\n\n 'REG': Religion\n\n 'TIM': Time\n\n 'WOA': Work of Art\n\n 'LAN': Language",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nMIT",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
26,
189,
35,
3,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-mit #named-entity-recognition #region-us \n# nergrit\n\nNergrit Corpus is a dataset collection of Indonesian Named Entity Recognition (NER), Statement Extraction,\n\nand Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT).\n\nThe Named Entity Recognition contains 18 entities as follow:\n\n 'CRD': Cardinal\n\n 'DAT': Date\n\n 'EVT': Event\n\n 'FAC': Facility\n\n 'GPE': Geopolitical Entity\n\n 'LAW': Law Entity (such as Undang-Undang)\n\n 'LOC': Location\n\n 'MON': Money\n\n 'NOR': Political Organization\n\n 'ORD': Ordinal\n\n 'ORG': Organization\n\n 'PER': Person\n\n 'PRC': Percent\n\n 'PRD': Product\n\n 'QTY': Quantity\n\n 'REG': Religion\n\n 'TIM': Time\n\n 'WOA': Work of Art\n\n 'LAN': Language## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nMIT## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
c8c9db92f9dba4b8d04309d3727f79cce3201532
|
# karonese_sentiment
Karonese sentiment was crawled from Twitter between 1 January 2021 and 31 October 2021.
The first crawling process used several keywords related to the Karonese, such as
"deleng sinabung, Sinabung mountain", "mejuah-juah, greeting welcome", "Gundaling",
and so on. However, due to the insufficient number of tweets obtained using such
keywords, a second crawling process was done based on several hashtags, such as
#kalakkaro, #antonyginting, and #lyodra.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
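For illustration, a minimal loading sketch (repo ID from this card's catalog entry; split and label column names are assumptions):
```python
# Sketch: load the Karonese tweets with their sentiment labels and inspect the schema.
from datasets import load_dataset

karonese = load_dataset("SEACrowd/karonese_sentiment")
print(karonese)
```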
## Citation
```
@article{karo2022sentiment,
title={Sentiment Analysis in Karonese Tweet using Machine Learning},
author={Karo, Ichwanul Muslim Karo and Fudzee, Mohd Farhan Md and Kasim, Shahreen and Ramli, Azizul Azhar},
journal={Indonesian Journal of Electrical Engineering and Informatics (IJEEI)},
volume={10},
number={1},
pages={219--231},
year={2022}
}
```
## License
Unknown
## Homepage
[http://section.iaesonline.com/index.php/IJEEI/article/view/3565](http://section.iaesonline.com/index.php/IJEEI/article/view/3565)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/karonese_sentiment
|
[
"language:btx",
"license:unknown",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:18:07+00:00
|
{"language": ["btx"], "license": "unknown", "tags": ["sentiment-analysis"]}
|
2023-09-26T11:35:18+00:00
|
[] |
[
"btx"
] |
TAGS
#language-Batak Karo #license-unknown #sentiment-analysis #region-us
|
# karonese_sentiment
Karonese sentiment was crawled from Twitter between 1 January 2021 and 31 October 2021.
The first crawling process used several keywords related to the Karonese, such as
"deleng sinabung, Sinabung mountain", "mejuah-juah, greeting welcome", "Gundaling",
and so on. However, due to the insufficient number of tweets obtained using such
keywords, a second crawling process was done based on several hashtags, such as
#kalakkaro, #antonyginting, and #lyodra.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# karonese_sentiment\n\nKaronese sentiment was crawled from Twitter between 1 January 2021 and 31 October 2021.\n\nThe first crawling process used several keywords related to the Karonese, such as\n\n\"deleng sinabung, Sinabung mountain\", \"mejuah-juah, greeting welcome\", \"Gundaling\",\n\nand so on. However, due to the insufficient number of tweets obtained using such\n\nkeywords, a second crawling process was done based on several hashtags, such as",
"# #antonyginting, and #lyodra.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Batak Karo #license-unknown #sentiment-analysis #region-us \n",
"# karonese_sentiment\n\nKaronese sentiment was crawled from Twitter between 1 January 2021 and 31 October 2021.\n\nThe first crawling process used several keywords related to the Karonese, such as\n\n\"deleng sinabung, Sinabung mountain\", \"mejuah-juah, greeting welcome\", \"Gundaling\",\n\nand so on. However, due to the insufficient number of tweets obtained using such\n\nkeywords, a second crawling process was done based on several hashtags, such as",
"# #antonyginting, and #lyodra.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
25,
105,
13,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Batak Karo #license-unknown #sentiment-analysis #region-us \n# karonese_sentiment\n\nKaronese sentiment was crawled from Twitter between 1 January 2021 and 31 October 2021.\n\nThe first crawling process used several keywords related to the Karonese, such as\n\n\"deleng sinabung, Sinabung mountain\", \"mejuah-juah, greeting welcome\", \"Gundaling\",\n\nand so on. However, due to the insufficient number of tweets obtained using such\n\nkeywords, a second crawling process was done based on several hashtags, such as# #antonyginting, and #lyodra.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
994beebe4c00b0d503623c00edf963ede3ac24d7
|
# Court of Cassation
[The major decisions of judicial jurisprudence](https://www.data.gouv.fr/en/datasets/cass/); the decisions of the Cour de cassation:
- published in the Bulletin des chambres civiles since 1960
- published in the Bulletin de la chambre criminelle since 1963.
Full text of rulings, supplemented by columns and summaries written by Court of Cassation judges.
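A minimal loading sketch, using the repo ID, split, and feature names given in this card's metadata (treat it as illustrative rather than an official loader):
```python
# Sketch: load the Cour de cassation decisions (train split, "id"/"text" features per the card metadata).
from datasets import load_dataset

cass = load_dataset("Nicolas-BZRD/CASS_opendata", split="train")
print(cass[0]["id"], cass[0]["text"][:200])  # decision identifier and the start of its full text
```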
|
Nicolas-BZRD/CASS_opendata
|
[
"size_categories:100K<n<1M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] |
2023-09-26T10:27:18+00:00
|
{"language": ["fr"], "license": "odc-by", "size_categories": ["100K<n<1M"], "pretty_name": "Cour de cassation", "tags": ["legal"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 821334132, "num_examples": 142278}], "download_size": 357899718, "dataset_size": 821334132}}
|
2023-09-28T09:30:20+00:00
|
[] |
[
"fr"
] |
TAGS
#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us
|
# Court of Cassation
The major decisions of judicial jurisprudence; the decisions of the Cour de cassation:
- published in the Bulletin des chambres civiles since 1960
- published in the Bulletin de la chambre criminelle since 1963.
Full text of rulings, supplemented by columns and summaries written by Court of Cassation judges.
|
[
"# Court of Cassation\n\nThe major decisions of judicial jurisprudence; the decisions of the Cour de cassation :\n - published in the Bulletin des chambres civiles since 1960\n- published in the Bulletin de la chambre criminelle since 1963.\n\nFull text of rulings, supplemented by columns and summaries written by Court of Cassation judges."
] |
[
"TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n",
"# Court of Cassation\n\nThe major decisions of judicial jurisprudence; the decisions of the Cour de cassation :\n - published in the Bulletin des chambres civiles since 1960\n- published in the Bulletin de la chambre criminelle since 1963.\n\nFull text of rulings, supplemented by columns and summaries written by Court of Cassation judges."
] |
[
34,
76
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #language-French #license-odc-by #legal #region-us \n# Court of Cassation\n\nThe major decisions of judicial jurisprudence; the decisions of the Cour de cassation :\n - published in the Bulletin des chambres civiles since 1960\n- published in the Bulletin de la chambre criminelle since 1963.\n\nFull text of rulings, supplemented by columns and summaries written by Court of Cassation judges."
] |
be2687ef8d4dbbdbd511566ee588a261aac7b22a
|
# smsa
SmSA (Purwarianti and Crisdayanti, 2019) is a sentence-level sentiment analysis dataset consisting of comments and reviews
in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists
to construct this dataset. There are three possible sentiment labels in the SmSA dataset: positive, negative, and neutral.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
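A minimal loading sketch (repo ID from this card's catalog entry; split and column names are assumptions):
```python
# Sketch: load SmSA and peek at one review with its sentiment label.
from datasets import load_dataset

smsa = load_dataset("SEACrowd/smsa")
for split_name, split in smsa.items():
    print(split_name, split[0])
    break
```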
## Citation
```
@INPROCEEDINGS{8904199,
author={Purwarianti, Ayu and Crisdayanti, Ida Ayu Putu Ari},
booktitle={2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
year={2019},
pages={1-5},
doi={10.1109/ICAICTA.2019.8904199}
}
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Wilie, Bryan and Vincentio, Karissa and Winata, Genta Indra and Cahyawijaya, Samuel and Li, Xiaohong and Lim, Zhi Yuan and Soleman, Sidik and Mahendra, Rahmad and Fung, Pascale and Bahar, Syafri and others},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
pages={843--857},
year={2020}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/smsa
|
[
"language:ind",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:31:18+00:00
|
{"language": ["ind"], "tags": ["sentiment-analysis"]}
|
2023-09-26T11:33:48+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #sentiment-analysis #region-us
|
# smsa
SmSA (Purwarianti and Crisdayanti, 2019) is a sentence-level sentiment analysis dataset consisting of comments and reviews
in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists
to construct this dataset. There are three possible sentiment labels in the SmSA dataset: positive, negative, and neutral.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# smsa\n\nSmSA is a sentence-level sentiment analysis dataset (Purwarianti and Crisdayanti, 2019) is a collection of comments and reviews\n\nin Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists\n\nto construct this dataset. There are three possible sentiments on the SmSA dataset: positive, negative, and neutral",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #sentiment-analysis #region-us \n",
"# smsa\n\nSmSA is a sentence-level sentiment analysis dataset (Purwarianti and Crisdayanti, 2019) is a collection of comments and reviews\n\nin Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists\n\nto construct this dataset. There are three possible sentiments on the SmSA dataset: positive, negative, and neutral",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
86,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #sentiment-analysis #region-us \n# smsa\n\nSmSA is a sentence-level sentiment analysis dataset (Purwarianti and Crisdayanti, 2019) is a collection of comments and reviews\n\nin Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists\n\nto construct this dataset. There are three possible sentiments on the SmSA dataset: positive, negative, and neutral## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
f1ef20cb03a78ecd15bd1a126ad14f14dab291a3
|
# indonlu_nergrit
This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.
The dataset consists of three kinds of named entity tags: PERSON (name of person), PLACE (name of location), and
ORGANIZATION (name of organization).
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
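An illustrative loading sketch (repo ID from this card's catalog entry; the token and IOB tag column names are assumptions):
```python
# Sketch: load the IndoNLU NERGrit split; labels are IOB spans per this card.
from datasets import load_dataset

indonlu_nergrit = load_dataset("SEACrowd/indonlu_nergrit")
print(indonlu_nergrit)
```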
## Citation
```
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
@online{nergrit2019,
title={NERGrit Corpus},
author={NERGrit Developers},
year={2019},
url={https://github.com/grit-id/nergrit-corpus}
}
```
## License
MIT
## Homepage
[https://github.com/grit-id/nergrit-corpus](https://github.com/grit-id/nergrit-corpus)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indonlu_nergrit
|
[
"language:ind",
"license:mit",
"named-entity-recognition",
"region:us"
] |
2023-09-26T10:31:21+00:00
|
{"language": ["ind"], "license": "mit", "tags": ["named-entity-recognition"]}
|
2023-09-26T11:35:26+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-mit #named-entity-recognition #region-us
|
# indonlu_nergrit
This NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.
The dataset consists of three kinds of named entity tags: PERSON (name of person), PLACE (name of location), and
ORGANIZATION (name of organization).
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
MIT
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indonlu_nergrit\n\nThis NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.\n\nThe dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and\n\nORGANIZATION (name of organization).",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nMIT",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-mit #named-entity-recognition #region-us \n",
"# indonlu_nergrit\n\nThis NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.\n\nThe dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and\n\nORGANIZATION (name of organization).",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nMIT",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
26,
77,
35,
3,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-mit #named-entity-recognition #region-us \n# indonlu_nergrit\n\nThis NER dataset is taken from the Grit-ID repository, and the labels are spans in IOB chunking representation.\n\nThe dataset consists of three kinds of named entity tags, PERSON (name of person), PLACE (name of location), and\n\nORGANIZATION (name of organization).## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nMIT## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
a6be0d04daf89ffc09cc3af8e08db26ad97a212e
|
# id_wiki_parallel
This dataset is designed for machine translation tasks, specifically jav->ind, min->ind, sun->ind, and vice versa. The data are taken
from sentences in Wikipedia.
(from the publication abstract)
Parallel corpora are necessary for multilingual research, especially in information retrieval (IR) and natural language processing (NLP). However, such corpora are hard to find, specifically for low-resource languages like ethnic
languages. Parallel corpora of ethnic languages were usually collected manually. On the other hand, Wikipedia, as a free online encyclopedia, is supporting more and more languages each year, including ethnic languages in Indonesia. It has
become one of the largest multilingual sites on the World Wide Web that provides freely distributed articles. In this paper, we explore a few sentence alignment methods which have been used before for another domain. We want to check whether
Wikipedia can be used as one of the resources for collecting parallel corpora of Indonesian and Javanese, an ethnic language in Indonesia. We used two approaches of sentence alignment by treating Wikipedia as both parallel corpora and
comparable corpora. In the parallel corpora case, we used sentence-length-based and word-correspondence methods. Meanwhile,
we used the characteristics of hypertext links from Wikipedia in the comparable corpora case. After the experiments, we can
see that Wikipedia is useful enough for our purpose because both approaches gave positive results.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
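A minimal loading sketch (repo ID from this card's catalog entry; the language-pair column layout is an assumption):
```python
# Sketch: load the Wikipedia parallel sentences (jav/min/sun paired with ind) and inspect the schema.
from datasets import load_dataset

wiki_parallel = load_dataset("SEACrowd/id_wiki_parallel")
print(wiki_parallel)
```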
## Citation
```
@INPROCEEDINGS{
7065828,
author={Trisedya, Bayu Distiawan and Inastra, Dyah},
booktitle={2014 International Conference on Advanced Computer Science and Information System},
title={Creating Indonesian-Javanese parallel corpora using wikipedia articles},
year={2014},
volume={},
number={},
pages={239-245},
doi={10.1109/ICACSIS.2014.7065828}}
```
## License
Unknown
## Homepage
[https://github.com/dindainastra/indowikiparalelcorpora](https://github.com/dindainastra/indowikiparalelcorpora)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/id_wiki_parallel
|
[
"language:ind",
"language:jav",
"language:min",
"language:sun",
"license:unknown",
"machine-translation",
"region:us"
] |
2023-09-26T10:31:21+00:00
|
{"language": ["ind", "jav", "min", "sun"], "license": "unknown", "tags": ["machine-translation"]}
|
2023-09-26T11:35:31+00:00
|
[] |
[
"ind",
"jav",
"min",
"sun"
] |
TAGS
#language-Indonesian #language-Javanese #language-Minangkabau #language-Sundanese #license-unknown #machine-translation #region-us
|
# id_wiki_parallel
This dataset is designed for machine translation tasks, specifically jav->ind, min->ind, sun->ind, and vice versa. The data are taken
from sentences in Wikipedia.
(from the publication abstract)
Parallel corpora are necessary for multilingual research, especially in information retrieval (IR) and natural language processing (NLP). However, such corpora are hard to find, specifically for low-resource languages like ethnic
languages. Parallel corpora of ethnic languages were usually collected manually. On the other hand, Wikipedia, as a free online encyclopedia, is supporting more and more languages each year, including ethnic languages in Indonesia. It has
become one of the largest multilingual sites on the World Wide Web that provides freely distributed articles. In this paper, we explore a few sentence alignment methods which have been used before for another domain. We want to check whether
Wikipedia can be used as one of the resources for collecting parallel corpora of Indonesian and Javanese, an ethnic language in Indonesia. We used two approaches of sentence alignment by treating Wikipedia as both parallel corpora and
comparable corpora. In the parallel corpora case, we used sentence-length-based and word-correspondence methods. Meanwhile,
we used the characteristics of hypertext links from Wikipedia in the comparable corpora case. After the experiments, we can
see that Wikipedia is useful enough for our purpose because both approaches gave positive results.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# id_wiki_parallel\n\nThis dataset is designed for machine translation task, specifically jav->ind, min->ind, sun->ind, and vice versa. The data are taken\n\nfrom sentences in Wikipedia.\n\n\n\n(from the publication abstract)\n\nParallel corpora are necessary for multilingual researches especially in information retrieval (IR) and natural language processing (NLP). However, such corpora are hard to find, specifically for low-resources languages like ethnic\n\nlanguages. Parallel corpora of ethnic languages were usually collected manually. On the other hand, Wikipedia as a free online encyclopedia is supporting more and more languages each year, including ethnic languages in Indonesia. It has\n\nbecome one of the largest multilingual sites in World Wide Web that provides free distributed articles. In this paper, we explore a few sentence alignment methods which have been used before for another domain. We want to check whether\n\nWikipedia can be used as one of the resources for collecting parallel corpora of Indonesian and Javanese, an ethnic language in Indonesia. We used two approaches of sentence alignment by treating Wikipedia as both parallel corpora and\n\ncomparable corpora. In parallel corpora case, we used sentence length based and word correspondence methods. Meanwhile,\n\nwe used the characteristics of hypertext links from Wikipedia in comparable corpora case. After the experiments, we can\n\nsee that Wikipedia is useful enough for our purpose because both approaches gave positive results.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-Javanese #language-Minangkabau #language-Sundanese #license-unknown #machine-translation #region-us \n",
"# id_wiki_parallel\n\nThis dataset is designed for machine translation task, specifically jav->ind, min->ind, sun->ind, and vice versa. The data are taken\n\nfrom sentences in Wikipedia.\n\n\n\n(from the publication abstract)\n\nParallel corpora are necessary for multilingual researches especially in information retrieval (IR) and natural language processing (NLP). However, such corpora are hard to find, specifically for low-resources languages like ethnic\n\nlanguages. Parallel corpora of ethnic languages were usually collected manually. On the other hand, Wikipedia as a free online encyclopedia is supporting more and more languages each year, including ethnic languages in Indonesia. It has\n\nbecome one of the largest multilingual sites in World Wide Web that provides free distributed articles. In this paper, we explore a few sentence alignment methods which have been used before for another domain. We want to check whether\n\nWikipedia can be used as one of the resources for collecting parallel corpora of Indonesian and Javanese, an ethnic language in Indonesia. We used two approaches of sentence alignment by treating Wikipedia as both parallel corpora and\n\ncomparable corpora. In parallel corpora case, we used sentence length based and word correspondence methods. Meanwhile,\n\nwe used the characteristics of hypertext links from Wikipedia in comparable corpora case. After the experiments, we can\n\nsee that Wikipedia is useful enough for our purpose because both approaches gave positive results.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
40,
313,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-Javanese #language-Minangkabau #language-Sundanese #license-unknown #machine-translation #region-us \n# id_wiki_parallel\n\nThis dataset is designed for machine translation task, specifically jav->ind, min->ind, sun->ind, and vice versa. The data are taken\n\nfrom sentences in Wikipedia.\n\n\n\n(from the publication abstract)\n\nParallel corpora are necessary for multilingual researches especially in information retrieval (IR) and natural language processing (NLP). However, such corpora are hard to find, specifically for low-resources languages like ethnic\n\nlanguages. Parallel corpora of ethnic languages were usually collected manually. On the other hand, Wikipedia as a free online encyclopedia is supporting more and more languages each year, including ethnic languages in Indonesia. It has\n\nbecome one of the largest multilingual sites in World Wide Web that provides free distributed articles. In this paper, we explore a few sentence alignment methods which have been used before for another domain. We want to check whether\n\nWikipedia can be used as one of the resources for collecting parallel corpora of Indonesian and Javanese, an ethnic language in Indonesia. We used two approaches of sentence alignment by treating Wikipedia as both parallel corpora and\n\ncomparable corpora. In parallel corpora case, we used sentence length based and word correspondence methods. Meanwhile,\n\nwe used the characteristics of hypertext links from Wikipedia in comparable corpora case. After the experiments, we can\n\nsee that Wikipedia is useful enough for our purpose because both approaches gave positive results.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
5c4326a7e59c42db9fcc7e280628b5b29afc2048
|
# Dataset of Koshigaya Natsumi
This is the dataset of Koshigaya Natsumi, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 737 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 824 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 737 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 737 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 604 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 824 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 824 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
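As an illustrative sketch (not an official download script), any of the archives listed above can be fetched with `huggingface_hub`:
```python
# Illustrative sketch: fetch one packaged variant listed in the table above.
from huggingface_hub import hf_hub_download

zip_path = hf_hub_download(
    repo_id="CyberHarem/koshigaya_natsumi_nonnonbiyori",
    filename="dataset-raw.zip",  # any archive name from the table above
    repo_type="dataset",
)
print(zip_path)  # local path of the downloaded archive
```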
|
CyberHarem/koshigaya_natsumi_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T10:39:33+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T18:37:44+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Koshigaya Natsumi
============================
This is the dataset of Koshigaya Natsumi, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
e2fddc9f7148ed3b143cf0ef98abb0995546776a
|
# id_qqp
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs,
and each question pair is annotated with a binary value indicating whether
the two questions are paraphrases of each other. This dataset is a translated
version of QQP into Indonesian.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
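For example, a minimal loading sketch (the default configuration and split names are assumptions):
```python
# Minimal sketch; requires `pip install nusacrowd datasets`.
from datasets import load_dataset

dset = load_dataset("SEACrowd/id_qqp", trust_remote_code=True)
print(dset)              # available splits and features
print(dset["train"][0])  # one annotated question pair (split name assumed)
```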
## Citation
```
@misc{quoraFirstQuora,
author = {},
title = {{F}irst {Q}uora {D}ataset {R}elease: {Q}uestion {P}airs --- quoradata.quora.com},
howpublished = {https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs},
year = 2017,
note = {Online},
}
```
## License
Apache License, Version 2.0
## Homepage
[https://github.com/louisowen6/quora_paraphrasing_id](https://github.com/louisowen6/quora_paraphrasing_id)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/id_qqp
|
[
"language:ind",
"paraphrasing",
"region:us"
] |
2023-09-26T10:41:38+00:00
|
{"language": ["ind"], "tags": ["paraphrasing"]}
|
2023-09-26T11:33:52+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #paraphrasing #region-us
|
# id_qqp
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs,
and each question pair is annotated with a binary value indicating whether
the two questions are paraphrases of each other. This dataset is a translated
version of QQP into Indonesian.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Apache License, Version 2.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# id_qqp\n\nQuora Question Pairs (QQP) dataset consists of over 400,000 question pairs, \n\nand each question pair is annotated with a binary value indicating whether \n\nthe two questions are paraphrase of each other. This dataset is translated \n\nversion of QQP to Indonesian Language.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nApache License, Version 2.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #paraphrasing #region-us \n",
"# id_qqp\n\nQuora Question Pairs (QQP) dataset consists of over 400,000 question pairs, \n\nand each question pair is annotated with a binary value indicating whether \n\nthe two questions are paraphrase of each other. This dataset is translated \n\nversion of QQP to Indonesian Language.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nApache License, Version 2.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
15,
71,
35,
8,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #paraphrasing #region-us \n# id_qqp\n\nQuora Question Pairs (QQP) dataset consists of over 400,000 question pairs, \n\nand each question pair is annotated with a binary value indicating whether \n\nthe two questions are paraphrase of each other. This dataset is translated \n\nversion of QQP to Indonesian Language.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nApache License, Version 2.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
9d1cfc186d94314090e13b944033880b0b7c5b2f
|
# nerp
The NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels
- PER (name of person)
- LOC (name of location)
- IND (name of product or brand)
- EVT (name of the event)
- FNB (name of food and beverage).
NERP makes use of the IOB chunking format, just like the TermA dataset.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
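A minimal loading sketch (split and column names are assumptions based on typical NER layouts):
```python
# Minimal sketch; tags are expected to use IOB with PER/LOC/IND/EVT/FNB labels.
from datasets import load_dataset

dset = load_dataset("SEACrowd/nerp", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # tokens with their IOB tags (field names assumed)
```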
## Citation
```
@inproceedings{hoesen2018investigating,
title={Investigating bi-lstm and crf with pos tag embedding for indonesian named entity tagger},
author={Hoesen, Devin and Purwarianti, Ayu},
booktitle={2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/nerp
|
[
"language:ind",
"named-entity-recognition",
"region:us"
] |
2023-09-26T10:41:47+00:00
|
{"language": ["ind"], "tags": ["named-entity-recognition"]}
|
2023-09-26T11:34:00+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #named-entity-recognition #region-us
|
# nerp
The NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels
- PER (name of person)
- LOC (name of location)
- IND (name of product or brand)
- EVT (name of the event)
- FNB (name of food and beverage).
NERP makes use of the IOB chunking format, just like the TermA dataset.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# nerp\n\nThe NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels\n\n- PER (name of person)\n\n- LOC (name of location)\n\n- IND (name of product or brand)\n\n- EVT (name of the event)\n\n- FNB (name of food and beverage).\n\nNERP makes use of the IOB chunking format, just like the TermA dataset.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #named-entity-recognition #region-us \n",
"# nerp\n\nThe NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels\n\n- PER (name of person)\n\n- LOC (name of location)\n\n- IND (name of product or brand)\n\n- EVT (name of the event)\n\n- FNB (name of food and beverage).\n\nNERP makes use of the IOB chunking format, just like the TermA dataset.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
21,
99,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #named-entity-recognition #region-us \n# nerp\n\nThe NERP dataset (Hoesen and Purwarianti, 2018) contains texts collected from several Indonesian news websites with five labels\n\n- PER (name of person)\n\n- LOC (name of location)\n\n- IND (name of product or brand)\n\n- EVT (name of the event)\n\n- FNB (name of food and beverage).\n\nNERP makes use of the IOB chunking format, just like the TermA dataset.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Common Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
3746b1573794c87c78d42e37d5a40469036d7dae
|
# parallel_su_id
This data contains 3616 lines of Sundanese sentences taken from the online Sundanese language magazine Mangle, West Java Dakwah Council, and Balebat, and translated into Indonesian by several students of the Sundanese language study program at UPI Bandung.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
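A minimal loading sketch for the Sundanese-Indonesian sentence pairs (configuration and split names are assumptions):
```python
# Minimal sketch; requires `pip install nusacrowd datasets`.
from datasets import load_dataset

dset = load_dataset("SEACrowd/parallel_su_id", trust_remote_code=True)
print(dset["train"][0])  # one Sundanese-Indonesian sentence pair (split name assumed)
```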
## Citation
```
@INPROCEEDINGS{7437678,
author={Suryani, Arie Ardiyanti and Widyantoro, Dwi Hendratmo and Purwarianti, Ayu and Sudaryat, Yayat},
booktitle={2015 International Conference on Information Technology Systems and Innovation (ICITSI)},
title={Experiment on a phrase-based statistical machine translation using PoS Tag information for Sundanese into Indonesian},
year={2015},
volume={},
number={},
pages={1-6},
doi={10.1109/ICITSI.2015.7437678}}
```
## License
Creative Commons CC0 - No Rights Reserved
## Homepage
[https://dataverse.telkomuniversity.ac.id/dataset.xhtml?persistentId=doi:10.34820/FK2/HDYWXW](https://dataverse.telkomuniversity.ac.id/dataset.xhtml?persistentId=doi:10.34820/FK2/HDYWXW)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/parallel_su_id
|
[
"language:ind",
"language:sun",
"machine-translation",
"region:us"
] |
2023-09-26T10:41:55+00:00
|
{"language": ["ind", "sun"], "tags": ["machine-translation"]}
|
2023-09-26T11:34:07+00:00
|
[] |
[
"ind",
"sun"
] |
TAGS
#language-Indonesian #language-Sundanese #machine-translation #region-us
|
# parallel_su_id
This data contains 3616 lines of Sundanese sentences taken from the online Sundanese language magazine Mangle, West Java Dakwah Council, and Balebat, and translated into Indonesian by several students of the Sundanese language study program at UPI Bandung.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Commons CC0 - No Rights Reserved
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# parallel_su_id\n\nThis data contains 3616 lines of Sundanese sentences taken from the online Sundanese language magazine Mangle, West Java Dakwah Council, and Balebat, and translated into Indonesian by several students of the Sundanese language study program UPI Bandung.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons CC0 - No Rights Reserved",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #language-Sundanese #machine-translation #region-us \n",
"# parallel_su_id\n\nThis data contains 3616 lines of Sundanese sentences taken from the online Sundanese language magazine Mangle, West Java Dakwah Council, and Balebat, and translated into Indonesian by several students of the Sundanese language study program UPI Bandung.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Commons CC0 - No Rights Reserved",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
22,
60,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #language-Sundanese #machine-translation #region-us \n# parallel_su_id\n\nThis data contains 3616 lines of Sundanese sentences taken from the online Sundanese language magazine Mangle, West Java Dakwah Council, and Balebat, and translated into Indonesian by several students of the Sundanese language study program UPI Bandung.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Commons CC0 - No Rights Reserved## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
fb2bfac663cb923d70a3cb74d759ef15ca4ca71d
|
# squad_id
This dataset contains the Indonesian SQuAD v2.0 dataset (Google-translated).
The dataset can be used for automatic question generation (AQG)
or machine reading comprehension (MRC) tasks.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
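A minimal loading sketch (the configuration and split names are assumptions):
```python
# Minimal sketch for the Google-translated SQuAD v2.0 data.
from datasets import load_dataset

dset = load_dataset("SEACrowd/squad_id", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # context, question, and answer fields as exposed by the loader
```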
## Citation
```
@inproceedings{muis2020sequence,
title={Sequence-to-sequence learning for indonesian automatic question generator},
author={Muis, Ferdiant Joshua and Purwarianti, Ayu},
booktitle={2020 7th International Conference on Advance Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--6},
year={2020},
organization={IEEE}
}
```
## License
TBD
## Homepage
[https://github.com/FerdiantJoshua/question-generator](https://github.com/FerdiantJoshua/question-generator)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/squad_id
|
[
"language:ind",
"question-answering",
"region:us"
] |
2023-09-26T10:42:04+00:00
|
{"language": ["ind"], "tags": ["question-answering"]}
|
2023-09-26T11:34:15+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #question-answering #region-us
|
# squad_id
This dataset contains the Indonesian SQuAD v2.0 dataset (Google-translated).
The dataset can be used for automatic question generation (AQG)
or machine reading comprehension (MRC) tasks.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
TBD
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# squad_id\n\nThis dataset contains Indonesian SQuAD v2.0 dataset (Google-translated).\n\n The dataset can be used for automatic question generation (AQG),\n\n or machine reading comphrehension(MRC) task.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nTBD",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #question-answering #region-us \n",
"# squad_id\n\nThis dataset contains Indonesian SQuAD v2.0 dataset (Google-translated).\n\n The dataset can be used for automatic question generation (AQG),\n\n or machine reading comphrehension(MRC) task.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nTBD",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
53,
35,
4,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #question-answering #region-us \n# squad_id\n\nThis dataset contains Indonesian SQuAD v2.0 dataset (Google-translated).\n\n The dataset can be used for automatic question generation (AQG),\n\n or machine reading comphrehension(MRC) task.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nTBD## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
511ec34eb24101a27b34964e19c8a64ee7748d1c
|
# jv_id_tts
This data set contains high-quality transcribed audio data for Javanese.
The data set consists of wave files, and a TSV file.
The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
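A minimal loading sketch (the audio and transcription column names are assumptions):
```python
# Minimal sketch; each row should pair a wave file with its transcription and speaker ID.
from datasets import load_dataset

dset = load_dataset("SEACrowd/jv_id_tts", trust_remote_code=True)
sample = dset["train"][0]  # split name assumed
print(sample)
```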
## Citation
```
@inproceedings{sodimana18_sltu,
author={Keshan Sodimana and Pasindu {De Silva} and Supheakmungkol Sarin and Oddur Kjartansson and Martin Jansche and Knot Pipatsrisawat and Linne Ha},
title={{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
year=2018,
booktitle={Proc. 6th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2018)},
pages={66--70},
doi={10.21437/SLTU.2018-14}
}
```
## License
See https://www.openslr.org/resources/41/LICENSE file for license information. Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## Homepage
[http://openslr.org/41/](http://openslr.org/41/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/jv_id_tts
|
[
"language:jav",
"text-to-speech",
"region:us"
] |
2023-09-26T10:42:16+00:00
|
{"language": ["jav"], "tags": ["text-to-speech"]}
|
2023-09-26T11:34:26+00:00
|
[] |
[
"jav"
] |
TAGS
#language-Javanese #text-to-speech #region-us
|
# jv_id_tts
This data set contains high-quality transcribed audio data for Javanese.
The data set consists of wave files, and a TSV file.
The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
See URL file for license information. Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# jv_id_tts\n\nThis data set contains high-quality transcribed audio data for Javanese.\n\nThe data set consists of wave files, and a TSV file.\n\nThe file line_index.tsv contains a filename and the transcription of audio in the file.\n\nEach filename is prepended with a speaker identification number.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nThis dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nSee URL file for license information. Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Javanese #text-to-speech #region-us \n",
"# jv_id_tts\n\nThis data set contains high-quality transcribed audio data for Javanese.\n\nThe data set consists of wave files, and a TSV file.\n\nThe file line_index.tsv contains a filename and the transcription of audio in the file.\n\nEach filename is prepended with a speaker identification number.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nThis dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nSee URL file for license information. Attribution-ShareAlike 4.0 (CC BY-SA 4.0).",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
18,
115,
35,
20,
3,
16
] |
[
"passage: TAGS\n#language-Javanese #text-to-speech #region-us \n# jv_id_tts\n\nThis data set contains high-quality transcribed audio data for Javanese.\n\nThe data set consists of wave files, and a TSV file.\n\nThe file line_index.tsv contains a filename and the transcription of audio in the file.\n\nEach filename is prepended with a speaker identification number.\n\nThe data set has been manually quality checked, but there might still be errors.\n\nThis dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nSee URL file for license information. Attribution-ShareAlike 4.0 (CC BY-SA 4.0).## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
14ae440e20ef2b0eea6d3e9af9eaa040c7c2139e
|
# xpersona_id
XPersona is a multi-lingual extension of Persona-Chat.
XPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
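A minimal loading sketch (configuration, split, and field names are assumptions):
```python
# Minimal sketch for the Indonesian XPersona conversations.
from datasets import load_dataset

dset = load_dataset("SEACrowd/xpersona_id", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # one persona-grounded dialogue (field names may differ)
```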
## Citation
```
@article{lin2020xpersona,
title={XPersona: Evaluating multilingual personalized chatbot},
author={Lin, Zhaojiang and Liu, Zihan and Winata, Genta Indra and Cahyawijaya, Samuel and Madotto, Andrea and Bang, Yejin and Ishii, Etsuko and Fung, Pascale},
journal={arXiv preprint arXiv:2003.07568},
year={2020}
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898"
}
```
## License
CC-BY-SA 4.0
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/xpersona_id
|
[
"language:ind",
"dialogue-system",
"region:us"
] |
2023-09-26T10:42:21+00:00
|
{"language": ["ind"], "tags": ["dialogue-system"]}
|
2023-09-26T11:34:30+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #dialogue-system #region-us
|
# xpersona_id
XPersona is a multi-lingual extension of Persona-Chat.
XPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC-BY-SA 4.0
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# xpersona_id\n\nXPersona is a multi-lingual extension of Persona-Chat. \n\nXPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-SA 4.0",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #dialogue-system #region-us \n",
"# xpersona_id\n\nXPersona is a multi-lingual extension of Persona-Chat. \n\nXPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC-BY-SA 4.0",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
50,
35,
8,
16
] |
[
"passage: TAGS\n#language-Indonesian #dialogue-system #region-us \n# xpersona_id\n\nXPersona is a multi-lingual extension of Persona-Chat. \n\nXPersona dataset includes persona conversations in six different languages other than English for building and evaluating multilingual personalized agents.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC-BY-SA 4.0### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
7ed83c46821f31eb7b49395d798a8d0822584eb0
|
# ud_id_csui
UD Indonesian-CSUI is a conversion from an Indonesian constituency treebank in the Penn Treebank format named Kethu that was also a conversion from a constituency treebank built by Dinakaramani et al. (2015).
This treebank is named after the place where treebanks were built: Faculty of Computer Science (CS), Universitas Indonesia (UI).
About this treebank:
- Genre is news in formal Indonesian (the majority is economic news)
- 1030 sentences (28K words) divided into testing and training sets of around 10K and 18K words, respectively.
- Average of 27.4 words per-sentence.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
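A minimal loading sketch (the loader is assumed to expose CoNLL-U style columns; names may differ):
```python
# Minimal sketch; expect train/test splits per the card above.
from datasets import load_dataset

dset = load_dataset("SEACrowd/ud_id_csui", trust_remote_code=True)
print(dset)
print(dset["test"][0])  # tokens with POS tags and dependency heads (field names assumed)
```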
## Citation
```
@article {10.3844/jcssp.2020.1585.1597,
author = {Alfina, Ika and Budi, Indra and Suhartanto, Heru},
title = {Tree Rotations for Dependency Trees: Converting the Head-Directionality of Noun Phrases},
article_type = {journal},
volume = {16},
number = {11},
year = {2020},
month = {Nov},
pages = {1585-1597},
doi = {10.3844/jcssp.2020.1585.1597},
url = {https://thescipub.com/abstract/jcssp.2020.1585.1597},
journal = {Journal of Computer Science},
publisher = {Science Publications}
}
```
## License
CC BY-SA 4.0
## Homepage
[https://github.com/UniversalDependencies/UD_Indonesian-CSUI](https://github.com/UniversalDependencies/UD_Indonesian-CSUI)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/ud_id_csui
|
[
"language:ind",
"dependency-parsing",
"machine-translation",
"pos-tagging",
"region:us"
] |
2023-09-26T10:42:25+00:00
|
{"language": ["ind"], "tags": ["dependency-parsing", "machine-translation", "pos-tagging"]}
|
2023-09-26T11:34:34+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #dependency-parsing #machine-translation #pos-tagging #region-us
|
# ud_id_csui
UD Indonesian-CSUI is a conversion from an Indonesian constituency treebank in the Penn Treebank format named Kethu that was also a conversion from a constituency treebank built by Dinakaramani et al. (2015).
This treebank is named after the place where treebanks were built: Faculty of Computer Science (CS), Universitas Indonesia (UI).
About this treebank:
- Genre is news in formal Indonesian (the majority is economic news)
- 1030 sentences (28K words) divided into testing and training sets of around 10K and 18K words, respectively.
- Average of 27.4 words per-sentence.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC BY-SA 4.0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# ud_id_csui\n\nUD Indonesian-CSUI is a conversion from an Indonesian constituency treebank in the Penn Treebank format named Kethu that was also a conversion from a constituency treebank built by Dinakaramani et al. (2015).\n\nThis treebank is named after the place where treebanks were built: Faculty of Computer Science (CS), Universitas Indonesia (UI).\n\n\n\nAbout this treebank:\n\n- Genre is news in formal Indonesian (the majority is economic news)\n\n- 1030 sentences (28K words) divided into testing and training dataset of around 10K words and around 18K words respectively.\n\n- Average of 27.4 words per-sentence.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC BY-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #dependency-parsing #machine-translation #pos-tagging #region-us \n",
"# ud_id_csui\n\nUD Indonesian-CSUI is a conversion from an Indonesian constituency treebank in the Penn Treebank format named Kethu that was also a conversion from a constituency treebank built by Dinakaramani et al. (2015).\n\nThis treebank is named after the place where treebanks were built: Faculty of Computer Science (CS), Universitas Indonesia (UI).\n\n\n\nAbout this treebank:\n\n- Genre is news in formal Indonesian (the majority is economic news)\n\n- 1030 sentences (28K words) divided into testing and training dataset of around 10K words and around 18K words respectively.\n\n- Average of 27.4 words per-sentence.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC BY-SA 4.0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
28,
150,
35,
7,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #dependency-parsing #machine-translation #pos-tagging #region-us \n# ud_id_csui\n\nUD Indonesian-CSUI is a conversion from an Indonesian constituency treebank in the Penn Treebank format named Kethu that was also a conversion from a constituency treebank built by Dinakaramani et al. (2015).\n\nThis treebank is named after the place where treebanks were built: Faculty of Computer Science (CS), Universitas Indonesia (UI).\n\n\n\nAbout this treebank:\n\n- Genre is news in formal Indonesian (the majority is economic news)\n\n- 1030 sentences (28K words) divided into testing and training dataset of around 10K words and around 18K words respectively.\n\n- Average of 27.4 words per-sentence.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC BY-SA 4.0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
fc7477b2a3c270478ac7367de2846cfaa784d5db
|
# IndQNER
IndQNER is a NER dataset created by manually annotating the Indonesian translation of Quran text.
The dataset contains 18 named entity categories as follows:
"Allah": Allah (including synonyms of Allah such as Yang maha mengetahui lagi mahabijaksana)
"Throne": Throne of Allah (such as 'Arasy)
"Artifact": Artifact (such as Ka'bah, Baitullah)
"AstronomicalBody": Astronomical body (such as bumi, matahari)
"Event": Event (such as hari akhir, kiamat)
"HolyBook": Holy book (such as AlQur'an)
"Language": Language (such as bahasa Arab
"Angel": Angel (such as Jibril, Mikail)
"Person": Person (such as Bani Israil, Fir'aun)
"Messenger": Messenger (such as Isa, Muhammad, Musa)
"Prophet": Prophet (such as Adam, Sulaiman)
"AfterlifeLocation": Afterlife location (such as Jahanam, Jahim, Padang Mahsyar)
"GeographicalLocation": Geographical location (such as Sinai, negeru Babilonia)
"Color": Color (such as kuning tua)
"Religion": Religion (such as Islam, Yahudi, Nasrani)
"Food": Food (such as manna, salwa)
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
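A minimal loading sketch (split and column names are assumptions):
```python
# Minimal sketch; tags are expected to come from the 18 categories listed above.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indqner", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # tokens with their entity tags (field names assumed)
```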
## Citation
```
@misc{gusmita2022indqner,
    author = {Ria Hari Gusmita and Asep Fajar Firmansyah and Khodijah Khuliyah},
title = {{IndQNER: a NER Benchmark Dataset on Indonesian Translation of Quran}},
url = {https://github.com/dice-group/IndQNER},
year = {2022}
}
```
## License
Unknown
## Homepage
[https://github.com/dice-group/IndQNER](https://github.com/dice-group/IndQNER)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indqner
|
[
"language:ind",
"license:unknown",
"named-entity-recognition",
"region:us"
] |
2023-09-26T10:42:30+00:00
|
{"language": ["ind"], "license": "unknown", "tags": ["named-entity-recognition"]}
|
2023-09-26T11:34:43+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-unknown #named-entity-recognition #region-us
|
# IndQNER
IndQNER is a NER dataset created by manually annotating the Indonesian translation of Quran text.
The dataset contains 18 named entity categories as follows:
"Allah": Allah (including synonyms of Allah such as Yang maha mengetahui lagi mahabijaksana)
"Throne": Throne of Allah (such as 'Arasy)
"Artifact": Artifact (such as Ka'bah, Baitullah)
"AstronomicalBody": Astronomical body (such as bumi, matahari)
"Event": Event (such as hari akhir, kiamat)
"HolyBook": Holy book (such as AlQur'an)
"Language": Language (such as bahasa Arab
"Angel": Angel (such as Jibril, Mikail)
"Person": Person (such as Bani Israil, Fir'aun)
"Messenger": Messenger (such as Isa, Muhammad, Musa)
"Prophet": Prophet (such as Adam, Sulaiman)
"AfterlifeLocation": Afterlife location (such as Jahanam, Jahim, Padang Mahsyar)
"GeographicalLocation": Geographical location (such as Sinai, negeru Babilonia)
"Color": Color (such as kuning tua)
"Religion": Religion (such as Islam, Yahudi, Nasrani)
"Food": Food (such as manna, salwa)
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# IndQNER\n\nIndQNER is a NER dataset created by manually annotating the Indonesian translation of Quran text.\n\nThe dataset contains 18 named entity categories as follow:\n\n \"Allah\": Allah (including synonim of Allah such as Yang maha mengetahui lagi mahabijaksana)\n\n \"Throne\": Throne of Allah (such as 'Arasy)\n\n \"Artifact\": Artifact (such as Ka'bah, Baitullah)\n\n \"AstronomicalBody\": Astronomical body (such as bumi, matahari)\n\n \"Event\": Event (such as hari akhir, kiamat)\n\n \"HolyBook\": Holy book (such as AlQur'an)\n\n \"Language\": Language (such as bahasa Arab\n\n \"Angel\": Angel (such as Jibril, Mikail)\n\n \"Person\": Person (such as Bani Israil, Fir'aun)\n\n \"Messenger\": Messenger (such as Isa, Muhammad, Musa)\n\n \"Prophet\": Prophet (such as Adam, Sulaiman)\n\n \"AfterlifeLocation\": Afterlife location (such as Jahanam, Jahim, Padang Mahsyar)\n\n \"GeographicalLocation\": Geographical location (such as Sinai, negeru Babilonia)\n\n \"Color\": Color (such as kuning tua)\n\n \"Religion\": Religion (such as Islam, Yahudi, Nasrani)\n\n \"Food\": Food (such as manna, salwa)",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-unknown #named-entity-recognition #region-us \n",
"# IndQNER\n\nIndQNER is a NER dataset created by manually annotating the Indonesian translation of Quran text.\n\nThe dataset contains 18 named entity categories as follow:\n\n \"Allah\": Allah (including synonim of Allah such as Yang maha mengetahui lagi mahabijaksana)\n\n \"Throne\": Throne of Allah (such as 'Arasy)\n\n \"Artifact\": Artifact (such as Ka'bah, Baitullah)\n\n \"AstronomicalBody\": Astronomical body (such as bumi, matahari)\n\n \"Event\": Event (such as hari akhir, kiamat)\n\n \"HolyBook\": Holy book (such as AlQur'an)\n\n \"Language\": Language (such as bahasa Arab\n\n \"Angel\": Angel (such as Jibril, Mikail)\n\n \"Person\": Person (such as Bani Israil, Fir'aun)\n\n \"Messenger\": Messenger (such as Isa, Muhammad, Musa)\n\n \"Prophet\": Prophet (such as Adam, Sulaiman)\n\n \"AfterlifeLocation\": Afterlife location (such as Jahanam, Jahim, Padang Mahsyar)\n\n \"GeographicalLocation\": Geographical location (such as Sinai, negeru Babilonia)\n\n \"Color\": Color (such as kuning tua)\n\n \"Religion\": Religion (such as Islam, Yahudi, Nasrani)\n\n \"Food\": Food (such as manna, salwa)",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
28,
328,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-unknown #named-entity-recognition #region-us \n# IndQNER\n\nIndQNER is a NER dataset created by manually annotating the Indonesian translation of Quran text.\n\nThe dataset contains 18 named entity categories as follow:\n\n \"Allah\": Allah (including synonim of Allah such as Yang maha mengetahui lagi mahabijaksana)\n\n \"Throne\": Throne of Allah (such as 'Arasy)\n\n \"Artifact\": Artifact (such as Ka'bah, Baitullah)\n\n \"AstronomicalBody\": Astronomical body (such as bumi, matahari)\n\n \"Event\": Event (such as hari akhir, kiamat)\n\n \"HolyBook\": Holy book (such as AlQur'an)\n\n \"Language\": Language (such as bahasa Arab\n\n \"Angel\": Angel (such as Jibril, Mikail)\n\n \"Person\": Person (such as Bani Israil, Fir'aun)\n\n \"Messenger\": Messenger (such as Isa, Muhammad, Musa)\n\n \"Prophet\": Prophet (such as Adam, Sulaiman)\n\n \"AfterlifeLocation\": Afterlife location (such as Jahanam, Jahim, Padang Mahsyar)\n\n \"GeographicalLocation\": Geographical location (such as Sinai, negeru Babilonia)\n\n \"Color\": Color (such as kuning tua)\n\n \"Religion\": Religion (such as Islam, Yahudi, Nasrani)\n\n \"Food\": Food (such as manna, salwa)## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
b871f79852b592e788f5aee353c9290a0c1a4588
|
# indotacos
Predicting the outcome or the probability of winning a legal case has always been highly attractive in legal sciences and practice.
Hardly any dataset has been developed to analyze and accelerate research on court verdict analysis.
Find out what factors affect the outcome of tax court verdicts using Natural Language Processing.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
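A minimal loading sketch (configuration, split, and field names are assumptions):
```python
# Minimal sketch for the tax court verdict data.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indotacos", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # verdict text with its outcome label (field names assumed)
```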
## Citation
```
@misc{wibisono2022indotacos,
title = {IndoTacos},
howpublished = {\url{https://www.kaggle.com/datasets/christianwbsn/indonesia-tax-court-verdict}},
note = {Accessed: 2022-09-22}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://www.kaggle.com/datasets/christianwbsn/indonesia-tax-court-verdict](https://www.kaggle.com/datasets/christianwbsn/indonesia-tax-court-verdict)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indotacos
|
[
"language:ind",
"tax-court-verdict",
"region:us"
] |
2023-09-26T10:42:33+00:00
|
{"language": ["ind"], "tags": ["tax-court-verdict"]}
|
2023-09-26T11:34:47+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #tax-court-verdict #region-us
|
# indotacos
Predicting the outcome or the probability of winning a legal case has always been highly attractive in legal sciences and practice.
Hardly any dataset has been developed to analyze and accelerate research on court verdict analysis.
Find out what factors affect the outcome of tax court verdicts using Natural Language Processing.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indotacos\n\nPredicting the outcome or the probability of winning a legal case has always been highly attractive in legal sciences and practice.\n\nHardly any dataset has been developed to analyze and accelerate the research of court verdict analysis.\n\nFind out what factor affects the outcome of tax court verdict using Natural Language Processing.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #tax-court-verdict #region-us \n",
"# indotacos\n\nPredicting the outcome or the probability of winning a legal case has always been highly attractive in legal sciences and practice.\n\nHardly any dataset has been developed to analyze and accelerate the research of court verdict analysis.\n\nFind out what factor affects the outcome of tax court verdict using Natural Language Processing.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
18,
73,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #tax-court-verdict #region-us \n# indotacos\n\nPredicting the outcome or the probability of winning a legal case has always been highly attractive in legal sciences and practice.\n\nHardly any dataset has been developed to analyze and accelerate the research of court verdict analysis.\n\nFind out what factor affects the outcome of tax court verdict using Natural Language Processing.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Common Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
438621c658ab85456f5770224618d01f35e5f564
|
# indo_law
This study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents.
We propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts.
Our results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
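A minimal loading sketch (configuration, split, and field names are assumptions):
```python
# Minimal sketch for the court decision documents.
from datasets import load_dataset

dset = load_dataset("SEACrowd/indo_law", trust_remote_code=True)
print(dset)
print(dset["train"][0])  # decision sections with the punishment label (field names assumed)
```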
## Citation
```
@article{nuranti2022predicting,
title={Predicting the Category and the Length of Punishment in Indonesian Courts Based on Previous Court Decision Documents},
author={Nuranti, Eka Qadri and Yulianti, Evi and Husin, Husna Sarirah},
journal={Computers},
volume={11},
number={6},
pages={88},
year={2022},
publisher={Multidisciplinary Digital Publishing Institute}
}
```
## License
Unknown
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/indo_law
|
[
"language:ind",
"license:unknown",
"legal-classification",
"region:us"
] |
2023-09-26T10:42:37+00:00
|
{"language": ["ind"], "license": "unknown", "tags": ["legal-classification"]}
|
2023-09-26T11:34:50+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-unknown #legal-classification #region-us
|
# indo_law
This study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents.
We propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts.
Our results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# indo_law\n\nThis study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents. \n\nWe propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts. \n\nOur results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-unknown #legal-classification #region-us \n",
"# indo_law\n\nThis study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents. \n\nWe propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts. \n\nOur results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
23,
109,
35,
5,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-unknown #legal-classification #region-us \n# indo_law\n\nThis study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents. \n\nWe propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts. \n\nOur results demonstrate that the decision document sections that strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
dbe0267d6f46d59ec717198e49bf0743f88c2864
|
# keps
The KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter
discussing banking products and services and is written in the Indonesian language. A phrase
containing important information is considered a keyphrase. Text may contain one or more
keyphrases since important phrases can be located at different positions.
- tokens: a list of string features.
- seq_label: a list of classification labels, with possible values including O, B, I.
The labels use Inside-Outside-Beginning (IOB) tagging.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
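A minimal loading sketch; the `tokens` and `seq_label` columns follow the description above, though the default configuration may expose a different schema:
```python
# Minimal sketch; column names follow the card above but are not guaranteed.
from datasets import load_dataset

dset = load_dataset("SEACrowd/keps", trust_remote_code=True)
example = dset["train"][0]   # split name assumed
print(example["tokens"])     # tweet tokens
print(example["seq_label"])  # IOB keyphrase tags (O/B/I)
```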
## Citation
```
@inproceedings{mahfuzh2019improving,
title={Improving Joint Layer RNN based Keyphrase Extraction by Using Syntactical Features},
author={Miftahul Mahfuzh, Sidik Soleman, and Ayu Purwarianti},
booktitle={Proceedings of the 2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
pages={1--6},
year={2019},
organization={IEEE}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/keps
|
[
"language:ind",
"keyword-extraction",
"region:us"
] |
2023-09-26T10:42:43+00:00
|
{"language": ["ind"], "tags": ["keyword-extraction"]}
|
2023-09-26T11:34:57+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #keyword-extraction #region-us
|
# keps
The KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter
discussing banking products and services and is written in the Indonesian language. A phrase
containing important information is considered a keyphrase. Text may contain one or more
keyphrases since important phrases can be located at different positions.
- tokens: a list of string features.
- seq_label: a list of classification labels, with possible values including O, B, I.
The labels use Inside-Outside-Beginning (IOB) tagging.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# keps\n\nThe KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter\n\ndiscussing banking products and services and is written in the Indonesian language. A phrase\n\ncontaining important information is considered a keyphrase. Text may contain one or more\n\nkeyphrases since important phrases can be located at different positions.\n\n- tokens: a list of string features.\n\n- seq_label: a list of classification labels, with possible values including O, B, I.\n\nThe labels use Inside-Outside-Beginning (IOB) tagging.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #keyword-extraction #region-us \n",
"# keps\n\nThe KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter\n\ndiscussing banking products and services and is written in the Indonesian language. A phrase\n\ncontaining important information is considered a keyphrase. Text may contain one or more\n\nkeyphrases since important phrases can be located at different positions.\n\n- tokens: a list of string features.\n\n- seq_label: a list of classification labels, with possible values including O, B, I.\n\nThe labels use Inside-Outside-Beginning (IOB) tagging.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCreative Common Attribution Share-Alike 4.0 International",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
17,
132,
35,
10,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #keyword-extraction #region-us \n# keps\n\nThe KEPS dataset (Mahfuzh, Soleman and Purwarianti, 2019) consists of text from Twitter\n\ndiscussing banking products and services and is written in the Indonesian language. A phrase\n\ncontaining important information is considered a keyphrase. Text may contain one or more\n\nkeyphrases since important phrases can be located at different positions.\n\n- tokens: a list of string features.\n\n- seq_label: a list of classification labels, with possible values including O, B, I.\n\nThe labels use Inside-Outside-Beginning (IOB) tagging.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCreative Common Attribution Share-Alike 4.0 International## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
1e12d514a34f0c580a3bb8bc43294d185fb57778
|
# librivox_indonesia
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox.
We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours.
Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using the forced alignment software we developed.
It supports multiple languages, including low-resource languages such as Acehnese, Balinese, and Minangkabau.
We can also use it for other languages without additional work to train the model.
The dataset currently consists of 8 hours in 7 languages from Indonesia.
We will add more languages or audio files as we collect them.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
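As a minimal, hedged loading sketch (the Hub repository id is taken from the citation below; the split name, default configuration, and feature names are assumptions rather than guarantees of this card):
```python
from datasets import load_dataset

# Load the LibriVox Indonesia speech corpus from the Hugging Face Hub.
# The "train" split and the feature names are assumptions; if the dataset
# requires an explicit language configuration, pass one as the second argument.
librivox = load_dataset("indonesian-nlp/librivox-indonesia", split="train")

# Each record should pair a short audio segment with its transcript.
print(librivox[0].keys())
```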
## Citation
```
@misc{
research,
title={indonesian-nlp/librivox-indonesia · datasets at hugging face},
url={https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia},
author={Indonesian-nlp}
}
```
## License
CC0
## Homepage
[https://huggingface.co/indonesian-nlp/librivox-indonesia](https://huggingface.co/indonesian-nlp/librivox-indonesia)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/librivox_indonesia
|
[
"language:min",
"language:bug",
"language:ind",
"language:ban",
"language:ace",
"language:sun",
"language:jav",
"speech-recognition",
"region:us"
] |
2023-09-26T10:42:47+00:00
|
{"language": ["min", "bug", "ind", "ban", "ace", "sun", "jav"], "tags": ["speech-recognition"]}
|
2023-09-26T11:35:01+00:00
|
[] |
[
"min",
"bug",
"ind",
"ban",
"ace",
"sun",
"jav"
] |
TAGS
#language-Minangkabau #language-Buginese #language-Indonesian #language-Balinese #language-Achinese #language-Sundanese #language-Javanese #speech-recognition #region-us
|
# librivox_indonesia
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox.
We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours.
Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using the forced alignment software we developed.
It supports multiple languages, including low-resource languages such as Acehnese, Balinese, and Minangkabau.
We can also use it for other languages without additional work to train the model.
The dataset currently consists of 8 hours in 7 languages from Indonesia.
We will add more languages or audio files as we collect them.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
CC0
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# librivox_indonesia\n\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox. \n\nWe collected only languages in Indonesia for this dataset. \n\nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. \n\nEach audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.\n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. \n\nIt supports multilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. \n\nWe can also use it for other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. \n\nWe will add more languages or audio files as we collect them.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Minangkabau #language-Buginese #language-Indonesian #language-Balinese #language-Achinese #language-Sundanese #language-Javanese #speech-recognition #region-us \n",
"# librivox_indonesia\n\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox. \n\nWe collected only languages in Indonesia for this dataset. \n\nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. \n\nEach audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.\n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. \n\nIt supports multilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. \n\nWe can also use it for other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. \n\nWe will add more languages or audio files as we collect them.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nCC0",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
53,
195,
35,
4,
3,
16
] |
[
"passage: TAGS\n#language-Minangkabau #language-Buginese #language-Indonesian #language-Balinese #language-Achinese #language-Sundanese #language-Javanese #speech-recognition #region-us \n# librivox_indonesia\n\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox. \n\nWe collected only languages in Indonesia for this dataset. \n\nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. \n\nEach audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.\n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. \n\nIt supports multilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. \n\nWe can also use it for other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. \n\nWe will add more languages or audio files as we collect them.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nCC0## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
8c6bf40625b51acaed57e7d76a4d219a1859e6af
|
# sentiment_nathasa_review
Customer Review (Natasha Skincare) is a customer emotion dataset of 19,253 samples, divided per class into 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17,736 no-emotions.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{nurlaila2018classification,
title={CLASSIFICATION OF CUSTOMERS EMOTION USING NA{\"I}VE BAYES CLASSIFIER (Case Study: Natasha Skin Care)},
author={Nurlaila, Afifah and Wiranto, Wiranto and Saptono, Ristu},
journal={ITSMART: Jurnal Teknologi dan Informasi},
volume={6},
number={2},
pages={92--97},
year={2018}
}
```
## License
Unknown
## Homepage
[https://jurnal.uns.ac.id/itsmart/article/viewFile/17328/15082](https://jurnal.uns.ac.id/itsmart/article/viewFile/17328/15082)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
|
SEACrowd/sentiment_nathasa_review
|
[
"language:ind",
"license:unknown",
"sentiment-analysis",
"region:us"
] |
2023-09-26T10:42:52+00:00
|
{"language": ["ind"], "license": "unknown", "tags": ["sentiment-analysis"]}
|
2023-09-26T11:35:04+00:00
|
[] |
[
"ind"
] |
TAGS
#language-Indonesian #license-unknown #sentiment-analysis #region-us
|
# sentiment_nathasa_review
Customer Review (Natasha Skincare) is a customer emotion dataset of 19,253 samples, divided per class into 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17,736 no-emotions.
## Dataset Usage
Run 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.
## License
Unknown
## Homepage
URL
### NusaCatalogue
For easy indexing and metadata: URL
|
[
"# sentiment_nathasa_review\n\nCustomer Review (Natasha Skincare) is a customers emotion dataset, with amounted to 19,253 samples with the division for each class is 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17736 no-emotions.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
"TAGS\n#language-Indonesian #license-unknown #sentiment-analysis #region-us \n",
"# sentiment_nathasa_review\n\nCustomer Review (Natasha Skincare) is a customers emotion dataset, with amounted to 19,253 samples with the division for each class is 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17736 no-emotions.",
"## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.",
"## License\n\nUnknown",
"## Homepage\n\nURL",
"### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
[
24,
69,
35,
5,
3,
16
] |
[
"passage: TAGS\n#language-Indonesian #license-unknown #sentiment-analysis #region-us \n# sentiment_nathasa_review\n\nCustomer Review (Natasha Skincare) is a customers emotion dataset, with amounted to 19,253 samples with the division for each class is 804 joy, 43 surprise, 154 anger, 61 fear, 287 sad, 167 disgust, and 17736 no-emotions.## Dataset Usage\n\nRun 'pip install nusacrowd' before loading the dataset through HuggingFace's 'load_dataset'.## License\n\nUnknown## Homepage\n\nURL### NusaCatalogue\n\nFor easy indexing and metadata: URL"
] |
33429673cf63a7bced1f3f9f5de4a677d3f7246f
|
# Dataset of uehara_himari/上原ひまり (BanG Dream!)
This is the dataset of uehara_himari/上原ひまり (BanG Dream!), containing 325 images and their tags.
The core tags of this character are `bangs, pink_hair, green_eyes, twintails, low_twintails, breasts, medium_hair, long_hair, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 325 | 357.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_himari_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 325 | 211.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_himari_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 681 | 434.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_himari_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 325 | 317.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_himari_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 681 | 621.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uehara_himari_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/uehara_himari_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | padlock, striped, bowtie, cleavage, ghost_costume, hood_up, looking_at_viewer, 1girl, blush, open_mouth, pink_bow, smile, solo, belt, blunt_bangs, medium_breasts, navel, upper_body |
| 1 | 8 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, pleated_skirt, solo, black_gloves, blush, fingerless_gloves, hair_ribbon, hairband, red_skirt, black_choker, black_jacket, midriff, miniskirt, necklace, black_shirt, crop_top, cross-laced_clothes, open_jacket, open_mouth, belt, black_ribbon, frills, navel, simple_background, standing, white_background, :d, bass_guitar, cleavage, collarbone, cowboy_shot, electric_guitar, heart, one_eye_closed, upper_teeth_only |
| 2 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, collarbone, simple_background, white_shirt, open_mouth, cleavage, long_sleeves, white_background, upper_body, :d, plaid_skirt, high-waist_skirt, short_twintails |
| 3 | 6 |  |  |  |  |  | blue_skirt, blush, collared_shirt, looking_at_viewer, plaid_skirt, pleated_skirt, school_uniform, white_shirt, 1girl, blue_necktie, miniskirt, open_mouth, solo, striped_necktie, blazer, long_sleeves, :d, arm_up, brown_footwear, grey_jacket, shoes, socks, standing |
| 4 | 8 |  |  |  |  |  | 1girl, blunt_bangs, blush, looking_at_viewer, solo, open_mouth, short_twintails, simple_background, cleft_of_venus, completely_nude, navel, pussy, smile, stomach, white_background, one_eye_closed, uncensored, ;d, armpits, collarbone, cowboy_shot, groin, sweat, fingernails, hair_tie, heart, puffy_nipples |
| 5 | 7 |  |  |  |  |  | plaid_dress, 1girl, black_shirt, blush, long_sleeves, smile, solo, heart_necklace, turtleneck, brown_dress, one_eye_closed, open_mouth, pinafore_dress, upper_body, ;d, looking_at_viewer, red_dress, simple_background, white_background |
| 6 | 5 |  |  |  |  |  | 1girl, chain_necklace, short_sleeves, baseball_cap, black_choker, black_headwear, blush, crop_top, looking_at_viewer, open_mouth, solo, white_shirt, :d, arm_belt, black_bra, cleavage, midriff, navel, short_twintails, upper_body, arm_strap, collarbone, earrings, jacket_around_waist, see-through_shirt, simple_background, skirt, stomach, white_background |
| 7 | 10 |  |  |  |  |  | 1boy, blush, hetero, nipples, spread_legs, 1girl, penis, solo_focus, sweat, collarbone, mosaic_censoring, open_mouth, vaginal, looking_at_viewer, bed_sheet, cum_in_pussy, indoors, navel, on_back, short_twintails, completely_nude, overflow, breasts_out, clothed_female_nude_male, clothed_sex, collared_shirt, groin, miniskirt, missionary, motion_lines, on_bed, open_shirt, pleated_skirt, pov, saliva, school_uniform, skirt_lift, stomach, trembling, underwear, white_shirt |
| 8 | 5 |  |  |  |  |  | 1girl, blush, long_sleeves, open_mouth, yukata, blue_kimono, crown_braid, floral_print, hair_flower, obi, solo, wide_sleeves, ;d, blue_flower, holding, looking_at_viewer, one_eye_closed, upper_teeth_only, :d, ^_^, alternate_hairstyle, floral_background, looking_back, standing, striped_kimono, sunflower, upper_body, v-shaped_eyebrows, vertical_stripes, yellow_flower |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | padlock | striped | bowtie | cleavage | ghost_costume | hood_up | looking_at_viewer | 1girl | blush | open_mouth | pink_bow | smile | solo | belt | blunt_bangs | medium_breasts | navel | upper_body | long_sleeves | pleated_skirt | black_gloves | fingerless_gloves | hair_ribbon | hairband | red_skirt | black_choker | black_jacket | midriff | miniskirt | necklace | black_shirt | crop_top | cross-laced_clothes | open_jacket | black_ribbon | frills | simple_background | standing | white_background | :d | bass_guitar | collarbone | cowboy_shot | electric_guitar | heart | one_eye_closed | upper_teeth_only | white_shirt | plaid_skirt | high-waist_skirt | short_twintails | blue_skirt | collared_shirt | school_uniform | blue_necktie | striped_necktie | blazer | arm_up | brown_footwear | grey_jacket | shoes | socks | cleft_of_venus | completely_nude | pussy | stomach | uncensored | ;d | armpits | groin | sweat | fingernails | hair_tie | puffy_nipples | plaid_dress | heart_necklace | turtleneck | brown_dress | pinafore_dress | red_dress | chain_necklace | short_sleeves | baseball_cap | black_headwear | arm_belt | black_bra | arm_strap | earrings | jacket_around_waist | see-through_shirt | skirt | 1boy | hetero | nipples | spread_legs | penis | solo_focus | mosaic_censoring | vaginal | bed_sheet | cum_in_pussy | indoors | on_back | overflow | breasts_out | clothed_female_nude_male | clothed_sex | missionary | motion_lines | on_bed | open_shirt | pov | saliva | skirt_lift | trembling | underwear | yukata | blue_kimono | crown_braid | floral_print | hair_flower | obi | wide_sleeves | blue_flower | holding | ^_^ | alternate_hairstyle | floral_background | looking_back | striped_kimono | sunflower | v-shaped_eyebrows | vertical_stripes | yellow_flower |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------|:----------|:---------|:-----------|:----------------|:----------|:--------------------|:--------|:--------|:-------------|:-----------|:--------|:-------|:-------|:--------------|:-----------------|:--------|:-------------|:---------------|:----------------|:---------------|:--------------------|:--------------|:-----------|:------------|:---------------|:---------------|:----------|:------------|:-----------|:--------------|:-----------|:----------------------|:--------------|:---------------|:---------|:--------------------|:-----------|:-------------------|:-----|:--------------|:-------------|:--------------|:------------------|:--------|:-----------------|:-------------------|:--------------|:--------------|:-------------------|:------------------|:-------------|:-----------------|:-----------------|:---------------|:------------------|:---------|:---------|:-----------------|:--------------|:--------|:--------|:-----------------|:------------------|:--------|:----------|:-------------|:-----|:----------|:--------|:--------|:--------------|:-----------|:----------------|:--------------|:-----------------|:-------------|:--------------|:-----------------|:------------|:-----------------|:----------------|:---------------|:-----------------|:-----------|:------------|:------------|:-----------|:----------------------|:--------------------|:--------|:-------|:---------|:----------|:--------------|:--------|:-------------|:-------------------|:----------|:------------|:---------------|:----------|:----------|:-----------|:--------------|:---------------------------|:--------------|:-------------|:---------------|:---------|:-------------|:------|:---------|:-------------|:------------|:------------|:---------|:--------------|:--------------|:---------------|:--------------|:------|:---------------|:--------------|:----------|:------|:----------------------|:--------------------|:---------------|:-----------------|:------------|:--------------------|:-------------------|:----------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | | | | X | | | X | X | X | X | | | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | | | | X | | | X | X | X | X | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | X | | X | X | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | | | | | | | X | X | X | X | | | X | | | | | | X | X | | | | | | | | | X | | | | | | | | | X | | X | | | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | | | | | | | X | X | X | X | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | X | | X | | | X | X | | X | X | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | | | | | | | X | X | X | X | | X | X | | | | | X | X | | | | | | | | | | | | X | | | | | | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | | | | X | | | X | X | X | X | | | X | | | | X | X | | | | | | | | X | | X | | | | X | | | | | X | | X | X | | X | | | | | | X | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 10 |  |  |  |  |  | | | | | | | X | X | X | X | | | | | | | X | | | X | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | X | | | X | | X | X | | | | | | | | | | X | | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | | | | | | | X | X | X | X | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/uehara_himari_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T10:47:35+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:55:49+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of uehara\_himari/上原ひまり (BanG Dream!)
=============================================
This is the dataset of uehara\_himari/上原ひまり (BanG Dream!), containing 325 images and their tags.
The core tags of this character are 'bangs, pink\_hair, green\_eyes, twintails, low\_twintails, breasts, medium\_hair, long\_hair, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
5bdde13a6fbdf68e2fdc92dfd4846c7331bbe0dc
|
# Dataset of Koshigaya Komari
This is the dataset of Koshigaya Komari, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 746 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 833 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 746 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 746 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 584 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 833 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 833 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/koshigaya_komari_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T11:18:31+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T19:18:56+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Koshigaya Komari
===========================
This is the dataset of Koshigaya Komari, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
f31519f6fa0cba6105ec63b4ca2d8bda817d67ee
|
# Dataset Card for "zx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sc3069/zx
|
[
"region:us"
] |
2023-09-26T11:27:13+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10329536, "num_examples": 350}], "download_size": 1991265, "dataset_size": 10329536}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-27T08:47:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "zx"
More Information needed
|
[
"# Dataset Card for \"zx\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"zx\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"zx\"\n\nMore Information needed"
] |
8ff3a1d869f923c1877ece2bde4734c9aefe683b
|
# Dataset of udagawa_ako/宇田川あこ (BanG Dream!)
This is the dataset of udagawa_ako/宇田川あこ (BanG Dream!), containing 237 images and their tags.
The core tags of this character are `purple_hair, bangs, red_eyes, twintails, sidelocks, long_hair, fang, v-shaped_eyebrows, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 237 | 279.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_ako_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 237 | 184.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_ako_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 533 | 365.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_ako_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 237 | 250.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_ako_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 533 | 486.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_ako_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/udagawa_ako_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | black_feathers, dress, feather_hair_ornament, hair_flower, looking_at_viewer, 1girl, solo, open_mouth, ribbon, short_sleeves, :d, black_rose, blue_rose, bowtie, choker, detached_sleeves, frills, black_bow, holding, brooch, earrings, hair_bow, arm_warmers, black_sleeves, simple_background, white_background |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, open_mouth, side_ponytail, :d, black_choker, collarbone, long_sleeves, open_jacket, upper_body, white_shirt, blunt_bangs, blush, full_body, red_footwear, shiny_hair, simple_background, sneakers, standing, white_background |
| 2 | 22 |  |  |  |  |  | school_uniform, 1girl, collared_shirt, solo, white_shirt, open_mouth, long_sleeves, :d, blazer, striped_necktie, blush, pleated_skirt, looking_at_viewer, simple_background, plaid_skirt, white_background, black_bow, grey_jacket, hair_bow, upper_body, blunt_bangs, green_necktie |
| 3 | 5 |  |  |  |  |  | 1girl, detached_sleeves, looking_at_viewer, red_necktie, solo, :d, open_mouth, upper_body, bare_shoulders, simple_background, white_background, arm_belt, detached_collar, signature, skirt, striped, upper_teeth_only |
| 4 | 5 |  |  |  |  |  | 1girl, frilled_skirt, looking_at_viewer, solo, white_background, detached_sleeves, plaid_skirt, simple_background, :d, black_shirt, chain, garter_straps, hair_bow, layered_skirt, open_mouth, red_necktie, black_bow, blunt_bangs, grin, jewelry, long_sleeves, miniskirt, standing, striped_thighhighs, v_over_eye |
| 5 | 8 |  |  |  |  |  | blush, solo_focus, navel, nipples, open_mouth, 1girl, 1boy, completely_nude, hetero, mosaic_censoring, pussy, small_breasts, collarbone, penis, spread_legs, 2girls, :d, blunt_bangs, looking_at_viewer, medium_breasts, sex, sitting, sweat, vaginal |
| 6 | 6 |  |  |  |  |  | collarbone, one_eye_closed, smile, 1girl, ;d, bare_arms, bare_shoulders, blush, looking_at_viewer, navel, open_mouth, standing, frilled_bikini, groin, cowboy_shot, multiple_girls, simple_background, small_breasts, solo_focus, stomach, striped_bikini |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | black_feathers | dress | feather_hair_ornament | hair_flower | looking_at_viewer | 1girl | solo | open_mouth | ribbon | short_sleeves | :d | black_rose | blue_rose | bowtie | choker | detached_sleeves | frills | black_bow | holding | brooch | earrings | hair_bow | arm_warmers | black_sleeves | simple_background | white_background | side_ponytail | black_choker | collarbone | long_sleeves | open_jacket | upper_body | white_shirt | blunt_bangs | blush | full_body | red_footwear | shiny_hair | sneakers | standing | school_uniform | collared_shirt | blazer | striped_necktie | pleated_skirt | plaid_skirt | grey_jacket | green_necktie | red_necktie | bare_shoulders | arm_belt | detached_collar | signature | skirt | striped | upper_teeth_only | frilled_skirt | black_shirt | chain | garter_straps | layered_skirt | grin | jewelry | miniskirt | striped_thighhighs | v_over_eye | solo_focus | navel | nipples | 1boy | completely_nude | hetero | mosaic_censoring | pussy | small_breasts | penis | spread_legs | 2girls | medium_breasts | sex | sitting | sweat | vaginal | one_eye_closed | smile | ;d | bare_arms | frilled_bikini | groin | cowboy_shot | multiple_girls | stomach | striped_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------|:--------|:------------------------|:--------------|:--------------------|:--------|:-------|:-------------|:---------|:----------------|:-----|:-------------|:------------|:---------|:---------|:-------------------|:---------|:------------|:----------|:---------|:-----------|:-----------|:--------------|:----------------|:--------------------|:-------------------|:----------------|:---------------|:-------------|:---------------|:--------------|:-------------|:--------------|:--------------|:--------|:------------|:---------------|:-------------|:-----------|:-----------|:-----------------|:-----------------|:---------|:------------------|:----------------|:--------------|:--------------|:----------------|:--------------|:-----------------|:-----------|:------------------|:------------|:--------|:----------|:-------------------|:----------------|:--------------|:--------|:----------------|:----------------|:-------|:----------|:------------|:---------------------|:-------------|:-------------|:--------|:----------|:-------|:------------------|:---------|:-------------------|:--------|:----------------|:--------|:--------------|:---------|:-----------------|:------|:----------|:--------|:----------|:-----------------|:--------|:-----|:------------|:-----------------|:--------|:--------------|:-----------------|:----------|:-----------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | | | | | X | X | X | X | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 22 |  |  |  |  |  | | | | | X | X | X | X | | | X | | | | | | | X | | | | X | | | X | X | | | | X | | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | | | | | X | X | X | X | | | X | | | | | X | | | | | | | | | X | X | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | | | | | X | X | X | X | | | X | | | | | X | | X | | | | X | | | X | X | | | | X | | | | X | | | | | | X | | | | | | X | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | | | | | X | X | | X | | | X | | | | | | | | | | | | | | | | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | | | | | X | X | | X | | | | | | | | | | | | | | | | | X | | | | X | | | | | | X | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/udagawa_ako_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T11:33:51+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T17:42:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of udagawa\_ako/宇田川あこ (BanG Dream!)
===========================================
This is the dataset of udagawa\_ako/宇田川あこ (BanG Dream!), containing 237 images and their tags.
The core tags of this character are 'purple\_hair, bangs, red\_eyes, twintails, sidelocks, long\_hair, fang, v-shaped\_eyebrows, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
d2466d5e309036740e4b084947621c1717a68f0f
|
This dataset is a subset of the [JosephusCheung/GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset/viewer/default/train?p=11736) dataset, where only German samples were selected and formatted with the following template for the chat models:
```<s>[INST] User prompt [/INST] Model answer </s>```
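As a hedged sketch of how one raw prompt/answer pair could be rendered into that template (the helper function and the sample strings are illustrative assumptions, not part of the dataset):
```python
def to_chat_format(prompt: str, answer: str) -> str:
    # Render one (prompt, answer) pair into the [INST] template shown above.
    return f"<s>[INST] {prompt} [/INST] {answer} </s>"

print(to_chat_format("Wie spät ist es?", "Es ist drei Uhr nachmittags."))
# <s>[INST] Wie spät ist es? [/INST] Es ist drei Uhr nachmittags. </s>
```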
|
tessiw/German_GuanacoDataset
|
[
"task_categories:conversational",
"language:de",
"region:us"
] |
2023-09-26T11:34:29+00:00
|
{"language": ["de"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77973314, "num_examples": 139476}], "download_size": 40038214, "dataset_size": 77973314}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T11:52:18+00:00
|
[] |
[
"de"
] |
TAGS
#task_categories-conversational #language-German #region-us
|
This dataset is a subset of the JosephusCheung/GuanacoDataset dataset, where only German samples were selected and formatted with the following template for the chat models:
|
[] |
[
"TAGS\n#task_categories-conversational #language-German #region-us \n"
] |
[
20
] |
[
"passage: TAGS\n#task_categories-conversational #language-German #region-us \n"
] |
d88d5c4c71fd5ed749bd9f2746384b4f109295e0
|
# CONSTIT
It includes the references of all decisions of the [Conseil constitutionnel](https://www.data.gouv.fr/fr/datasets/constit-les-decisions-du-conseil-constitutionnel/) since its creation in 1958, together with the full text of those decisions, organized as follows:
Contentious standards
Constitutional decisions (DC) since the beginning (1958), Question prioritaire de constitutionnalité (QPC) since the beginning (2010), Control of laws (LP) of the country (New Caledonia and French Polynesia) since the beginning (1958), Control of overseas laws (LOM) since the beginning (2007), declassification of texts (L) since the beginning (1958), fins de non recevoir (FNR) since the beginning (1958),
Electoral and related disputes
AN since 1993, Sénat since 1993, Présidentielle since 1993, Référendum since 1993, Déchéance (D) since 1985, Incompatibilités (I) since origin (1958)
Other
Appointments (members, deputy rapporteurs, general secretaries), organization, other decisions ... since 1997
Article 16 since inception (1958)
All decisions of the Constitutional Council are published in the Official Journal of the French Republic and in the annual compendium of decisions of the Constitutional Council.
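As a minimal, hedged loading sketch (the `id` and `text` string columns and the single `train` split follow this repository's metadata; everything else is an assumption):
```python
from datasets import load_dataset

# Load the Conseil constitutionnel decisions; the repository metadata lists
# "id" and "text" string columns and a single "train" split of 7,097 rows.
constit = load_dataset("Nicolas-BZRD/CONSTIT_opendata", split="train")

# Print the decision reference and the first few characters of its full text.
print(constit[0]["id"], constit[0]["text"][:200])
```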
|
Nicolas-BZRD/CONSTIT_opendata
|
[
"size_categories:1K<n<10K",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] |
2023-09-26T11:38:21+00:00
|
{"language": ["fr"], "license": "odc-by", "size_categories": ["1K<n<10K"], "pretty_name": "Conseil constitutionnel", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73352759, "num_examples": 7097}], "download_size": 27796119, "dataset_size": 73352759}, "tags": ["legal"]}
|
2023-09-28T08:49:22+00:00
|
[] |
[
"fr"
] |
TAGS
#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us
|
# CONSTIT
It includes the references of all decisions of the Conseil constitutionnel since its creation in 1958, together with the full text of those decisions, organized as follows:
Contentious standards
Constitutional decisions ( DC) since the beginning (1958), Question prioritaire de constitutionnalité ( QPC) since the beginning (2010), Control of laws ( LP) of the country (New Caledonia and French Polynesia) since the beginning (1958), Control of overseas laws (LOM) since the beginning (2007), declassification of texts (L) since the beginning (1958), fins de non recevoir (FNR) since the beginning (1958),
Electoral and related disputes
AN since 1993, Sénat since 1993, Présidentielle since 1993, Référendum since 1993, Déchéance (D) since 1985, Incompatibilités (I) since origin (1958)
Other
Appointments (members, deputy rapporteurs, general secretaries), organization, other decisions ... since 1997
Article 16 since inception (1958)
All decisions of the Constitutional Council are published in the Official Journal of the French Republic and in the annual compendium of decisions of the Constitutional Council.
|
[
"# CONSTIT\n\nIt includes the references of all the decisions of the Conseil constitutionnel since its creation in 1958 and these same decisions in full text according to the following table:\n\nContentious standards \nConstitutional decisions ( DC) since the beginning (1958), Question prioritaire de constitutionnalité ( QPC) since the beginning (2010), Control of laws ( LP) of the country (New Caledonia and French Polynesia) since the beginning (1958), Control of overseas laws (LOM) since the beginning (2007), declassification of texts (L) since the beginning (1958), fins de non recevoir (FNR) since the beginning (1958),\n\nElectoral and related disputes \nAN since 1993, Sénat since 1993, Présidentielle since 1993, Référendum since 1993, Déchéance (D) since 1985, Incompatibilités (I) since origin (1958)\n\nOther \nAppointments (members, deputy rapporteurs, general secretaries), organization, other decisions ... since 1997\nArticle 16 since inception (1958)\n\nAll decisions of the Constitutional Council are published in the Official Journal of the French Republic and in the annual compendium of decisions of the Constitutional Council."
] |
[
"TAGS\n#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us \n",
"# CONSTIT\n\nIt includes the references of all the decisions of the Conseil constitutionnel since its creation in 1958 and these same decisions in full text according to the following table:\n\nContentious standards \nConstitutional decisions ( DC) since the beginning (1958), Question prioritaire de constitutionnalité ( QPC) since the beginning (2010), Control of laws ( LP) of the country (New Caledonia and French Polynesia) since the beginning (1958), Control of overseas laws (LOM) since the beginning (2007), declassification of texts (L) since the beginning (1958), fins de non recevoir (FNR) since the beginning (1958),\n\nElectoral and related disputes \nAN since 1993, Sénat since 1993, Présidentielle since 1993, Référendum since 1993, Déchéance (D) since 1985, Incompatibilités (I) since origin (1958)\n\nOther \nAppointments (members, deputy rapporteurs, general secretaries), organization, other decisions ... since 1997\nArticle 16 since inception (1958)\n\nAll decisions of the Constitutional Council are published in the Official Journal of the French Republic and in the annual compendium of decisions of the Constitutional Council."
] |
[
34,
249
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #language-French #license-odc-by #legal #region-us \n# CONSTIT\n\nIt includes the references of all the decisions of the Conseil constitutionnel since its creation in 1958 and these same decisions in full text according to the following table:\n\nContentious standards \nConstitutional decisions ( DC) since the beginning (1958), Question prioritaire de constitutionnalité ( QPC) since the beginning (2010), Control of laws ( LP) of the country (New Caledonia and French Polynesia) since the beginning (1958), Control of overseas laws (LOM) since the beginning (2007), declassification of texts (L) since the beginning (1958), fins de non recevoir (FNR) since the beginning (1958),\n\nElectoral and related disputes \nAN since 1993, Sénat since 1993, Présidentielle since 1993, Référendum since 1993, Déchéance (D) since 1985, Incompatibilités (I) since origin (1958)\n\nOther \nAppointments (members, deputy rapporteurs, general secretaries), organization, other decisions ... since 1997\nArticle 16 since inception (1958)\n\nAll decisions of the Constitutional Council are published in the Official Journal of the French Republic and in the annual compendium of decisions of the Constitutional Council."
] |
a39f02cfea870793b0e99c44036cef44543a169a
|
# Bangumi Image Base of Encouragement Of Climb
This is the image base of bangumi Encouragement of Climb; we detected 20 characters and 3,066 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 30 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 14 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 56 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 25 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 467 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 30 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 16 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 86 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 32 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 17 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 15 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 1010 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 66 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 339 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 47 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 377 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 36 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 16 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 6 | [Download](18/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 381 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/encouragementofclimb
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-26T11:47:09+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T11:18:25+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of Encouragement Of Climb
============================================
This is the image base of bangumi Encouragement of Climb; we detected 20 characters and 3,066 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
51169c9366fb50411cbaf240fa2b6a4a57521606
|
# Dataset of wakamiya_eve (BanG Dream!)
This is the dataset of wakamiya_eve (BanG Dream!), containing 192 images and their tags.
The core tags of this character are `blue_eyes, bangs, white_hair, long_hair, braid, twin_braids, ribbon, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 192 | 239.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakamiya_eve_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 192 | 153.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakamiya_eve_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 461 | 313.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakamiya_eve_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 192 | 215.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakamiya_eve_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 461 | 424.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wakamiya_eve_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/wakamiya_eve_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, blush, open_mouth, solo, :d, hair_flower, looking_at_viewer, dress, frills, holding, jewelry |
| 1 | 8 |  |  |  |  |  | hair_ribbon, 1girl, :d, blush, looking_at_viewer, open_mouth, solo, white_ribbon, upper_body, collarbone, purple_bow, purple_choker, bare_shoulders, short_sleeves, twintails, white_dress, aqua_eyes, frilled_dress, frilled_sleeves, simple_background, white_background, white_choker |
| 2 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, :d, open_mouth, upper_body, collarbone, serafuku, short_sleeves, white_shirt, neckerchief, simple_background, white_sailor_collar |
| 3 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, white_background, blush, closed_mouth, long_sleeves, sweater, upper_body, smile |
| 4 | 6 |  |  |  |  |  | 1girl, solo, blush, double-breasted, hanasakigawa_school_uniform, long_sleeves, looking_at_viewer, neck_ribbon, red_ribbon, sailor_dress, white_background, brown_dress, pleated_dress, simple_background, smile, cowboy_shot, grey_hair, holding, open_mouth, standing, white_sailor_collar |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | solo | :d | hair_flower | looking_at_viewer | dress | frills | holding | jewelry | hair_ribbon | white_ribbon | upper_body | collarbone | purple_bow | purple_choker | bare_shoulders | short_sleeves | twintails | white_dress | aqua_eyes | frilled_dress | frilled_sleeves | simple_background | white_background | white_choker | serafuku | white_shirt | neckerchief | white_sailor_collar | closed_mouth | long_sleeves | sweater | smile | double-breasted | hanasakigawa_school_uniform | neck_ribbon | red_ribbon | sailor_dress | brown_dress | pleated_dress | cowboy_shot | grey_hair | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:-----|:--------------|:--------------------|:--------|:---------|:----------|:----------|:--------------|:---------------|:-------------|:-------------|:-------------|:----------------|:-----------------|:----------------|:------------|:--------------|:------------|:----------------|:------------------|:--------------------|:-------------------|:---------------|:-----------|:--------------|:--------------|:----------------------|:---------------|:---------------|:----------|:--------|:------------------|:------------------------------|:--------------|:-------------|:---------------|:--------------|:----------------|:--------------|:------------|:-----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | | X | | | | | | | X | X | | | | X | | | | | | X | | | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | | X | | | X | | | | | | | X | | | | | | | | | | | X | X | | | | | | X | X | X | X | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | X | X | | | X | | | X | | | | | | | | | | | | | | | X | X | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/wakamiya_eve_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T11:54:24+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T19:19:23+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of wakamiya\_eve (BanG Dream!)
======================================
This is the dataset of wakamiya\_eve (BanG Dream!), containing 192 images and their tags.
The core tags of this character are 'blue\_eyes, bangs, white\_hair, long\_hair, braid, twin\_braids, ribbon, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
97ff4f23ebd91da6aa49917cc32fcae9acb0d7c5
|
# Dataset of Kagayama Kaede
This is the dataset of Kagayama Kaede, containing 176 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 176 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 445 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 510 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 176 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 176 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 176 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 445 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 445 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 375 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 510 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 510 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/kagayama_kaede_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T11:56:57+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T19:45:39+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Kagayama Kaede
=========================
This is the dataset of Kagayama Kaede, containing 176 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
416f594a253e2f9616c6550673a13f278e4e6e67
|
# Coreference Resolution in Question Answering (CRaQAn)
250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.
## Generation Process
Given the relative complexity of our task (coreference resolution across passages for question-answering), we aimed
to avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer
pair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To
accomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found [here](https://huggingface.co/datasets/Edge-Pyxos/CRaQAn_v1/tree/main/generation_demo/prompts).
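As a rough illustration of how such a loop can be wired together, here is a minimal, hypothetical sketch. The `complete` callable, the simplified prompt strings, and the "PASS" stopping convention are assumptions for illustration only; the actual GENERATOR and REVIEWER prompts are the ones linked above.
```python
# Minimal RCI sketch (illustrative only, not the project's actual code):
# generate a draft QA pair, have a reviewer critique it, and regenerate
# until the critique passes or a round limit is reached.
from typing import Callable

def rci_qa_pair(passage: str, complete: Callable[[str], str], max_rounds: int = 3) -> str:
    """`complete` is any function that sends a prompt to an LLM and returns its reply."""
    # Initial generation from the passage (the real GENERATOR prompt is linked above).
    draft = complete(f"[GENERATOR PROMPT]\n\nPassage:\n{passage}")
    for _ in range(max_rounds):
        # Criticism step: check the draft against the review criteria.
        critique = complete(f"[REVIEWER PROMPT]\n\nPassage:\n{passage}\n\nDraft QA pair:\n{draft}")
        if "PASS" in critique:  # assumed reviewer convention, not the real one
            break
        # Improvement step: regenerate the QA pair using the critique as feedback.
        draft = complete(
            f"[GENERATOR PROMPT]\n\nPassage:\n{passage}\n\n"
            f"Previous draft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```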
## Review Process
Every question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a
high-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the
following criteria. For each QA pair:
1. The question is clear and not ambiguous with regards to the text.
2. The question is a single question, and not two separate or related questions joined by the word "and".
3. The question does not contain or assume any information outside of the required sentences.
4. The answer is correct and reasonably terse.
5. The question-answer pair must not rely on any information from outside the required sentences.
6. The question-answer pair relies on information from each of the required sentences.
7. The number of required sentences is 2 or 3.
8. The Markdown is correctly formatted.
## CRaQAn Usage
```python
from datasets import load_dataset
import pandas as pd
import json
from IPython.display import display, Markdown
# Load dataset.
craqan = load_dataset("Edge-Pyxos/CRaQAn_v1", split = "train")
df = pd.DataFrame(craqan)
# Fix issue with section_sentences that happens during Huggingface conversion.
df["section_sentences"] = df["section_sentences"].apply(json.loads)
# Visualize a sample from the dataset.
row = df.sample(1).squeeze()
sentences = ""
for idx, s in enumerate(row.section_sentences):
sentences += (" <mark> " + s["sentence"] + " </mark> ") if idx in row.sentences_required else " " + s["sentence"]
display(Markdown(f"# Article: {row.title}"))
display(Markdown(row.article_titles[row.section_index]))
display(Markdown(f"*Required Sentences: {row.sentences_required}*"))
display(Markdown(sentences))
display(Markdown(f"**Question**: " + row.question))
display(Markdown("**Answer**: " + row.answer))
display(Markdown("-------------------"))
```
## Demo Usage
We provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our [demo notebook](https://huggingface.co/datasets/Edge-Pyxos/CRaQAn_v1/blob/main/generation_demo/create_dataset.ipynb).
|
Edge-Pyxos/CRaQAn_v1
|
[
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"legal",
"region:us"
] |
2023-09-26T12:11:53+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "craqan_v1", "tags": ["legal"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "article", "dtype": "string"}, {"name": "article_titles", "sequence": "string"}, {"name": "article_sections", "sequence": "string"}, {"name": "section", "dtype": "string"}, {"name": "section_index", "dtype": "int64"}, {"name": "section_sentences", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "sentences_required", "sequence": "int64"}, {"name": "url", "dtype": "string"}, {"name": "time_downloaded", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17788270, "num_examples": 263}], "download_size": 0, "dataset_size": 17788270}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T15:11:40+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-question-answering #size_categories-n<1K #language-English #license-cc-by-4.0 #legal #region-us
|
# Coreference Resolution in Question Answering (CRaQAn)
250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.
## Generation Process
Given the relative complexity of our task (coreference resolution across passages for question-answering), we aimed
to avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer
pair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To
accomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found here.
## Review Process
Every question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a
high-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the
following criteria. For each QA pair:
1. The question is clear and not ambiguous with regards to the text.
2. The question is a single question, and not two separate or related questions joined by the word "and".
3. The question does not contain or assume any information outside of the required sentences.
4. The answer is correct and reasonably terse.
5. The question-answer pair must not rely on any information from outside the required sentences.
6. The question-answer pair relies on information from each of the required sentences.
7. The number of required sentences is 2 or 3.
8. The Markdown is correctly formatted.
## CRaQAn Usage
## Demo Usage
We provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our demo notebook.
|
[
"# Coreference Resolution in Question Answering (CRaQAn)\n\n250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.",
"## Generation Process\n\nGiven the relative complexity of our task (coreference resolution across passages for question-answering), we aimed \nto avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer\npair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To \naccomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found here.",
"## Review Process\n\nEvery question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a\nhigh-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the\nfollowing criteria. For each QA pair:\n\n1. The question is clear and not ambiguous with regards to the text.\n2. The question is a single question, and not two separate or related questions joined by the word \"and\".\n3. The question does not contain or assume any information outside of the required sentences.\n4. The answer is correct and reasonably terse.\n5. The question-answer pair must not rely on any information from outside the required sentences.\n6. The question-answer pair relies on information from each of the required sentences.\n7. The number of required sentences is 2 or 3.\n8. The Markdown is correctly formatted.",
"## CRaQAn Usage",
"## Demo Usage\n\nWe provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our demo notebook."
] |
[
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-cc-by-4.0 #legal #region-us \n",
"# Coreference Resolution in Question Answering (CRaQAn)\n\n250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.",
"## Generation Process\n\nGiven the relative complexity of our task (coreference resolution across passages for question-answering), we aimed \nto avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer\npair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To \naccomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found here.",
"## Review Process\n\nEvery question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a\nhigh-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the\nfollowing criteria. For each QA pair:\n\n1. The question is clear and not ambiguous with regards to the text.\n2. The question is a single question, and not two separate or related questions joined by the word \"and\".\n3. The question does not contain or assume any information outside of the required sentences.\n4. The answer is correct and reasonably terse.\n5. The question-answer pair must not rely on any information from outside the required sentences.\n6. The question-answer pair relies on information from each of the required sentences.\n7. The number of required sentences is 2 or 3.\n8. The Markdown is correctly formatted.",
"## CRaQAn Usage",
"## Demo Usage\n\nWe provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our demo notebook."
] |
[
43,
37,
113,
199,
7,
33
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-cc-by-4.0 #legal #region-us \n# Coreference Resolution in Question Answering (CRaQAn)\n\n250+ question-answer pairs that require coreference resolution across sentences from selected Wikipedia passages.## Generation Process\n\nGiven the relative complexity of our task (coreference resolution across passages for question-answering), we aimed \nto avoid crowd-sourcing this dataset and instead focused on using LLMs to automate our process. Every question-answer\npair in the CRaQAn dataset was automatically generated using a Recursive Criticism and Improvement (RCI) loop. To \naccomplish our RCI loop, we wrote a GENERATOR prompt and several REVIEWER prompts, which can be found here.## Review Process\n\nEvery question-answer pair in the CRaQAn v1 dataset was reviewed by at least two human reviewers. We intend for this to be a\nhigh-trust and high-quality dataset that can be used for various applications. Every human reviewer was given the\nfollowing criteria. For each QA pair:\n\n1. The question is clear and not ambiguous with regards to the text.\n2. The question is a single question, and not two separate or related questions joined by the word \"and\".\n3. The question does not contain or assume any information outside of the required sentences.\n4. The answer is correct and reasonably terse.\n5. The question-answer pair must not rely on any information from outside the required sentences.\n6. The question-answer pair relies on information from each of the required sentences.\n7. The number of required sentences is 2 or 3.\n8. The Markdown is correctly formatted.## CRaQAn Usage## Demo Usage\n\nWe provide all prompts, code, and processes used to generate the CRaQAn-v1 dataset in our demo notebook."
] |
3c3271c66525dbb8e32e62d6ba098bf122517ae2
|
# Dataset Card for "Market_Mail_Synthetic_DataSet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitraheja82/Market_Mail_Synthetic_DataSet
|
[
"region:us"
] |
2023-09-26T12:19:33+00:00
|
{"dataset_info": {"features": [{"name": "product", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "marketing_email", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21260, "num_examples": 10}], "download_size": 25244, "dataset_size": 21260}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T12:19:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Market_Mail_Synthetic_DataSet"
More Information needed
|
[
"# Dataset Card for \"Market_Mail_Synthetic_DataSet\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Market_Mail_Synthetic_DataSet\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Market_Mail_Synthetic_DataSet\"\n\nMore Information needed"
] |
88e81a61c231f5c510dab51cb23f25e5171b1569
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
anilbhatt1/emlo2s5-sample-flagging-HF-dataset
|
[
"region:us"
] |
2023-09-26T12:22:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
|
2023-09-26T12:28:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
fbdcde385d303bd9c79a45e359a36c87bd5acf14
|
# Dataset Card for CZI DRSM
## Dataset Description
- **Homepage:** https://github.com/chanzuckerberg/DRSM-corpus
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis:
Text that describes (i) symptoms, signs, or ‘phenotype’ of a disease;
(ii) the effects of the disease on patient organs, tissues, or cells;
 (iii) the results of clinical tests that reveal pathology (including
 biomarkers); (iv) research that uses this information to figure out
a diagnosis.
- Therapeutics in the clinic:
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism:
- Patient-Based Therapeutics:
Text describing (i) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (ii) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (iii) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- [-1] - the paper is not a primary experimental study in rare disease
- [0] - the study does not directly investigate quality of life
- [1] - the study investigates qol but not as its primary contribution
- [2] - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- [-1] - the paper is not a primary experimental study in rare disease
- [0] - the study is not directly investigating the natural history of a disease
 - [1] - the study includes some elements of a natural history but not as its primary contribution
- [2] - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied.
This data was compiled through the use of a gamified curation approach based on CentaurLabs' 'diagnos.us' platform.
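For readers who want to load the corpus programmatically, a minimal sketch using the `datasets` library follows. The config name below is an assumption based on common BigBIO naming conventions (one text-classification config per corpus) and should be checked against the loader script if it does not resolve.
```python
# Minimal loading sketch; the config name "czi_drsm_bigbio_text" is an assumed
# BigBIO-style name, not confirmed by this card, and may differ in the loader.
from datasets import load_dataset

ds = load_dataset("bigbio/czi_drsm", name="czi_drsm_bigbio_text", split="train")
print(ds[0])  # inspect one record and its label(s)
```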
## Citation Information
```
# N/A
```
|
bigbio/czi_drsm
|
[
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-09-26T12:22:47+00:00
|
{"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "CZI DRSM", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://github.com/chanzuckerberg/DRSM-corpus", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TXTCLASS"]}
|
2023-12-06T17:11:15+00:00
|
[] |
[
"en"
] |
TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for CZI DRSM
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS
Research Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets:
(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers);
- Clinical Characteristics, Disease Pathology, and Diagnosis:
Text that describes (i) symptoms, signs, or ‘phenotype’ of a disease;
(ii) the effects of the disease on patient organs, tissues, or cells;
 (iii) the results of clinical tests that reveal pathology (including
 biomarkers); (iv) research that uses this information to figure out
a diagnosis.
- Therapeutics in the clinic:
Text describing how treatments work in the clinic (but not in a clinical trial).
- Disease mechanism:
- Patient-Based Therapeutics:
Text describing (i) Clinical trials (studies of therapeutic measures being
used on patients in a clinical trial); (ii) Post Marketing Drug Surveillance
(effects of a drug after approval in the general population or as part of
‘standard healthcare’); (iii) Drug repurposing (how a drug that has been
approved for one use is being applied to a new disease).
(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers);
- [-1] - the paper is not a primary experimental study in rare disease
- [0] - the study does not directly investigate quality of life
- [1] - the study investigates qol but not as its primary contribution
- [2] - the study's primary contribution centers on quality of life measures
(C) identifies if a paper is a natural history study (~10k papers).
- [-1] - the paper is not a primary experimental study in rare disease
- [0] - the study is not directly investigating the natural history of a disease
 - [1] - the study includes some elements of a natural history but not as its primary contribution
- [2] - the study's primary contribution centers on observing the time course of a rare disease
These classifications are particularly relevant in rare disease research, a field that is generally understudied.
This data was compiled through the use of a gamified curation approach based on CentaurLabs' 'URL' platform.
|
[
"# Dataset Card for CZI DRSM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\nResearch Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets: \n\n(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers); \n\n - Clinical Characteristics, Disease Pathology, and Diagnosis:\n Text that describes (i) symptoms, signs, or ‘phenotype’ of a disease; \n (ii) the effects of the disease on patient organs, tissues, or cells; \n (iii)) the results of clinical tests that reveal pathology (including\n biomarkers); (iv) research that use this information to figure out\n a diagnosis.\n\n - Therapeutics in the clinic: \n Text describing how treatments work in the clinic (but not in a clinical trial).\n\n - Disease mechanism: \n\n - Patient-Based Therapeutics: \n Text describing (i) Clinical trials (studies of therapeutic measures being \n used on patients in a clinical trial); (ii) Post Marketing Drug Surveillance \n (effects of a drug after approval in the general population or as part of \n ‘standard healthcare’); (iii) Drug repurposing (how a drug that has been \n approved for one use is being applied to a new disease).\n\n(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers); \n\n - [-1] - the paper is not a primary experimental study in rare disease\n\n - [0] - the study does not directly investigate quality of life\n\n - [1] - the study investigates qol but not as its primary contribution\n\n - [2] - the study's primary contribution centers on quality of life measures\n\n(C) identifies if a paper is a natural history study (~10k papers). \n\n - [-1] - the paper is not a primary experimental study in rare disease\n\n - [0] - the study is not directly investigating the natural history of a disease\n\n - [1] - the study includes some elements a natural history but not as its primary contribution\n\n - [2] - the study's primary contribution centers on observing the time course of a rare disease\n \nThese classifications are particularly relevant in rare disease research, a field that is generally understudied. \n\nThis data was compiled through the use of a gamified curation approach based on CentaurLabs' 'URL' platform."
] |
[
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for CZI DRSM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\nResearch Article document classification dataset based on aspects of disease research. Currently, the dataset consists of three subsets: \n\n(A) classifies title/abstracts of papers into most popular subtypes of clinical, basic, and translational papers (~20k papers); \n\n - Clinical Characteristics, Disease Pathology, and Diagnosis:\n Text that describes (i) symptoms, signs, or ‘phenotype’ of a disease; \n (ii) the effects of the disease on patient organs, tissues, or cells; \n (iii)) the results of clinical tests that reveal pathology (including\n biomarkers); (iv) research that use this information to figure out\n a diagnosis.\n\n - Therapeutics in the clinic: \n Text describing how treatments work in the clinic (but not in a clinical trial).\n\n - Disease mechanism: \n\n - Patient-Based Therapeutics: \n Text describing (i) Clinical trials (studies of therapeutic measures being \n used on patients in a clinical trial); (ii) Post Marketing Drug Surveillance \n (effects of a drug after approval in the general population or as part of \n ‘standard healthcare’); (iii) Drug repurposing (how a drug that has been \n approved for one use is being applied to a new disease).\n\n(B) identifies whether a title/abstract of a paper describes substantive research into Quality of Life (~10k papers); \n\n - [-1] - the paper is not a primary experimental study in rare disease\n\n - [0] - the study does not directly investigate quality of life\n\n - [1] - the study investigates qol but not as its primary contribution\n\n - [2] - the study's primary contribution centers on quality of life measures\n\n(C) identifies if a paper is a natural history study (~10k papers). \n\n - [-1] - the paper is not a primary experimental study in rare disease\n\n - [0] - the study is not directly investigating the natural history of a disease\n\n - [1] - the study includes some elements a natural history but not as its primary contribution\n\n - [2] - the study's primary contribution centers on observing the time course of a rare disease\n \nThese classifications are particularly relevant in rare disease research, a field that is generally understudied. \n\nThis data was compiled through the use of a gamified curation approach based on CentaurLabs' 'URL' platform."
] |
[
26,
9,
554
] |
[
"passage: TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n# Dataset Card for CZI DRSM"
] |
70e211010d9084c188408bac231486f37574c5a7
|
# Dataset of Miyauchi Kazuho
This is the dataset of Miyauchi Kazuho, containing 172 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 172 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 411 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 427 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 172 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 172 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 172 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 411 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 411 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 332 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 427 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 427 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
CyberHarem/miyauchi_kazuho_nonnonbiyori
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T12:25:06+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2023-09-27T20:10:15+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of Miyauchi Kazuho
==========================
This is the dataset of Miyauchi Kazuho, containing 172 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
|
[] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
[
44
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
a37a6bd4aeba21934eebab8d9e87ae8e94632572
|
# Dataset Card for "nsql-eng"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ThingsSolver/nsql-eng
|
[
"region:us"
] |
2023-09-26T12:28:45+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "is_english", "dtype": "bool"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 911778978, "num_examples": 261423}], "download_size": 226661607, "dataset_size": 911778978}}
|
2023-09-28T06:39:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "nsql-eng"
More Information needed
|
[
"# Dataset Card for \"nsql-eng\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"nsql-eng\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"nsql-eng\"\n\nMore Information needed"
] |
d742e3025346e7cfc517657c4add8b7fc6c05d28
|
# Dataset Card for "alpaca-data-gpt4-chinese-zhtw"
This dataset contains Chinese (zh-tw) instruction-following data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This dataset is a translation from English to Chinese.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
- `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
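A minimal sketch of loading the dataset and inspecting these fields (assuming only the `datasets` library is installed; field and split names follow this card's metadata):
```python
# Minimal loading sketch: pull the train split and print the core fields
# described above for the first example.
from datasets import load_dataset

ds = load_dataset("erhwenkuo/alpaca-data-gpt4-chinese-zhtw", split="train")

example = ds[0]
print(example["instruction"])  # task description
print(example["input"])        # optional context (may be empty)
print(example["output"])       # GPT-4 generated answer
```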
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and greater length. Here is an example:
#### Example from Alpaca-GPT4:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
|
erhwenkuo/alpaca-data-gpt4-chinese-zhtw
|
[
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"gpt4",
"alpaca",
"instruction-finetuning",
"arxiv:2304.03277",
"region:us"
] |
2023-09-26T12:42:02+00:00
|
{"language": ["zh"], "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "question-answering"], "pretty_name": " alpaca-data-gpt4-chinese-zhtw", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33817106, "num_examples": 52049}], "download_size": 22275874, "dataset_size": 33817106}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["gpt4", "alpaca", "instruction-finetuning"]}
|
2023-09-26T13:03:00+00:00
|
[
"2304.03277"
] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-Chinese #gpt4 #alpaca #instruction-finetuning #arxiv-2304.03277 #region-us
|
# Dataset Card for "alpaca-data-gpt4-chinese-zhtw"
This dataset contains Chinese (zh-tw) instruction-following data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
The dataset was originally shared in this repository: URL This dataset is a translation from English to Chinese.
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.
- 'input': 'str', optional context or input for the task.
- 'output': 'str', the answer to the instruction as generated by 'GPT-4'.
 - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and greater length. Here is an example:
#### Example from Alpaca-GPT4:
#### Same example from original Alpaca:
## Licensing Information
The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0).
|
[
"# Dataset Card for \"alpaca-data-gpt4-chinese-zhtw\"\n\nThis dataset contains Chinese (zh-tw) Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This dataset is a translation from English to Chinese.",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.\n - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginnig.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT4:",
"#### Same example from original Alpaca:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-Chinese #gpt4 #alpaca #instruction-finetuning #arxiv-2304.03277 #region-us \n",
"# Dataset Card for \"alpaca-data-gpt4-chinese-zhtw\"\n\nThis dataset contains Chinese (zh-tw) Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This dataset is a translation from English to Chinese.",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.\n - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginnig.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT4:",
"#### Same example from original Alpaca:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
[
78,
82,
18,
159,
73,
12,
9,
25
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-Chinese #gpt4 #alpaca #instruction-finetuning #arxiv-2304.03277 #region-us \n# Dataset Card for \"alpaca-data-gpt4-chinese-zhtw\"\n\nThis dataset contains Chinese (zh-tw) Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.\n\nThe dataset was originaly shared in this repository: URL This dataset is a translation from English to Chinese.## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.\n - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginnig.## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:#### Example from Alpaca-GPT4:#### Same example from original Alpaca:## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
d986520216b5c25de1edbe60fdf5362be27bd1f1
|
# Dataset of seta_kaoru/瀬田薫/세타카오루 (BanG Dream!)
This is the dataset of seta_kaoru/瀬田薫/세타카오루 (BanG Dream!), containing 239 images and their tags.
The core tags of this character are `purple_hair, red_eyes, bangs, long_hair, ponytail, hair_between_eyes, sidelocks`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 239 | 216.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/seta_kaoru_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 239 | 158.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/seta_kaoru_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 467 | 290.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/seta_kaoru_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 239 | 201.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/seta_kaoru_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 467 | 359.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/seta_kaoru_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/seta_kaoru_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | black_vest, collared_shirt, long_sleeves, smile, white_shirt, simple_background, 1girl, solo, white_background, looking_at_viewer, necklace, black_pants, upper_body, open_mouth, pinstripe_pattern, vertical_stripes |
| 1 | 14 |  |  |  |  |  | 1girl, school_uniform, solo, blazer, grey_jacket, long_sleeves, collared_shirt, looking_at_viewer, striped_necktie, :d, open_mouth, white_background, white_shirt, plaid_skirt, pleated_skirt, upper_body, miniskirt, sparkle, holding, simple_background |
| 2 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, black_headwear, hat_flower, red_rose, brooch, holding, jacket, one_eye_closed, top_hat, upper_body, ascot, cape, formal, hat_ribbon, long_sleeves, pants, purple_bowtie, suit, vest |
| 3 | 8 |  |  |  |  |  | 1girl, blush, hetero, solo_focus, 1boy, nipples, open_mouth, medium_breasts, penis, completely_nude, cum_in_pussy, sex, spread_legs, tears, vaginal, ass, girl_on_top, mosaic_censoring, open_shirt, purple_eyes, small_breasts, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | black_vest | collared_shirt | long_sleeves | smile | white_shirt | simple_background | 1girl | solo | white_background | looking_at_viewer | necklace | black_pants | upper_body | open_mouth | pinstripe_pattern | vertical_stripes | school_uniform | blazer | grey_jacket | striped_necktie | :d | plaid_skirt | pleated_skirt | miniskirt | sparkle | holding | black_headwear | hat_flower | red_rose | brooch | jacket | one_eye_closed | top_hat | ascot | cape | formal | hat_ribbon | pants | purple_bowtie | suit | vest | blush | hetero | solo_focus | 1boy | nipples | medium_breasts | penis | completely_nude | cum_in_pussy | sex | spread_legs | tears | vaginal | ass | girl_on_top | mosaic_censoring | open_shirt | purple_eyes | small_breasts | sweat |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------|:-----------------|:---------------|:--------|:--------------|:--------------------|:--------|:-------|:-------------------|:--------------------|:-----------|:--------------|:-------------|:-------------|:--------------------|:-------------------|:-----------------|:---------|:--------------|:------------------|:-----|:--------------|:----------------|:------------|:----------|:----------|:-----------------|:-------------|:-----------|:---------|:---------|:-----------------|:----------|:--------|:-------|:---------|:-------------|:--------|:----------------|:-------|:-------|:--------|:---------|:-------------|:-------|:----------|:-----------------|:--------|:------------------|:---------------|:------|:--------------|:--------|:----------|:------|:--------------|:-------------------|:-------------|:--------------|:----------------|:--------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | | X | X | | X | X | X | X | X | X | | | X | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | | | X | X | | | X | X | | X | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/seta_kaoru_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T12:57:26+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:27:55+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of seta\_kaoru/瀬田薫/세타카오루 (BanG Dream!)
==============================================
This is the dataset of seta\_kaoru/瀬田薫/세타카오루 (BanG Dream!), containing 239 images and their tags.
The core tags of this character are 'purple\_hair, red\_eyes, bangs, long\_hair, ponytail, hair\_between\_eyes, sidelocks', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
f34f8736e4704888a11eaba0e9337879a0a95a76
|
# Dataset Card for "pollution-krakow-no2-co"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vitaliy-sharandin/pollution-krakow-no2-co
|
[
"region:us"
] |
2023-09-26T12:59:04+00:00
|
{"dataset_info": {"features": [{"name": "NO2", "dtype": "float64"}, {"name": "CO", "dtype": "float64"}, {"name": "dt", "dtype": "timestamp[ns]"}], "splits": [{"name": "train", "num_bytes": 6816, "num_examples": 284}], "download_size": 9084, "dataset_size": 6816}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T13:05:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pollution-krakow-no2-co"
More Information needed
|
[
"# Dataset Card for \"pollution-krakow-no2-co\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pollution-krakow-no2-co\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pollution-krakow-no2-co\"\n\nMore Information needed"
] |
178fb0d8975d7ccf722acaf4d2b24f658b0e6422
|
# Dataset Card for "ppo-seals-Ant-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/ppo-seals-Ant-v1
|
[
"region:us"
] |
2023-09-26T13:12:32+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 141011280, "num_examples": 104}], "download_size": 41078990, "dataset_size": 141011280}}
|
2023-09-27T05:56:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ppo-seals-Ant-v1"
More Information needed
|
[
"# Dataset Card for \"ppo-seals-Ant-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ppo-seals-Ant-v1\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ppo-seals-Ant-v1\"\n\nMore Information needed"
] |
e3f352fcc27a445be7f70825f2fee20aa3b5b031
|
# Dataset of kurata_mashiro/倉田ましろ (BanG Dream!)
This is the dataset of kurata_mashiro/倉田ましろ (BanG Dream!), containing 230 images and their tags.
The core tags of this character are `bangs, blue_eyes, hair_between_eyes, short_hair, breasts, white_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 230 | 344.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurata_mashiro_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 230 | 185.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurata_mashiro_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 564 | 404.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurata_mashiro_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 230 | 298.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurata_mashiro_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 564 | 606.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kurata_mashiro_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
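If you only need one of the preprocessed IMG+TXT packages rather than the raw archive, the same `hf_hub_download` call can fetch it directly. A minimal sketch for the 800px package (repository id and filename taken from the table above) is:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/kurata_mashiro_bangdream',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract to a local directory; IMG+TXT packages ship images together with .txt tag files
dataset_dir = 'dataset-800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```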
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kurata_mashiro_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_headwear, white_jacket, black_gloves, blush, long_sleeves, open_mouth, simple_background, virtual_youtuber, white_background, black_footwear, earrings, full_body, long_hair, standing, white_shirt, white_socks, :d, aqua_eyes, black_ribbon, blue_hair, boots, green_eyes, holding_microphone, mini_hat, neck_ribbon, white_skirt |
| 1 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_headwear, black_gloves, long_sleeves, white_jacket, white_shirt, tilted_headwear, white_skirt, blush, earrings, open_mouth, black_ribbon, blue_butterfly, half_gloves, outstretched_arm, buttons, mini_hat, smile |
| 2 | 7 |  |  |  |  |  | 1girl, blush, solo, earrings, looking_at_viewer, long_sleeves, white_background, black_gloves, blue_hair, jacket, open_mouth, shirt, simple_background, smile, blue_butterfly, closed_mouth, hair_ornament, mini_hat, virtual_youtuber |
| 3 | 24 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, long_sleeves, white_sailor_collar, white_background, open_mouth, simple_background, neckerchief, pleated_skirt, smile, upper_body, black_shirt, blue_serafuku |
| 4 | 6 |  |  |  |  |  | 1girl, blush, long_sleeves, looking_at_viewer, solo, blue_dress, neck_ribbon, vertical-striped_dress, blue_hair, simple_background, white_background, white_shirt, blue_ribbon, collared_shirt, open_mouth, standing |
| 5 | 7 |  |  |  |  |  | 1girl, blush, looking_at_viewer, navel, nipples, solo, collarbone, pussy, stomach, completely_nude, large_breasts, medium_hair, sweat, wet, closed_mouth, groin, shiny_skin, simple_background, smile, standing, aqua_eyes, blue_hair, cowboy_shot, grey_background, hand_up, medium_breasts, mosaic_censoring, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | white_headwear | white_jacket | black_gloves | blush | long_sleeves | open_mouth | simple_background | virtual_youtuber | white_background | black_footwear | earrings | full_body | long_hair | standing | white_shirt | white_socks | :d | aqua_eyes | black_ribbon | blue_hair | boots | green_eyes | holding_microphone | mini_hat | neck_ribbon | white_skirt | tilted_headwear | blue_butterfly | half_gloves | outstretched_arm | buttons | smile | jacket | shirt | closed_mouth | hair_ornament | white_sailor_collar | neckerchief | pleated_skirt | upper_body | black_shirt | blue_serafuku | blue_dress | vertical-striped_dress | blue_ribbon | collared_shirt | navel | nipples | collarbone | pussy | stomach | completely_nude | large_breasts | medium_hair | sweat | wet | groin | shiny_skin | cowboy_shot | grey_background | hand_up | medium_breasts | mosaic_censoring |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-----------------|:---------------|:---------------|:--------|:---------------|:-------------|:--------------------|:-------------------|:-------------------|:-----------------|:-----------|:------------|:------------|:-----------|:--------------|:--------------|:-----|:------------|:---------------|:------------|:--------|:-------------|:---------------------|:-----------|:--------------|:--------------|:------------------|:-----------------|:--------------|:-------------------|:----------|:--------|:---------|:--------|:---------------|:----------------|:----------------------|:--------------|:----------------|:-------------|:--------------|:----------------|:-------------|:-------------------------|:--------------|:-----------------|:--------|:----------|:-------------|:--------|:----------|:------------------|:----------------|:--------------|:--------|:------|:--------|:-------------|:--------------|:------------------|:----------|:-----------------|:-------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | X | | | | X | | | | X | | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | | | X | X | X | X | X | X | X | | X | | | | | | | | | X | | | | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | X | X | | | | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | X | | | | X | X | X | X | | X | | | | | X | X | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | X | | | | X | | X | X | | | | | | | X | | | | X | | X | | | | | | | | | | | | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kurata_mashiro_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T13:21:47+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T17:46:15+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kurata\_mashiro/倉田ましろ (BanG Dream!)
==============================================
This is the dataset of kurata\_mashiro/倉田ましろ (BanG Dream!), containing 230 images and their tags.
The core tags of this character are 'bangs, blue\_eyes, hair\_between\_eyes, short\_hair, breasts, white\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6b16e6d3cfc806252c233909ac8840ba44566379
|
# Dataset Card for "ppo-seals-HalfCheetah-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/ppo-seals-HalfCheetah-v1
|
[
"region:us"
] |
2023-09-26T13:41:04+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 92213656, "num_examples": 104}], "download_size": 25621245, "dataset_size": 92213656}}
|
2023-09-27T05:57:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ppo-seals-HalfCheetah-v1"
More Information needed
|
[
"# Dataset Card for \"ppo-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ppo-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ppo-seals-HalfCheetah-v1\"\n\nMore Information needed"
] |
4a2938b10384f6a517c8a85fc36a0b2eeee942c0
|
# Dataset Card for Dataset Name
## Dataset Description
### Dataset Summary
This dataset was created by scraping the screenplays from the IMSDb website and splitting each into 100 segments.
Each segment has been fed into an emotion classification model and labeled with the emotion it evokes, represented as a number from 1 to 6.
Each number represents one of six emotions:
1 - joy
2 - love
3 - surprise
4 - sadness
5 - anger
6 - fear
These numbers are then stored as a one dimensional vector of length 100 in the column emotions.
Columns:
href - the page address the script was taken from; it should be prefixed with "https://imsdb.com/" to form the full URL.
title - Title of the film
script - The whole screenplay
scenes - a list of length 100; the script split into 100 segments, stored in order
emotions - a list of emotion codes where each element corresponds to the scene segment at the same position.
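As a quick orientation, a minimal loading sketch is given below. It assumes the repository id under which this card is published (hakkam10/screenplay_emotions), that the data loads with the standard `datasets` API under a `train` split, and that the emotions column holds the numeric codes listed above; adjust if the actual schema differs.

```python
from datasets import load_dataset

# Mapping from the numeric codes described above to emotion labels
EMOTION_LABELS = {1: "joy", 2: "love", 3: "surprise", 4: "sadness", 5: "anger", 6: "fear"}

ds = load_dataset("hakkam10/screenplay_emotions", split="train")  # split name is an assumption
row = ds[0]

print(row["title"])
print("https://imsdb.com/" + row["href"])  # href is relative, per the column description

# Pair the first few of the 100 segments with their emotion codes/labels
for segment, code in list(zip(row["scenes"], row["emotions"]))[:5]:
    print(EMOTION_LABELS.get(int(code), code), "-", segment[:60])
```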
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
hakkam10/screenplay_emotions
|
[
"region:us"
] |
2023-09-26T13:41:07+00:00
|
{}
|
2023-09-26T15:00:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
### Dataset Summary
This dataset was created by scraping the screenplays from the IMSDb website and splitting each into 100 segments.
Each segment has been fed into an emotion classification model and labeled with the emotion it evokes, represented as a number from 1 to 6.
Each number represents one of six emotions:
1 - joy
2 - love
3 - surprise
4 - sadness
5 - anger
6 - fear
These numbers are then stored as a one dimensional vector of length 100 in the column emotions.
Columns:
href - link of the website address from where the script was taken. It should be prefixed with "URL
title - Title of the film
script - The whole screenplay
scenes - a list of length 100. scripts are segmented into 100 segments and stored as a list
emotions - a list of emotions where each element corresponds to the segments of screenplay at the same position.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset was created by scrapping the screenplays from the imsdb website and then splitting them into 100 segments. \nEach segment has been fed into a emotion classification model and classified into the emotion it evokes and represented as a number from 1 to 6.\nEach number represents one of six emotions:\n\n1 - joy\n\n2 - love\n\n3 - surprise\n\n4 - sadness\n\n5 - anger\n\n6 - fear\n\n\nThese numbers are then stored as a one dimensional vector of length 100 in the column emotions.\n\nColumns:\n\nhref - link of the website address from where the script was taken. It should be prefixed with \"URL\n\ntitle - Title of the film\n\nscript - The whole screenplay\n\nscenes - a list of length 100. scripts are segmented into 100 segments and stored as a list\n\nemotions - a list of emotions where each element corresponds to the segments of screenplay at the same position.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset was created by scrapping the screenplays from the imsdb website and then splitting them into 100 segments. \nEach segment has been fed into a emotion classification model and classified into the emotion it evokes and represented as a number from 1 to 6.\nEach number represents one of six emotions:\n\n1 - joy\n\n2 - love\n\n3 - surprise\n\n4 - sadness\n\n5 - anger\n\n6 - fear\n\n\nThese numbers are then stored as a one dimensional vector of length 100 in the column emotions.\n\nColumns:\n\nhref - link of the website address from where the script was taken. It should be prefixed with \"URL\n\ntitle - Title of the film\n\nscript - The whole screenplay\n\nscenes - a list of length 100. scripts are segmented into 100 segments and stored as a list\n\nemotions - a list of emotions where each element corresponds to the segments of screenplay at the same position.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
8,
4,
205,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Description### Dataset Summary\n\nThis dataset was created by scrapping the screenplays from the imsdb website and then splitting them into 100 segments. \nEach segment has been fed into a emotion classification model and classified into the emotion it evokes and represented as a number from 1 to 6.\nEach number represents one of six emotions:\n\n1 - joy\n\n2 - love\n\n3 - surprise\n\n4 - sadness\n\n5 - anger\n\n6 - fear\n\n\nThese numbers are then stored as a one dimensional vector of length 100 in the column emotions.\n\nColumns:\n\nhref - link of the website address from where the script was taken. It should be prefixed with \"URL\n\ntitle - Title of the film\n\nscript - The whole screenplay\n\nscenes - a list of length 100. scripts are segmented into 100 segments and stored as a list\n\nemotions - a list of emotions where each element corresponds to the segments of screenplay at the same position.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
780a2b0e1140a9762947f8b2566d7cd064c2e024
|
# Dataset Card for "ppo-seals-Hopper-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/ppo-seals-Hopper-v1
|
[
"region:us"
] |
2023-09-26T13:42:54+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 57153894, "num_examples": 104}], "download_size": 12420708, "dataset_size": 57153894}}
|
2023-09-27T06:06:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ppo-seals-Hopper-v1"
More Information needed
|
[
"# Dataset Card for \"ppo-seals-Hopper-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ppo-seals-Hopper-v1\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ppo-seals-Hopper-v1\"\n\nMore Information needed"
] |
6288c4971f1c03943269147986267d6c3737382e
|
# Dataset Card for "ppo-seals-Swimmer-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/ppo-seals-Swimmer-v1
|
[
"region:us"
] |
2023-09-26T13:44:14+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 131302158, "num_examples": 104}], "download_size": 23343768, "dataset_size": 131302158}}
|
2023-09-27T06:01:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ppo-seals-Swimmer-v1"
More Information needed
|
[
"# Dataset Card for \"ppo-seals-Swimmer-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ppo-seals-Swimmer-v1\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ppo-seals-Swimmer-v1\"\n\nMore Information needed"
] |
accb242e3831dbc0314a1340c38c3c3ac6cff1d7
|
# Dataset Card for "ppo-seals-Walker2d-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HumanCompatibleAI/ppo-seals-Walker2d-v1
|
[
"region:us"
] |
2023-09-26T13:45:14+00:00
|
{"dataset_info": {"features": [{"name": "obs", "sequence": {"sequence": "float64"}}, {"name": "acts", "sequence": {"sequence": "float32"}}, {"name": "infos", "sequence": "string"}, {"name": "terminal", "dtype": "bool"}, {"name": "rews", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 63405655, "num_examples": 104}], "download_size": 20942934, "dataset_size": 63405655}}
|
2023-09-27T06:09:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ppo-seals-Walker2d-v1"
More Information needed
|
[
"# Dataset Card for \"ppo-seals-Walker2d-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ppo-seals-Walker2d-v1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ppo-seals-Walker2d-v1\"\n\nMore Information needed"
] |
735d163efa01376c70d796c6627638110b0317c0
|
# Dataset Card for "squad_baseline_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_baseline_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:58:45+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45381, "num_examples": 44}, {"name": "validation", "num_bytes": 47457, "num_examples": 50}], "download_size": 43725, "dataset_size": 92838}}
|
2023-09-26T13:58:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_baseline_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_baseline_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_baseline_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_baseline_v4_train_10_eval_10\"\n\nMore Information needed"
] |
4980e6c7ec2d47beccaaf90390c3882018b53d63
|
# Dataset Card for "squad_context_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_context_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:58:51+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78251, "num_examples": 44}, {"name": "validation", "num_bytes": 80830, "num_examples": 50}], "download_size": 63029, "dataset_size": 159081}}
|
2023-09-26T13:58:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_context_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_context_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_context_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_context_v4_train_10_eval_10\"\n\nMore Information needed"
] |
fe58557f2b3e8f53ec4acaf6eb99a9385ea68762
|
# Dataset Card for "squad_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_title_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:58:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 203084, "num_examples": 138}, {"name": "validation", "num_bytes": 50807, "num_examples": 50}], "download_size": 65145, "dataset_size": 253891}}
|
2023-09-26T13:59:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_title_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
b81d9d801a26743ebf6675f840f59326aa9fb948
|
# Dataset Card for "squad_wrong_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_wrong_title_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:59:04+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 203084, "num_examples": 138}, {"name": "validation", "num_bytes": 50820, "num_examples": 50}], "download_size": 65070, "dataset_size": 253904}}
|
2023-09-26T13:59:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_wrong_title_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_wrong_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_wrong_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_wrong_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
0993d80019e2550c517d28af252f870d43312d63
|
# Dataset Card for "squad_no_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_no_title_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:59:10+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 203084, "num_examples": 138}, {"name": "validation", "num_bytes": 48707, "num_examples": 50}], "download_size": 64510, "dataset_size": 251791}}
|
2023-09-26T13:59:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_no_title_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_no_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_title_v4_train_10_eval_10\"\n\nMore Information needed"
] |
d65a7be5c468aee419627cfa754a75fa81535153
|
# Dataset Card for "squad_no_title_strict_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/squad_no_title_strict_v4_train_10_eval_10
|
[
"region:us"
] |
2023-09-26T13:59:16+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 189138.85714285713, "num_examples": 138}, {"name": "validation", "num_bytes": 48707, "num_examples": 50}], "download_size": 64219, "dataset_size": 237845.85714285713}}
|
2023-09-26T13:59:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "squad_no_title_strict_v4_train_10_eval_10"
More Information needed
|
[
"# Dataset Card for \"squad_no_title_strict_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_no_title_strict_v4_train_10_eval_10\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_no_title_strict_v4_train_10_eval_10\"\n\nMore Information needed"
] |
51b243d0575ac4b4349c6f0feb5daecf2ba46b3c
|
# Dataset Card for "viettel_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nguyenthanhdo/viettel_v3
|
[
"region:us"
] |
2023-09-26T14:02:42+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "translated", "dtype": "bool"}, {"name": "output_len", "dtype": "int64"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 172800903.0, "num_examples": 60000}], "download_size": 84019395, "dataset_size": 172800903.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T14:02:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "viettel_v3"
More Information needed
|
[
"# Dataset Card for \"viettel_v3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"viettel_v3\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"viettel_v3\"\n\nMore Information needed"
] |
1371351e51fcd274be466ebabf8ef0b6b95ffeb2
|
# Dataset Card for "db197d09"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/db197d09
|
[
"region:us"
] |
2023-09-26T14:16:30+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 170, "num_examples": 10}], "download_size": 1327, "dataset_size": 170}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T14:16:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "db197d09"
More Information needed
|
[
"# Dataset Card for \"db197d09\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"db197d09\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"db197d09\"\n\nMore Information needed"
] |
3a590d29033c305fa1593b427127e1872a2173d1
|
# Dataset Card for "3677a860"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/3677a860
|
[
"region:us"
] |
2023-09-26T14:26:21+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 240, "num_examples": 10}], "download_size": 1441, "dataset_size": 240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T14:26:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "3677a860"
More Information needed
|
[
"# Dataset Card for \"3677a860\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"3677a860\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"3677a860\"\n\nMore Information needed"
] |
b7cabcb38a34d3590eb6e06f4a67c1e78ea6f77c
|
# Dataset Card for "3b801040"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-muse256-muse512-wuerst-sdv15/3b801040
|
[
"region:us"
] |
2023-09-26T14:28:57+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 189, "num_examples": 10}], "download_size": 1374, "dataset_size": 189}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T14:28:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "3b801040"
More Information needed
|
[
"# Dataset Card for \"3b801040\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"3b801040\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"3b801040\"\n\nMore Information needed"
] |
066a4a3e06d4eb7805b210273c077fec0410b793
|
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
# Dataset Card for "openorca-chinese-zhtw"
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The origin data is primarily in English; this dataset was translated into Traditional Chinese using Google Translate.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
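A minimal streaming sketch (repository id taken from where this card is published, erhwenkuo/openorca-chinese-zhtw; split name and field names follow the metadata and Data Fields section above) might look like this:

```python
from itertools import islice

from datasets import load_dataset

# stream the large train split rather than downloading it in full
ds = load_dataset("erhwenkuo/openorca-chinese-zhtw", split="train", streaming=True)

for example in islice(ds, 3):
    print(example["id"])
    print("system  :", example["system_prompt"][:60])
    print("question:", example["question"][:60])
    print("response:", example["response"][:60])
    print("-" * 40)
```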
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
erhwenkuo/openorca-chinese-zhtw
|
[
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:zh",
"license:mit",
"arxiv:2301.13688",
"arxiv:2306.02707",
"region:us"
] |
2023-09-26T14:36:15+00:00
|
{"language": ["zh"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": " openorca-chinese-zhtw", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6491661288, "num_examples": 4233915}], "download_size": 4106469779, "dataset_size": 6491661288}}
|
2023-09-26T21:30:01+00:00
|
[
"2301.13688",
"2306.02707"
] |
[
"zh"
] |
TAGS
#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #arxiv-2306.02707 #region-us
|
## Table of Contents
- Dataset Summary
- Dataset Attribution
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Dataset Use
- Use Cases
- Usage Caveats
- Getting Started
# Dataset Card for "openorca-chinese-zhtw"
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented FLAN Collection data.
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the origin data is primarily English and this dataset is translated by Google Translation to traditional Chinese.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.
These are referenced by the official FLAN Collection repo as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
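A minimal streaming-load sketch consistent with the recommendation above; the repo id is a placeholder (the exact Hub path is not stated here), and the field names follow the Data Fields section:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hub id of this dataset.
repo_id = "ORG/openorca-chinese-zhtw"

# Stream the data instead of downloading the full files up front.
dataset = load_dataset(repo_id, split="train", streaming=True)

# Inspect a few records and the documented fields.
for i, example in enumerate(dataset):
    print(example["id"], example["question"][:60], example["response"][:60])
    if i >= 2:
        break
```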
|
[
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: data/train-*\n---",
"# Dataset Card for \"openorca-chinese-zhtw\"\n\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the origin data is primarily English and this dataset is translated by Google Translation to traditional Chinese.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
] |
[
"TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #arxiv-2306.02707 #region-us \n",
"## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: data/train-*\n---",
"# Dataset Card for \"openorca-chinese-zhtw\"\n\n\n<a name=\"dataset-summary\"></a>",
"# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>",
"# Languages\n\nThe language of the origin data is primarily English and this dataset is translated by Google Translation to traditional Chinese.\n\n<a name=\"dataset-structure\"></a>",
"# Dataset Structure\n\n<a name=\"data-instances\"></a>",
"## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>",
"## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>",
"## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>",
"# Dataset Creation\n\n<a name=\"curation-rationale\"></a>",
"## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>",
"## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>",
"# Dataset Use\n\n<a name=\"use-cases\"></a>",
"## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>",
"## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>",
"## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face."
] |
[
163,
106,
29,
129,
86,
40,
19,
67,
153,
24,
18,
146,
235,
16,
46,
70,
66
] |
[
"passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-Chinese #license-mit #arxiv-2301.13688 #arxiv-2306.02707 #region-us \n## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: data/train-*\n---# Dataset Card for \"openorca-chinese-zhtw\"\n\n\n<a name=\"dataset-summary\"></a># Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>",
"passage: # Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a># Languages\n\nThe language of the origin data is primarily English and this dataset is translated by Google Translation to traditional Chinese.\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a># Dataset Creation\n\n<a name=\"curation-rationale\"></a>## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>"
] |
ad8c90107da43fdf7b90b3179fdc67916abb3d1e
|
# Dataset of yamato_maya/大和麻弥/야마토마야 (BanG Dream!)
This is the dataset of yamato_maya/大和麻弥/야마토마야 (BanG Dream!), containing 166 images and their tags.
The core tags of this character are `brown_hair, green_eyes, bangs, short_hair, breasts, glasses, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 166 | 181.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_maya_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 166 | 127.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_maya_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 373 | 250.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_maya_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 166 | 167.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_maya_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 373 | 315.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yamato_maya_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yamato_maya_bangdream',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
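For the pre-processed IMG+TXT packages listed in the table above, a minimal sketch of downloading one archive and pairing images with their tag files is given below. The per-file layout (a `.txt` tag file sharing each image's basename) is an assumption about the package format rather than something stated in this card.

```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download one of the pre-processed packages (shorter side <= 800 px)
zip_file = hf_hub_download(
    repo_id='CyberHarem/yamato_maya_bangdream',
    repo_type='dataset',
    filename='dataset-800.zip',
)

extract_dir = 'dataset_800'
os.makedirs(extract_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(extract_dir)

# pair each image with the tag file sharing its basename (assumed layout)
for name in sorted(os.listdir(extract_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() in ('.png', '.jpg', '.jpeg', '.webp'):
        txt_path = os.path.join(extract_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, 'r', encoding='utf-8') as f:
                print(name, '->', f.read().strip()[:80])
```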
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, looking_at_viewer, solo, red-framed_eyewear, collarbone, open_mouth, blush, suspenders, under-rim_eyewear, fingerless_gloves, simple_background, tank_top, white_background, bare_shoulders, hat, sleeveless, :d, jacket, white_shirt, clothes_around_waist, hair_between_eyes, large_breasts |
| 1 | 7 |  |  |  |  |  | 1girl, bare_shoulders, blush, collarbone, green_choker, solo, looking_at_viewer, green_dress, open_mouth, :d, green_background, green_bow, grey_eyes, hair_between_eyes, frills, hair_bow, hair_ribbon, strapless_dress, upper_body, white_bow, white_thighhighs |
| 2 | 6 |  |  |  |  |  | :d, blush, hair_flower, open_mouth, 1girl, green_dress, holding, choker, frilled_dress, looking_at_viewer, sleeveless_dress, solo, white_gloves, hair_between_eyes, jewelry, medium_hair, standing |
| 3 | 7 |  |  |  |  |  | 1girl, blush, school_uniform, solo, long_sleeves, looking_at_viewer, red-framed_eyewear, under-rim_eyewear, white_shirt, blazer, collared_shirt, open_mouth, pleated_skirt, striped_necktie, grey_jacket, plaid_skirt, hair_between_eyes, smile |
| 4 | 7 |  |  |  |  |  | long_sleeves, ribbed_sweater, smile, under-rim_eyewear, 1girl, looking_at_viewer, open_mouth, red-framed_eyewear, solo, turtleneck_sweater, black_sweater, blush, simple_background, upper_body, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | red-framed_eyewear | collarbone | open_mouth | blush | suspenders | under-rim_eyewear | fingerless_gloves | simple_background | tank_top | white_background | bare_shoulders | hat | sleeveless | :d | jacket | white_shirt | clothes_around_waist | hair_between_eyes | large_breasts | green_choker | green_dress | green_background | green_bow | grey_eyes | frills | hair_bow | hair_ribbon | strapless_dress | upper_body | white_bow | white_thighhighs | hair_flower | holding | choker | frilled_dress | sleeveless_dress | white_gloves | jewelry | medium_hair | standing | school_uniform | long_sleeves | blazer | collared_shirt | pleated_skirt | striped_necktie | grey_jacket | plaid_skirt | smile | ribbed_sweater | turtleneck_sweater | black_sweater |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------------|:-------------|:-------------|:--------|:-------------|:--------------------|:--------------------|:--------------------|:-----------|:-------------------|:-----------------|:------|:-------------|:-----|:---------|:--------------|:-----------------------|:--------------------|:----------------|:---------------|:--------------|:-------------------|:------------|:------------|:---------|:-----------|:--------------|:------------------|:-------------|:------------|:-------------------|:--------------|:----------|:---------|:----------------|:-------------------|:---------------|:----------|:--------------|:-----------|:-----------------|:---------------|:---------|:-----------------|:----------------|:------------------|:--------------|:--------------|:--------|:-----------------|:---------------------|:----------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | | X | X | X | | | | | | | X | | | X | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | | | X | X | | | | | | | | | | X | | | | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | | X | X | | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | |
| 4 | 7 |  |  |  |  |  | X | X | X | X | | X | X | | X | | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | X | X | X | X |
|
CyberHarem/yamato_maya_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T14:36:58+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:18:25+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yamato\_maya/大和麻弥/야마토마야 (BanG Dream!)
================================================
This is the dataset of yamato\_maya/大和麻弥/야마토마야 (BanG Dream!), containing 166 images and their tags.
The core tags of this character are 'brown\_hair, green\_eyes, bangs, short\_hair, breasts, glasses, bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7acf213dc405f3afb8ede239bcc21707e4c41056
|
# Dataset Card for "VietMed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
anonymousQA/VietMedQA
|
[
"doi:10.57967/hf/1247",
"region:us"
] |
2023-09-26T14:38:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "dataset_name", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 270466008.280478, "num_examples": 224741}, {"name": "test", "num_bytes": 30052714.71952201, "num_examples": 24972}], "download_size": 129219097, "dataset_size": 300518723.0}}
|
2023-10-20T11:39:08+00:00
|
[] |
[] |
TAGS
#doi-10.57967/hf/1247 #region-us
|
# Dataset Card for "VietMed"
More Information needed
|
[
"# Dataset Card for \"VietMed\"\n\nMore Information needed"
] |
[
"TAGS\n#doi-10.57967/hf/1247 #region-us \n",
"# Dataset Card for \"VietMed\"\n\nMore Information needed"
] |
[
18,
13
] |
[
"passage: TAGS\n#doi-10.57967/hf/1247 #region-us \n# Dataset Card for \"VietMed\"\n\nMore Information needed"
] |
3e0bfd326e852fac025fd82a66d0998f6a3a821e
|
# Dataset Card for "dica_v3_283k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DigirentEnterprise/dica_v3_283k
|
[
"region:us"
] |
2023-09-26T14:41:37+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "dataset_name", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292501368, "num_examples": 284102}], "download_size": 145095136, "dataset_size": 292501368}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T14:42:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dica_v3_283k"
More Information needed
|
[
"# Dataset Card for \"dica_v3_283k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dica_v3_283k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dica_v3_283k\"\n\nMore Information needed"
] |
4882d5ec74d98d4b8d8ee3fc962de078f23b24f3
|
# Dataset Card for "map2sat-central-belt-clarity-old-map20-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mespinosami/map2sat-central-belt-clarity-old-map20-samples
|
[
"region:us"
] |
2023-09-26T14:42:52+00:00
|
{"dataset_info": {"features": [{"name": "input_image", "dtype": "image"}, {"name": "edit_prompt", "dtype": "string"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 857306.8, "num_examples": 16}, {"name": "test", "num_bytes": 201058.2, "num_examples": 4}], "download_size": 1061836, "dataset_size": 1058365.0}}
|
2023-09-26T14:42:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "map2sat-central-belt-clarity-old-map20-samples"
More Information needed
|
[
"# Dataset Card for \"map2sat-central-belt-clarity-old-map20-samples\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"map2sat-central-belt-clarity-old-map20-samples\"\n\nMore Information needed"
] |
[
6,
29
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"map2sat-central-belt-clarity-old-map20-samples\"\n\nMore Information needed"
] |
68abb13f0fdc4d88e75c1601cdbe8608a47900d3
|
# Dataset of futaba_tsukushi (BanG Dream!)
This is the dataset of futaba_tsukushi (BanG Dream!), containing 130 images and their tags.
The core tags of this character are `long_hair, bangs, black_hair, twintails, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 130 | 180.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futaba_tsukushi_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 130 | 99.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futaba_tsukushi_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 314 | 208.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futaba_tsukushi_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 130 | 157.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futaba_tsukushi_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 314 | 304.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/futaba_tsukushi_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/futaba_tsukushi_bangdream',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | looking_at_viewer, black_gloves, earrings, hat, blush, 1girl, half_gloves, skirt, solo, white_headwear, long_sleeves, black_ribbon, smile, open_mouth, red_eyes, low_twintails, neck_ribbon, white_background, white_jacket, cowboy_shot, holding |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, polka_dot_dress, smile, solo, yellow_dress, belt, hair_ornament, long_sleeves, simple_background, blush, closed_mouth, crossed_arms, floating_hair, hair_bow, standing, white_background, brown_footwear, choker, collarbone, frilled_dress, frilled_socks, full_body, medium_dress, necklace, polka_dot_bow, red_footwear, shiny_hair, white_socks |
| 2 | 20 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, long_sleeves, pleated_skirt, white_sailor_collar, smile, blue_skirt, collarbone, simple_background, white_background, blue_shirt, low_twintails, upper_body, white_neckerchief, blue_serafuku, breasts, closed_mouth, open_mouth, purple_hair |
| 3 | 7 |  |  |  |  |  | 1girl, blush, small_breasts, solo, looking_at_viewer, open_mouth, collarbone, simple_background, white_background, bikini, cleavage, navel, purple_hair, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | black_gloves | earrings | hat | blush | 1girl | half_gloves | skirt | solo | white_headwear | long_sleeves | black_ribbon | smile | open_mouth | red_eyes | low_twintails | neck_ribbon | white_background | white_jacket | cowboy_shot | holding | polka_dot_dress | yellow_dress | belt | hair_ornament | simple_background | closed_mouth | crossed_arms | floating_hair | hair_bow | standing | brown_footwear | choker | collarbone | frilled_dress | frilled_socks | full_body | medium_dress | necklace | polka_dot_bow | red_footwear | shiny_hair | white_socks | pleated_skirt | white_sailor_collar | blue_skirt | blue_shirt | upper_body | white_neckerchief | blue_serafuku | breasts | purple_hair | small_breasts | bikini | cleavage | navel | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:---------------|:-----------|:------|:--------|:--------|:--------------|:--------|:-------|:-----------------|:---------------|:---------------|:--------|:-------------|:-----------|:----------------|:--------------|:-------------------|:---------------|:--------------|:----------|:------------------|:---------------|:-------|:----------------|:--------------------|:---------------|:---------------|:----------------|:-----------|:-----------|:-----------------|:---------|:-------------|:----------------|:----------------|:------------|:---------------|:-----------|:----------------|:---------------|:-------------|:--------------|:----------------|:----------------------|:-------------|:-------------|:-------------|:--------------------|:----------------|:----------|:--------------|:----------------|:---------|:-----------|:--------|:----------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | | | X | X | | | X | | X | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 20 |  |  |  |  |  | X | | | | X | X | | | X | | X | | X | X | | X | | X | | | | | | | | X | X | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | |
| 3 | 7 |  |  |  |  |  | X | | | | X | X | | | X | | | | | X | | | | X | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
CyberHarem/futaba_tsukushi_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T14:48:05+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T19:11:41+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of futaba\_tsukushi (BanG Dream!)
=========================================
This is the dataset of futaba\_tsukushi (BanG Dream!), containing 130 images and their tags.
The core tags of this character are 'long\_hair, bangs, black\_hair, twintails, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
63c2408f0be600c9222e75451d7775f679a7716f
|
# Dataset Card for "NFT-70M_transactions"
## Dataset summary
The *NFT-70M_transactions* dataset is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io), the leading trading platform in the Web3 ecosystem.
With more than 70M transactions enriched with metadata, this dataset is conceived to support a wide range of tasks, ranging from sequential and transactional data processing/analysis to graph-based modeling of the complex relationships between traders.
Besides, the availability of textual and image contents further amplifies the modeling capabilities and usage opportunities of this dataset, making it a unique and comprehensive multimodal source of information for delving into the NFT landscape.
This dataset can serve as a benchmark for various innovative and impactful tasks within the crypto landscape, such as projecting NFT prices or detecting fraudulent and wash trading activities.
Furthermore, the multimodal nature of the dataset fosters the development of classification models, as well as textual and visual generative models.
## Data anonymization
We point out that the collected NFT transactions and metadata from OpenSea are publicly distributed on blockchain.
For our purposes of re-distribution, we are also committed to ensuring non-disclosure of information that might lead to identifying the NFT creators, in order to comply with privacy-preserving requirements and to avoid violating data protection regulations and property rights.
In this respect, we carried out three actions:
- Values of all variables describing non-sensitive information were kept in their original form;
- Values of all variables describing sensitive information were anonymized, in a one-way, non-revertible mode;
- URLs of image data and textual contents (i.e., NFT images and their descriptions) were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) of the image/text contents obtained via neural network models. Such embeddings are eventually provided in place of their original image and text data,
and can be found in the [**NFT-70M_image**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_image) and [**NFT-70M_text**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_text) supplementary datasets, respectively.
## Data Fields
| Variable | Type | Description | Processing | Notes |
|--------------------------|-------------|-----------------------------------------------------------------------------------------------------------|------------------|-----------------------------------|
| token_id | String | The id of the NFT — this value is unique within the same collection | Anonymized | Original values were replaced by hash-codes |
| num_sales | Integer | A progressive integer indicating the number of successful transactions involving the NFT up to the current timestamp (cf. *tx_timestamp*) | Original | Not sensitive variable |
| nft_name | Vector ID | The name of the NFT | Anonymized | Original values were encrypted via neural textual embedding |
| nft_description | Vector ID | The description of the NFT as provided by the creator | Anonymized | Original values were encrypted via neural textual embedding |
| nft_image | Vector ID | The ID for accessing the NFT image vector | Anonymized | Original values were encrypted via neural visual embedding |
| collection_name | Vector ID | The ID for accessing the Collection name vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_description | Vector ID | The ID for accessing the Collection description vector | Anonymized | Original values were encrypted via neural textual embedding |
| collection_image | Vector ID | The ID for accessing the Collection image vector | Anonymized | Original values were encrypted via neural visual embedding |
| fees_seller | Float | The absolute amount of fees the seller has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_opensea | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in *token* | Original | Not sensitive variable |
| fees_seller_usd | Float | The absolute amount of fees the seller has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| fees_opensea_usd | Float | The absolute amount of fees OpenSea has gained from this transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| payout_collection_address| String | The wallet address where seller fees are deposited | Anonymized | Original values were replaced by hash-codes |
| tx_timestamp | String | Timestamp of the transaction expressed in yyyy-mm-ddTHH:MM:SS | Original | Not sensitive variable |
| price | Float | The price of the transaction expressed in token | Original | Not sensitive variable |
| gain | Float | The gain after fees (i.e., gain = price - fees_opensea * price - fees_seller * price) | Original | Not sensitive variable |
| usd_price | Float | The price of the transaction expressed in US dollars (USD) | Original | Not sensitive variable |
| usd_gain | Float | The difference between the price and the fees expressed in US dollars (USD) | Original | Not sensitive variable |
| token | Categorical | The token type used to pay the transaction | Original | Not sensitive variable |
| to_eth | Float | The conversion rate to convert tokens into Ethereum at the current timestamp, such that eth = price * to_eth | Original | Not sensitive variable |
| to_usd | Float | The conversion rate to convert tokens into US dollars (USD) at the current timestamp, such that usd = price * to_usd | Original | Not sensitive variable |
| from_account | String | The address that sends the payment (i.e., winner/buyer) | Anonymized | Original values were replaced by hash-codes |
| to_account | String | The address that receives the payment (it often corresponds to the contract linked to the asset) | Anonymized | Original values were replaced by hash-codes |
| seller_account | String | The address of the NFT seller | Anonymized | Original values were replaced by hash-codes |
| winner_account | String | The address of the NFT buyer | Anonymized | Original values were replaced by hash-codes |
| contract_address | String | The contract address on the blockchain | Anonymized | Original values were replaced by hash-codes |
| created_date | Timestamp | The date of creation of the contract | Original | Not sensitive variable |
| chain | Categorical | The blockchain where the transaction occurs | Original | Not sensitive variable |
| token_type | Categorical | The schema of the token, i.e., ERC721 or ERC1155 | Original | Not sensitive variable |
| asset_contract_type | Categorical | The asset typology, i.e., non-fungible or semi-fungible | Original | Not sensitive variable |
| asset_type | Categorical | Whether the asset was involved in a simple or bundle transaction | Original | Not sensitive variable |
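As a small worked example of the conversion fields above (only the relations `eth = price * to_eth` and `usd = price * to_usd` come from the table; the helper function and the numbers are invented for illustration):

```python
# Illustrative only: apply the documented conversion rates to one transaction row.
def convert_prices(row):
    price = row["price"]                        # expressed in `token`
    return {
        "price_eth": price * row["to_eth"],     # eth = price * to_eth
        "price_usd": price * row["to_usd"],     # usd = price * to_usd
    }

example_row = {"price": 1.5, "to_eth": 1.0, "to_usd": 1600.0}  # made-up values
print(convert_prices(example_row))
```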
## How to use
Data provided within this repository can be straightforwardly loaded via the *datasets* library as follows:
```python
from datasets import load_dataset
dataset = load_dataset("MLNTeam-Unical/NFT-70M_transactions")
```
Complementary data involving textual and visual embeddings can be integrated as follows:
```python
from datasets import load_dataset
import numpy as np
transactions_dataset=load_dataset("MLNTeam-Unical/NFT-70M_transactions")
image_dataset=load_dataset("MLNTeam-Unical/NFT-70M_image")
text_dataset=load_dataset("MLNTeam-Unical/NFT-70M_text")
# Mapping from image_id to the row_index within the image dataset
image_id2row_index={int(id):k for k,id in enumerate(image_dataset["train"]["id"])}
# Mapping from text_id to row_index within the text dataset
text_id2row_index={int(id):k for k,id in enumerate(text_dataset["train"]["id"])}
def get_image_embedding(image_id, image_id2row_index, image_dataset):
    # If the mapping contains the image, the embedding exists
    # (compare against None explicitly so that row index 0 is not treated as missing)
    idx_emb = image_id2row_index.get(int(image_id), None)
    if idx_emb is not None:
        # If the embedding exists, return it
        return np.array(image_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None

def get_text_embedding(text_id, text_id2row_index, text_dataset):
    # If the mapping contains the text, the embedding exists
    # (compare against None explicitly so that row index 0 is not treated as missing)
    idx_emb = text_id2row_index.get(int(text_id), None)
    if idx_emb is not None:
        # If the embedding exists, return it
        return np.array(text_dataset["train"].select([idx_emb])["emb"][0])
    else:
        return None
### USAGE EXAMPLE ###
# Select transaction_id
transaction_id=120
# Get the image_id (e.g., collection_image or nft_image)
id_image=transactions_dataset["train"].select([transaction_id])["collection_image"][0]
# Get the image
image_embedding=get_image_embedding(id_image,image_id2row_index,image_dataset)
# Get the text_id
id_text=transactions_dataset["train"].select([transaction_id])["collection_description"][0]
# Get the text
text_embedding=get_text_embedding(id_text,text_id2row_index,text_dataset)
```
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.*
|
MLNTeam-Unical/NFT-70M_transactions
|
[
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:text-retrieval",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-nc-4.0",
"Non-fungible Tokens",
"Crypto",
"Web3",
"Art",
"Multimodal Learning",
"doi:10.57967/hf/1179",
"region:us"
] |
2023-09-26T14:48:21+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["time-series-forecasting", "text-classification", "feature-extraction", "text-generation", "zero-shot-classification", "text2text-generation", "sentence-similarity", "image-classification", "image-to-text", "text-to-image", "text-retrieval"], "pretty_name": "NFT-70M_transactions", "dataset_info": {"features": [{"name": "num_sales", "dtype": "int64"}, {"name": "fees_seller", "dtype": "float64"}, {"name": "fees_opensea", "dtype": "float64"}, {"name": "fees_seller_usd", "dtype": "float64"}, {"name": "fees_opensea_usd", "dtype": "float64"}, {"name": "tx_timestamp", "dtype": "string"}, {"name": "price", "dtype": "float64"}, {"name": "gain", "dtype": "float64"}, {"name": "usd_price", "dtype": "float64"}, {"name": "usd_gain", "dtype": "float64"}, {"name": "token", "dtype": "string"}, {"name": "to_eth", "dtype": "float64"}, {"name": "to_usd", "dtype": "float64"}, {"name": "created_date", "dtype": "string"}, {"name": "chain", "dtype": "string"}, {"name": "token_type", "dtype": "string"}, {"name": "asset_contract_type", "dtype": "string"}, {"name": "asset_type", "dtype": "string"}, {"name": "payout_collection_address", "dtype": "int64"}, {"name": "from_account", "dtype": "int64"}, {"name": "to_account", "dtype": "int64"}, {"name": "seller_account", "dtype": "int64"}, {"name": "winner_account", "dtype": "int64"}, {"name": "contract_address", "dtype": "int64"}, {"name": "nft_image", "dtype": "int64"}, {"name": "collection_image", "dtype": "int64"}, {"name": "token_id", "dtype": "int64"}, {"name": "nft_name", "dtype": "int64"}, {"name": "nft_description", "dtype": "int64"}, {"name": "collection_name", "dtype": "int64"}, {"name": "collection_description", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 21291348001, "num_examples": 70972143}], "download_size": 6633664673, "dataset_size": 21291348001}, "tags": ["Non-fungible Tokens", "Crypto", "Web3", "Art", "Multimodal Learning"]}
|
2023-10-03T06:15:49+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #doi-10.57967/hf/1179 #region-us
|
Dataset Card for "NFT-70M\_transactions"
========================================
Dataset summary
---------------
The *NFT-70M\_transactions* dataset is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from OpenSea, the leading trading platform in the Web3 ecosystem.
With more than 70M transactions enriched with metadata, this dataset is conceived to support a wide range of tasks, ranging from sequential and transactional data processing/analysis to graph-based modeling of the complex relationships between traders.
Besides, the availability of textual and image contents further amplifies the modeling capabilities and usage opportunities of this dataset, making it a unique and comprehensive multimodal source of information for delving into the NFT landscape.
This dataset can serve as a benchmark for various innovative and impactful tasks within the crypto landscape, such as projecting NFT prices or detecting fraudulent and wash trading activities.
Furthermore, the multimodal nature of the dataset fosters the development of classification models, as well as textual and visual generative models.
Data anonymization
------------------
We point out that the collected NFT transactions and metadata from OpenSea are publicly distributed on blockchain.
For our purposes of re-distribution, we are also committed to ensuring non-disclosure of information that might lead to identifying the NFT creators, in order to comply with privacy-preserving requirements and to avoid violating data protection regulations and property rights.
In this respect, we carried out three actions:
* Values of all variables describing non-sensitive information were kept in their original form;
* Values of all variables describing sensitive information were anonymized, in a one-way, non-revertible mode;
* URLs of image data and textual contents (i.e., NFT images and their descriptions) were replaced by identifiers to numerical vectors that represent an encrypted representation (i.e., embeddings) of the image/text contents obtained via neural network models. Such embeddings are eventually provided in place of their original image and text data,
and can be found in the NFT-70M\_image and NFT-70M\_text supplementary datasets, respectively.
Data Fields
-----------
How to use
----------
Data provided within this repository can be straightforwardly loaded via the *datasets* library as follows:
Complementary data involving textual and visual embeddings can be integrated as follows:
Ethical use of data and informed consent
----------------------------------------
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
Terms of Usage
--------------
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
* L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: URL
* L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: URL
* D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: URL
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the OpenSea API.
Liability statement
-------------------
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
* Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes;
* Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
* Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.*
|
[] |
[
"TAGS\n#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #doi-10.57967/hf/1179 #region-us \n"
] |
[
200
] |
[
"passage: TAGS\n#task_categories-time-series-forecasting #task_categories-text-classification #task_categories-feature-extraction #task_categories-text-generation #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-sentence-similarity #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-text-retrieval #size_categories-10M<n<100M #language-English #license-cc-by-nc-4.0 #Non-fungible Tokens #Crypto #Web3 #Art #Multimodal Learning #doi-10.57967/hf/1179 #region-us \n"
] |
f177f0106001c200ee671a96dc6388b295ea19c6
|
# Dataset of kitazawa_hagumi (BanG Dream!)
This is the dataset of kitazawa_hagumi (BanG Dream!), containing 93 images and their tags.
The core tags of this character are `short_hair, orange_hair, bangs, orange_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 93 | 112.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitazawa_hagumi_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 93 | 78.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitazawa_hagumi_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 224 | 149.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitazawa_hagumi_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 93 | 102.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitazawa_hagumi_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 224 | 191.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitazawa_hagumi_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kitazawa_hagumi_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, school_uniform, solo, antenna_hair, blush, happy_birthday, smile, dated, long_sleeves, white_sailor_collar, brown_dress, character_name, open_mouth, red_ribbon, upper_body, white_background |
| 1 | 8 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, :d, collarbone, open_mouth, shirt, white_background, antenna_hair, brown_eyes, simple_background, upper_body, breasts, jacket |
| 2 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, earrings, short_sleeves, holding, skirt, solo, frills, blush, hair_bow, hat, knee_boots, lace-up_boots, star_(symbol), thighhighs, white_gloves, blue_bow, confetti, electric_guitar, grin, open_mouth, shirt, standing, striped_bow, white_background |
| 3 | 7 |  |  |  |  |  | looking_at_viewer, smile, solo, animal_ears, shorts, 1girl, maple_leaf, tail, autumn_leaves, blush, hair_flower, japanese_clothes, open_mouth, bow, detached_sleeves, earrings, frills, long_sleeves, one_eye_closed, tassel |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | school_uniform | solo | antenna_hair | blush | happy_birthday | smile | dated | long_sleeves | white_sailor_collar | brown_dress | character_name | open_mouth | red_ribbon | upper_body | white_background | :d | collarbone | shirt | brown_eyes | simple_background | breasts | jacket | earrings | short_sleeves | holding | skirt | frills | hair_bow | hat | knee_boots | lace-up_boots | star_(symbol) | thighhighs | white_gloves | blue_bow | confetti | electric_guitar | grin | standing | striped_bow | animal_ears | shorts | maple_leaf | tail | autumn_leaves | hair_flower | japanese_clothes | bow | detached_sleeves | one_eye_closed | tassel |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-----------------|:-------|:---------------|:--------|:-----------------|:--------|:--------|:---------------|:----------------------|:--------------|:-----------------|:-------------|:-------------|:-------------|:-------------------|:-----|:-------------|:--------|:-------------|:--------------------|:----------|:---------|:-----------|:----------------|:----------|:--------|:---------|:-----------|:------|:-------------|:----------------|:----------------|:-------------|:---------------|:-----------|:-----------|:------------------|:-------|:-----------|:--------------|:--------------|:---------|:-------------|:-------|:----------------|:--------------|:-------------------|:------|:-------------------|:-----------------|:---------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | | X | | X | | | | | | | | X | | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | X | | X | | X | | X | | | | X | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kitazawa_hagumi_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T15:03:44+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:57:39+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kitazawa\_hagumi (BanG Dream!)
=========================================
This is the dataset of kitazawa\_hagumi (BanG Dream!), containing 93 images and their tags.
The core tags of this character are 'short\_hair, orange\_hair, bangs, orange\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
31ae39321bb39fbebcb9c95ebaae1e58974484b1
|
# Dataset Card for "mlrs-pos-mt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
amitness/mlrs-pos-mt
|
[
"region:us"
] |
2023-09-26T15:05:32+00:00
|
{"dataset_info": {"features": [{"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "ADJ", "1": "ADV", "2": "COMP", "3": "CONJ_CORD", "4": "CONJ_SUB", "5": "DEF", "6": "FOC", "7": "FUT", "8": "GEN", "9": "GEN_DEF", "10": "GEN_PRON", "11": "HEMM", "12": "INT", "13": "KIEN", "14": "LIL", "15": "LIL_DEF", "16": "LIL_PRON", "17": "NEG", "18": "NOUN", "19": "NOUN_PROP", "20": "NUM_CRD", "21": "NUM_FRC", "22": "NUM_ORD", "23": "NUM_WHD", "24": "PART_ACT", "25": "PART_PASS", "26": "PREP", "27": "PREP_DEF", "28": "PREP_PRON", "29": "PROG", "30": "PRON_DEM", "31": "PRON_DEM_DEF", "32": "PRON_INDEF", "33": "PRON_INT", "34": "PRON_PERS", "35": "PRON_PERS_NEG", "36": "PRON_REC", "37": "PRON_REF", "38": "QUAN", "39": "VERB", "40": "VERB_PSEU", "41": "X_ABV", "42": "X_BOR", "43": "X_DIG", "44": "X_ENG", "45": "X_FOR", "46": "X_PUN"}}}}, {"name": "tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1443609, "num_examples": 4935}, {"name": "validation", "num_bytes": 234214, "num_examples": 616}, {"name": "test", "num_bytes": 212745, "num_examples": 616}], "download_size": 0, "dataset_size": 1890568}}
|
2023-09-26T15:27:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mlrs-pos-mt"
More Information needed
|
[
"# Dataset Card for \"mlrs-pos-mt\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mlrs-pos-mt\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mlrs-pos-mt\"\n\nMore Information needed"
] |
94308a70f4d72517b8b160c4d9f32409c65a1212
|
# Dataset of tamade_chiyu/珠手ちゆ (BanG Dream!)
This is the dataset of tamade_chiyu/珠手ちゆ (BanG Dream!), containing 107 images and their tags.
The core tags of this character are `long_hair, blue_eyes, bangs, red_hair, ahoge, animal_ears, headphones, fake_animal_ears, animal_ear_headphones, cat_ear_headphones, hair_between_eyes, very_long_hair, v-shaped_eyebrows`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 107 | 140.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 107 | 80.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 250 | 171.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 107 | 125.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 250 | 247.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tamade_chiyu_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tamade_chiyu_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 49 |  |  |  |  |  | white_shirt, 1girl, long_sleeves, solo, looking_at_viewer, red_necktie, collared_shirt, striped_necktie, blazer, school_uniform, blue_skirt, plaid_skirt, blush, pleated_skirt, black_jacket, open_mouth, open_jacket, blue_jacket, smile, white_background, simple_background |
| 1 | 8 |  |  |  |  |  | 1girl, long_sleeves, solo, looking_at_viewer, black_choker, black_gloves, black_shorts, collarbone, fingerless_gloves, short_shorts, black_jacket, blush, grin, open_clothes, sidelocks, teeth, white_shirt, belt, holding, microphone, open_mouth, pink_hair, pointing, standing, upper_body, white_jacket |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | white_shirt | 1girl | long_sleeves | solo | looking_at_viewer | red_necktie | collared_shirt | striped_necktie | blazer | school_uniform | blue_skirt | plaid_skirt | blush | pleated_skirt | black_jacket | open_mouth | open_jacket | blue_jacket | smile | white_background | simple_background | black_choker | black_gloves | black_shorts | collarbone | fingerless_gloves | short_shorts | grin | open_clothes | sidelocks | teeth | belt | holding | microphone | pink_hair | pointing | standing | upper_body | white_jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:--------|:---------------|:-------|:--------------------|:--------------|:-----------------|:------------------|:---------|:-----------------|:-------------|:--------------|:--------|:----------------|:---------------|:-------------|:--------------|:--------------|:--------|:-------------------|:--------------------|:---------------|:---------------|:---------------|:-------------|:--------------------|:---------------|:-------|:---------------|:------------|:--------|:-------|:----------|:-------------|:------------|:-----------|:-----------|:-------------|:---------------|
| 0 | 49 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | | | | | | | | X | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/tamade_chiyu_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T15:29:30+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:05:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of tamade\_chiyu/珠手ちゆ (BanG Dream!)
===========================================
This is the dataset of tamade\_chiyu/珠手ちゆ (BanG Dream!), containing 107 images and their tags.
The core tags of this character are 'long\_hair, blue\_eyes, bangs, red\_hair, ahoge, animal\_ears, headphones, fake\_animal\_ears, animal\_ear\_headphones, cat\_ear\_headphones, hair\_between\_eyes, very\_long\_hair, v-shaped\_eyebrows', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
02b4572c0d6e996a6845c36b3f094e946af2a316
|
# Dataset Card for "b6112e1b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/b6112e1b
|
[
"region:us"
] |
2023-09-26T15:29:47+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 166, "num_examples": 10}], "download_size": 1318, "dataset_size": 166}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T15:29:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "b6112e1b"
More Information needed
|
[
"# Dataset Card for \"b6112e1b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"b6112e1b\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"b6112e1b\"\n\nMore Information needed"
] |
6c22d5df4e21b35c90de1fad09dab283520a0cca
|
# Dataset Card for "7aa2df49"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/7aa2df49
|
[
"region:us"
] |
2023-09-26T15:34:14+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 172, "num_examples": 10}], "download_size": 1339, "dataset_size": 172}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T15:34:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "7aa2df49"
More Information needed
|
[
"# Dataset Card for \"7aa2df49\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"7aa2df49\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"7aa2df49\"\n\nMore Information needed"
] |
544a0a34ef7ad63caebe2e2eb918c297265e903e
|
# Dataset of udagawa_tomoe/宇田川巴 (BanG Dream!)
This is the dataset of udagawa_tomoe/宇田川巴 (BanG Dream!), containing 162 images and their tags.
The core tags of this character are `red_hair, long_hair, bangs, blue_eyes, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 162 | 120.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_tomoe_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 162 | 96.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_tomoe_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 263 | 158.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_tomoe_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 162 | 112.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_tomoe_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 263 | 186.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/udagawa_tomoe_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/udagawa_tomoe_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, solo, midriff, drumsticks, navel, crop_top, holding, looking_at_viewer, drum_set, choker, bracelet, jacket, grin, studded_belt, vest, blush, hair_ornament, pants, skirt |
| 1 | 7 |  |  |  |  |  | black_gloves, looking_at_viewer, 1girl, fingerless_gloves, long_sleeves, belt, crop_top, midriff, shirt, solo, :d, black_choker, black_shorts, blush, drumsticks, holding, open_jacket, open_mouth, torn_clothes, black_footwear, boots, fishnets, green_jacket, navel, necklace, pants, ponytail, short_shorts |
| 2 | 5 |  |  |  |  |  | grin, looking_at_viewer, white_shirt, 1girl, black_jacket, jewelry, long_sleeves, solo, upper_body, white_background, white_gloves, 2girls, black_necktie, green_eyes, hair_between_eyes, ponytail, sidelocks, simple_background, v-shaped_eyebrows |
| 3 | 11 |  |  |  |  |  | 1girl, school_uniform, solo, white_shirt, blush, collared_shirt, jacket, grin, :d, ^_^, open_mouth, striped_necktie, upper_body, fang, long_sleeves, plaid_skirt, pleated_skirt, white_outline |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | midriff | drumsticks | navel | crop_top | holding | looking_at_viewer | drum_set | choker | bracelet | jacket | grin | studded_belt | vest | blush | hair_ornament | pants | skirt | black_gloves | fingerless_gloves | long_sleeves | belt | shirt | :d | black_choker | black_shorts | open_jacket | open_mouth | torn_clothes | black_footwear | boots | fishnets | green_jacket | necklace | ponytail | short_shorts | white_shirt | black_jacket | jewelry | upper_body | white_background | white_gloves | 2girls | black_necktie | green_eyes | hair_between_eyes | sidelocks | simple_background | v-shaped_eyebrows | school_uniform | collared_shirt | ^_^ | striped_necktie | fang | plaid_skirt | pleated_skirt | white_outline |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:-------------|:--------|:-----------|:----------|:--------------------|:-----------|:---------|:-----------|:---------|:-------|:---------------|:-------|:--------|:----------------|:--------|:--------|:---------------|:--------------------|:---------------|:-------|:--------|:-----|:---------------|:---------------|:--------------|:-------------|:---------------|:-----------------|:--------|:-----------|:---------------|:-----------|:-----------|:---------------|:--------------|:---------------|:----------|:-------------|:-------------------|:---------------|:---------|:----------------|:-------------|:--------------------|:------------|:--------------------|:--------------------|:-----------------|:-----------------|:------|:------------------|:-------|:--------------|:----------------|:----------------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | | | | X | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | | | | | | | | | | X | X | | | X | | | | | | X | | | X | | | | X | | | | | | | | | X | | | X | | | | | | | | | | X | X | X | X | X | X | X | X |
|
CyberHarem/udagawa_tomoe_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T15:35:48+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:34:07+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of udagawa\_tomoe/宇田川巴 (BanG Dream!)
============================================
This is the dataset of udagawa\_tomoe/宇田川巴 (BanG Dream!), containing 162 images and their tags.
The core tags of this character are 'red\_hair, long\_hair, bangs, blue\_eyes, earrings', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
daf1c299f91bf06eda8ce59731cb402961b36fbb
|
# Dataset Card for "xxt_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yuanmei424/xxt_en
|
[
"region:us"
] |
2023-09-26T15:47:07+00:00
|
{"dataset_info": {"features": [{"name": "edit_prompt", "dtype": "string"}, {"name": "input_image", "dtype": "image"}, {"name": "edited_image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 5329195147.25, "num_examples": 2283951}], "download_size": 526250170, "dataset_size": 5329195147.25}}
|
2023-09-26T18:00:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "xxt_en"
More Information needed
|
[
"# Dataset Card for \"xxt_en\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"xxt_en\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"xxt_en\"\n\nMore Information needed"
] |
c858cee986bebfecbfabc10996744fd4e906c9fd
|
# Dataset Card for "1b874213"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/1b874213
|
[
"region:us"
] |
2023-09-26T15:50:26+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 161, "num_examples": 10}], "download_size": 1306, "dataset_size": 161}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-26T15:50:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "1b874213"
More Information needed
|
[
"# Dataset Card for \"1b874213\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"1b874213\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"1b874213\"\n\nMore Information needed"
] |
3befd66c818162734266453dc1c85101fbc5beb3
|
# Dataset of kirigaya_touko/桐ヶ谷透子 (BanG Dream!)
This is the dataset of kirigaya_touko/桐ヶ谷透子 (BanG Dream!), containing 82 images and their tags.
The core tags of this character are `blonde_hair, long_hair, bangs, brown_eyes, breasts, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 82 | 116.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirigaya_touko_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 82 | 68.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirigaya_touko_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 202 | 143.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirigaya_touko_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 82 | 103.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirigaya_touko_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 202 | 198.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirigaya_touko_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kirigaya_touko_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blush, open_mouth, sweat, 1boy, hetero, solo_focus, navel, nipples, penis, large_breasts, mosaic_censoring, pussy, smile, bare_shoulders, bikini, clothed_female_nude_male, collarbone, medium_breasts, tongue_out, cum, heart, looking_at_viewer, saliva, sex, spread_legs, stomach, vaginal |
| 1 | 13 |  |  |  |  |  | 1girl, smile, solo, looking_at_viewer, blush, simple_background, collarbone, long_sleeves, necklace, white_background, black_shirt, open_mouth, blue_jacket, hood, one_eye_closed, ;d, open_jacket, short_shorts, standing, upper_body |
| 2 | 16 |  |  |  |  |  | looking_at_viewer, smile, hat, 1girl, black_gloves, electric_guitar, blush, jewelry, solo, holding, short_sleeves, skirt, neck_ribbon, standing, white_headwear, black_ribbon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | sweat | 1boy | hetero | solo_focus | navel | nipples | penis | large_breasts | mosaic_censoring | pussy | smile | bare_shoulders | bikini | clothed_female_nude_male | collarbone | medium_breasts | tongue_out | cum | heart | looking_at_viewer | saliva | sex | spread_legs | stomach | vaginal | solo | simple_background | long_sleeves | necklace | white_background | black_shirt | blue_jacket | hood | one_eye_closed | ;d | open_jacket | short_shorts | standing | upper_body | hat | black_gloves | electric_guitar | jewelry | holding | short_sleeves | skirt | neck_ribbon | white_headwear | black_ribbon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:--------|:-------|:---------|:-------------|:--------|:----------|:--------|:----------------|:-------------------|:--------|:--------|:-----------------|:---------|:---------------------------|:-------------|:-----------------|:-------------|:------|:--------|:--------------------|:---------|:------|:--------------|:----------|:----------|:-------|:--------------------|:---------------|:-----------|:-------------------|:--------------|:--------------|:-------|:-----------------|:-----|:--------------|:---------------|:-----------|:-------------|:------|:---------------|:------------------|:----------|:----------|:----------------|:--------|:--------------|:-----------------|:---------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | | | | | | | | | | | X | | | | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | | | | | | | | | | | | X | | | | | | | | | X | | | | | | X | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kirigaya_touko_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T16:07:10+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:16:59+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of kirigaya\_touko/桐ヶ谷透子 (BanG Dream!)
==============================================
This is the dataset of kirigaya\_touko/桐ヶ谷透子 (BanG Dream!), containing 82 images and their tags.
The core tags of this character are 'blonde\_hair, long\_hair, bangs, brown\_eyes, breasts, earrings', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
7e19fc224b9fca8d95aa4301c58eb220a70ef079
|
# Dataset of yashio_rui/八潮瑠唯 (BanG Dream!)
This is the dataset of yashio_rui/八潮瑠唯 (BanG Dream!), containing 140 images and their tags.
The core tags of this character are `short_hair, bangs, black_hair, hair_between_eyes, breasts, purple_eyes, large_breasts, earrings, pink_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 140 | 183.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yashio_rui_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 140 | 104.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yashio_rui_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 340 | 220.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yashio_rui_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 140 | 162.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yashio_rui_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 340 | 312.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yashio_rui_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yashio_rui_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, upper_body, blush, jewelry, shirt, solo, white_background, long_sleeves, simple_background, turtleneck, closed_mouth, green_sweater, ribbed_sweater |
| 1 | 28 |  |  |  |  |  | 1girl, solo, looking_at_viewer, playing_instrument, standing, violin, jewelry, frills, parted_lips, buttons, short_sleeves, dress, holding_instrument, ribbon, skirt, half_gloves, smile, black_gloves, blue_background, mini_hat, shirt, asymmetrical_hair, bow_(music), cowboy_shot, hair_ornament, white_headwear |
| 2 | 10 |  |  |  |  |  | long_sleeves, 1girl, brown_skirt, looking_at_viewer, solo, plaid_skirt, black_shirt, necklace, parted_lips, standing, white_background, simple_background, asymmetrical_hair, collarbone, medium_breasts, pendant, black_belt, black_sweater, cowboy_shot, frilled_sleeves, green_belt, hand_up, ribbed_shirt, vertical-striped_shirt |
| 3 | 9 |  |  |  |  |  | long_sleeves, serafuku, 1girl, looking_at_viewer, pleated_skirt, blue_shirt, solo, blue_skirt, standing, brown_hair, closed_mouth, cowboy_shot, white_neckerchief, white_sailor_collar |
| 4 | 8 |  |  |  |  |  | 1girl, solo, cleavage, collarbone, looking_at_viewer, bare_shoulders, blush, bracelet, navel, smile, outdoors, black_choker, closed_mouth, cowboy_shot, day, frills, water, arm_up, black_bikini, brown_eyes, off-shoulder_bikini, sky, standing, stomach, wet |
| 5 | 5 |  |  |  |  |  | blush, collarbone, girl_on_top, navel, solo_focus, sweat, 1girl, hetero, looking_at_viewer, mosaic_censoring, nipples, penis, pov, pussy, sex, stomach, vaginal, 1boy, indoors, motion_lines, spread_legs, 2girls, asymmetrical_hair, bikini, blurry, cleavage, completely_nude, dutch_angle, heavy_breathing, leash, on_back, open_clothes, open_mouth, parted_lips, raised_eyebrows, squatting_cowgirl_position |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | upper_body | blush | jewelry | shirt | solo | white_background | long_sleeves | simple_background | turtleneck | closed_mouth | green_sweater | ribbed_sweater | looking_at_viewer | playing_instrument | standing | violin | frills | parted_lips | buttons | short_sleeves | dress | holding_instrument | ribbon | skirt | half_gloves | smile | black_gloves | blue_background | mini_hat | asymmetrical_hair | bow_(music) | cowboy_shot | hair_ornament | white_headwear | brown_skirt | plaid_skirt | black_shirt | necklace | collarbone | medium_breasts | pendant | black_belt | black_sweater | frilled_sleeves | green_belt | hand_up | ribbed_shirt | vertical-striped_shirt | serafuku | pleated_skirt | blue_shirt | blue_skirt | brown_hair | white_neckerchief | white_sailor_collar | cleavage | bare_shoulders | bracelet | navel | outdoors | black_choker | day | water | arm_up | black_bikini | brown_eyes | off-shoulder_bikini | sky | stomach | wet | girl_on_top | solo_focus | sweat | hetero | mosaic_censoring | nipples | penis | pov | pussy | sex | vaginal | 1boy | indoors | motion_lines | spread_legs | 2girls | bikini | blurry | completely_nude | dutch_angle | heavy_breathing | leash | on_back | open_clothes | open_mouth | raised_eyebrows | squatting_cowgirl_position |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------|:----------|:--------|:-------|:-------------------|:---------------|:--------------------|:-------------|:---------------|:----------------|:-----------------|:--------------------|:---------------------|:-----------|:---------|:---------|:--------------|:----------|:----------------|:--------|:---------------------|:---------|:--------|:--------------|:--------|:---------------|:------------------|:-----------|:--------------------|:--------------|:--------------|:----------------|:-----------------|:--------------|:--------------|:--------------|:-----------|:-------------|:-----------------|:----------|:-------------|:----------------|:------------------|:-------------|:----------|:---------------|:-------------------------|:-----------|:----------------|:-------------|:-------------|:-------------|:--------------------|:----------------------|:-----------|:-----------------|:-----------|:--------|:-----------|:---------------|:------|:--------|:---------|:---------------|:-------------|:----------------------|:------|:----------|:------|:--------------|:-------------|:--------|:---------|:-------------------|:----------|:--------|:------|:--------|:------|:----------|:-------|:----------|:---------------|:--------------|:---------|:---------|:---------|:------------------|:--------------|:------------------|:--------|:----------|:---------------|:-------------|:------------------|:-----------------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 28 |  |  |  |  |  | X | | | X | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | | | | X | X | X | X | | | | | X | | X | | | X | | | | | | | | | | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | | | | X | | X | | | X | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | | | X | | | | | X | | | X | | X | | X | | | | | | | | | X | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/yashio_rui_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T16:10:00+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:06:17+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of yashio\_rui/八潮瑠唯 (BanG Dream!)
=========================================
This is the dataset of yashio\_rui/八潮瑠唯 (BanG Dream!), containing 140 images and their tags.
The core tags of this character are 'short\_hair, bangs, black\_hair, hair\_between\_eyes, breasts, purple\_eyes, large\_breasts, earrings, pink\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
6c933760e5e6dfba752a3ba8f753c76148ae6222
|
# Dataset of nyubara_reona/鳰原令王那 (BanG Dream!)
This is the dataset of nyubara_reona/鳰原令王那 (BanG Dream!), containing 52 images and their tags.
The core tags of this character are `multicolored_hair, bangs, long_hair, twintails, blunt_bangs, two-tone_hair, pink_hair, hair_ornament, blue_hair, sidelocks, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 52 | 75.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 52 | 40.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 125 | 88.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 52 | 65.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 125 | 131.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nyubara_reona_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
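For workflows that do not go through waifuc, the IMG+TXT packages listed above can also be consumed directly. The following is a minimal sketch, not an official loader: it downloads the `dataset-800.zip` archive named in the table and pairs each image with a same-stem `.txt` tag file. The same-stem pairing and the flat/nested archive layout are assumptions about the IMG+TXT format, not something stated in the card.

```python
import os
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/nyubara_reona_bangdream',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the archive to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its caption file
# (assumption: each image has a .txt file sharing the same stem)
image_exts = {'.png', '.jpg', '.jpeg', '.webp'}
for image_path in sorted(Path(dataset_dir).rglob('*')):
    if image_path.suffix.lower() not in image_exts:
        continue
    caption_path = image_path.with_suffix('.txt')
    if caption_path.exists():
        tags = caption_path.read_text(encoding='utf-8').strip()
        print(image_path.name, '->', tags)
```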
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nyubara_reona_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
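As a small, optional follow-up to the snippet above, the same `LocalSource` can be used to summarise which tags occur most often in the extracted dataset. This sketch assumes `item.meta['tags']` is either a list of tag names or a dict keyed by tag name (the print statement above suggests one of these, but the exact structure is not documented in the card).

```python
from collections import Counter

from waifuc.source import LocalSource

# reuse the directory extracted by the snippet above
source = LocalSource('dataset_dir')

# count how often each tag appears across all items
tag_counter = Counter()
for item in source:
    tags = item.meta['tags']
    # handle both a dict of tag->score and a plain list of tag names
    tag_counter.update(tags.keys() if isinstance(tags, dict) else tags)

# show the 20 most common tags
for tag, count in tag_counter.most_common(20):
    print(f'{tag}: {count}')
```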
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | long_sleeves, 1girl, solo, looking_at_viewer, hair_bobbles, open_mouth, pink_skirt, thighhighs, :d, blush, frilled_skirt, simple_background, very_long_hair, white_background, blue_shirt, collarbone, white_jacket, bracelet, full_body, open_jacket, purple_shirt, shoes, star_(symbol), upper_teeth_only |
| 1 | 9 |  |  |  |  |  | hair_bobbles, 1girl, blush, looking_at_viewer, solo, shirt, long_sleeves, open_mouth, upper_body, heart, holding, white_background, :d, bracelet, collarbone, pink_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | long_sleeves | 1girl | solo | looking_at_viewer | hair_bobbles | open_mouth | pink_skirt | thighhighs | :d | blush | frilled_skirt | simple_background | very_long_hair | white_background | blue_shirt | collarbone | white_jacket | bracelet | full_body | open_jacket | purple_shirt | shoes | star_(symbol) | upper_teeth_only | shirt | upper_body | heart | holding | pink_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------|:--------|:-------|:--------------------|:---------------|:-------------|:-------------|:-------------|:-----|:--------|:----------------|:--------------------|:-----------------|:-------------------|:-------------|:-------------|:---------------|:-----------|:------------|:--------------|:---------------|:--------|:----------------|:-------------------|:--------|:-------------|:--------|:----------|:------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | X | | | X | X | | | | X | | X | | X | | | | | | | X | X | X | X | X |
|
CyberHarem/nyubara_reona_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T16:32:18+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:30:08+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of nyubara\_reona/鳰原令王那 (BanG Dream!)
=============================================
This is the dataset of nyubara\_reona/鳰原令王那 (BanG Dream!), containing 52 images and their tags.
The core tags of this character are 'multicolored\_hair, bangs, long\_hair, twintails, blunt\_bangs, two-tone\_hair, pink\_hair, hair\_ornament, blue\_hair, sidelocks, red\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |
0f931df98e45c7790a1565e8719824602db99f8f
|
https://youtu.be/gn0Z_glYJ90?list=PLXA0IWa3BpHnrfGY39YxPYFvssnwD8awg&t=989
|
lunarflu/generative-AI-meets-responsible-AI-practical-challenges-and-opportunities
|
[
"region:us"
] |
2023-09-26T16:49:41+00:00
|
{}
|
2023-09-26T16:51:39+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
2f7482dacc8f102de6329851ee7f1c2304092dae
|
# Dataset Card for "top10_primary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
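Pending a fuller card, the repository metadata declares a `TEXT` string field, an `ICD9_CODE` string-sequence field, and train/test/validation splits. The following is a minimal loading sketch based on that metadata; the field names and splits come from the metadata, while everything else is illustrative.

```python
from datasets import load_dataset

# load the dataset directly from the Hub
ds = load_dataset("ricardosantoss/top10_primary")

# splits declared in the repository metadata: train / test / validation
print(ds)

# inspect one record: free text plus its associated ICD-9 codes
example = ds["train"][0]
print(example["TEXT"][:200])
print(example["ICD9_CODE"])
```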
|
ricardosantoss/top10_primary
|
[
"region:us"
] |
2023-09-26T16:49:42+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "TEXT", "dtype": "string"}, {"name": "ICD9_CODE", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 121066961, "num_examples": 12699}, {"name": "test", "num_bytes": 23919656, "num_examples": 2500}, {"name": "validation", "num_bytes": 24070118, "num_examples": 2500}], "download_size": 95077634, "dataset_size": 169056735}}
|
2023-09-26T16:50:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "top10_primary"
More Information needed
|
[
"# Dataset Card for \"top10_primary\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"top10_primary\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"top10_primary\"\n\nMore Information needed"
] |
9d604e8921b1b242507385f84e10fc45d95e9c87
|
# Dataset of hiromachi_nanami (BanG Dream!)
This is the dataset of hiromachi_nanami (BanG Dream!), containing 104 images and their tags.
The core tags of this character are `bangs, pink_eyes, long_hair, pink_hair, hair_ornament, two_side_up, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 104 | 154.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiromachi_nanami_bangdream/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 104 | 87.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiromachi_nanami_bangdream/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 254 | 186.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiromachi_nanami_bangdream/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 104 | 134.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiromachi_nanami_bangdream/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 254 | 265.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiromachi_nanami_bangdream/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hiromachi_nanami_bangdream',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 30 |  |  |  |  |  | 1girl, looking_at_viewer, solo, hat, earrings, blush, open_mouth, :d, long_sleeves, skirt, dress, neck_ribbon, blonde_hair, electric_guitar, black_gloves, bow, bass_guitar, bug, frilled_sleeves, holding_instrument, white_headwear |
| 1 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, long_sleeves, belt, necklace, pink_dress, open_mouth, :d, hair_ribbon, pinafore_dress, standing, upper_teeth_only, white_shirt |
| 2 | 12 |  |  |  |  |  | 1girl, solo, blush, long_sleeves, serafuku, looking_at_viewer, white_background, cat_hair_ornament, open_mouth, pleated_skirt, :d, black_shirt, simple_background, :3, blue_shirt, blue_skirt, neckerchief, white_sailor_collar, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | hat | earrings | blush | open_mouth | :d | long_sleeves | skirt | dress | neck_ribbon | blonde_hair | electric_guitar | black_gloves | bow | bass_guitar | bug | frilled_sleeves | holding_instrument | white_headwear | belt | necklace | pink_dress | hair_ribbon | pinafore_dress | standing | upper_teeth_only | white_shirt | serafuku | white_background | cat_hair_ornament | pleated_skirt | black_shirt | simple_background | :3 | blue_shirt | blue_skirt | neckerchief | white_sailor_collar |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:------|:-----------|:--------|:-------------|:-----|:---------------|:--------|:--------|:--------------|:--------------|:------------------|:---------------|:------|:--------------|:------|:------------------|:---------------------|:-----------------|:-------|:-----------|:-------------|:--------------|:-----------------|:-----------|:-------------------|:--------------|:-----------|:-------------------|:--------------------|:----------------|:--------------|:--------------------|:-----|:-------------|:-------------|:--------------|:----------------------|
| 0 | 30 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | | | X | X | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/hiromachi_nanami_bangdream
|
[
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] |
2023-09-26T16:52:12+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
|
2024-01-15T18:59:51+00:00
|
[] |
[] |
TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
|
Dataset of hiromachi\_nanami (BanG Dream!)
==========================================
This is the dataset of hiromachi\_nanami (BanG Dream!), containing 104 images and their tags.
The core tags of this character are 'bangs, pink\_eyes, long\_hair, pink\_hair, hair\_ornament, two\_side\_up, ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
|
[
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
[
44,
61,
5,
4
] |
[
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.### Raw Text Version### Table Version"
] |