Column schema (name: type, observed range):
- url: string, length 61
- repository_url: string, 1 distinct value
- labels_url: string, length 75
- comments_url: string, length 70
- events_url: string, length 68
- html_url: string, length 49–51
- id: int64, 818M–2.44B
- node_id: string, length 18–32
- number: int64, 1.96k–7.08k
- title: string, length 1–290
- user: dict
- labels: list, length 0–4
- state: string, 2 distinct values
- locked: bool, 1 class
- assignee: dict
- assignees: list, length 0–4
- comments: list, length 2
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string, 4 distinct values
- draft: bool, 2 classes
- pull_request: dict
- body: string, length 0–36.2k, nullable
- reactions: dict
- timeline_url: string, length 70
- state_reason: string, 3 distinct values
- is_pull_request: bool, 2 classes
https://api.github.com/repos/huggingface/datasets/issues/2680
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2680/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2680/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2680/events
|
https://github.com/huggingface/datasets/pull/2680
| 948,649,716 |
MDExOlB1bGxSZXF1ZXN0NjkzNDYyNzY3
| 2,680 |
feat: 🎸 add paperswithcode id for qasper dataset
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-20T13:22:29 | 2021-07-20T14:04:10 | 2021-07-20T14:04:10 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2680",
"html_url": "https://github.com/huggingface/datasets/pull/2680",
"diff_url": "https://github.com/huggingface/datasets/pull/2680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2680.patch",
"merged_at": "2021-07-20T14:04:10"
}
|
The reverse reference exists on paperswithcode:
https://paperswithcode.com/dataset/qasper
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2680/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2679
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2679/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2679/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2679/events
|
https://github.com/huggingface/datasets/issues/2679
| 948,506,638 |
MDU6SXNzdWU5NDg1MDY2Mzg=
| 2,679 |
Cannot load the blog_authorship_corpus due to codec errors
|
{
"login": "izaskr",
"id": 38069449,
"node_id": "MDQ6VXNlcjM4MDY5NDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/38069449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izaskr",
"html_url": "https://github.com/izaskr",
"followers_url": "https://api.github.com/users/izaskr/followers",
"following_url": "https://api.github.com/users/izaskr/following{/other_user}",
"gists_url": "https://api.github.com/users/izaskr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izaskr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izaskr/subscriptions",
"organizations_url": "https://api.github.com/users/izaskr/orgs",
"repos_url": "https://api.github.com/users/izaskr/repos",
"events_url": "https://api.github.com/users/izaskr/events{/privacy}",
"received_events_url": "https://api.github.com/users/izaskr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-20T10:13:20 | 2021-07-21T17:02:21 | 2021-07-21T13:11:58 |
NONE
| null | null |
## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```python
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error similar to the one below was raised for (what seems like) every XML file.
```
/home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare
    self._download_and_prepare(
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2679/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2678
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2678/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2678/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2678/events
|
https://github.com/huggingface/datasets/issues/2678
| 948,471,222 |
MDU6SXNzdWU5NDg0NzEyMjI=
| 2,678 |
Import Error in Kaggle notebook
|
{
"login": "prikmm",
"id": 47216475,
"node_id": "MDQ6VXNlcjQ3MjE2NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/47216475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prikmm",
"html_url": "https://github.com/prikmm",
"followers_url": "https://api.github.com/users/prikmm/followers",
"following_url": "https://api.github.com/users/prikmm/following{/other_user}",
"gists_url": "https://api.github.com/users/prikmm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prikmm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prikmm/subscriptions",
"organizations_url": "https://api.github.com/users/prikmm/orgs",
"repos_url": "https://api.github.com/users/prikmm/repos",
"events_url": "https://api.github.com/users/prikmm/events{/privacy}",
"received_events_url": "https://api.github.com/users/prikmm/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-20T09:28:38 | 2021-07-21T13:59:26 | 2021-07-21T13:03:02 |
NONE
| null | null |
## Describe the bug
Not able to import the `datasets` library in Kaggle notebooks.
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-input-9-652e886d387f> in <module>
----> 1 import datasets
/opt/conda/lib/python3.7/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in <module>
36 import pandas as pd
37 import pyarrow as pa
---> 38 import pyarrow.compute as pc
39 from multiprocess import Pool, RLock
40 from tqdm.auto import tqdm
/opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in <module>
16 # under the License.
17
---> 18 from pyarrow._compute import ( # noqa
19 Function,
20 FunctionOptions,
ImportError: /opt/conda/lib/python3.7/site-packages/pyarrow/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Kaggle
- Python version: 3.7.10
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2678/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2677
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2677/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2677/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2677/events
|
https://github.com/huggingface/datasets/issues/2677
| 948,429,788 |
MDU6SXNzdWU5NDg0Mjk3ODg=
| 2,677 |
Error when downloading C4
|
{
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-20T08:37:30 | 2021-07-20T14:41:31 | 2021-07-20T14:38:10 |
NONE
| null | null |
Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see the screenshot below). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug, or do I have some configuration missing on my server?
Thanks!
<img width="1014" alt="Screenshot 2021-07-20 at 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png">
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2677/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2676
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2676/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2676/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2676/events
|
https://github.com/huggingface/datasets/pull/2676
| 947,734,909 |
MDExOlB1bGxSZXF1ZXN0NjkyNjc2NTg5
| 2,676 |
Increase json reader block_size automatically
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T14:51:14 | 2021-07-19T17:51:39 | 2021-07-19T17:51:38 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2676",
"html_url": "https://github.com/huggingface/datasets/pull/2676",
"diff_url": "https://github.com/huggingface/datasets/pull/2676.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2676.patch",
"merged_at": "2021-07-19T17:51:38"
}
|
Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
The block size that is used is the default one by pyarrow (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)).
To fix this issue, I made the block_size increase automatically when a straddling issue occurs while parsing a batch of JSON lines.
By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file.
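For illustration, the retry-with-doubling logic can be sketched as follows. This is a minimal sketch against pyarrow's public JSON reader; the helper name, the initial value, and the error-message check are assumptions, not the exact `datasets` implementation.
```python
import pyarrow as pa
import pyarrow.json as paj

def read_json_adaptive(path, chunksize=10 << 20):
    # Assumed starting point, mirroring the `chunksize // 32` default above.
    block_size = max(chunksize // 32, 16 << 10)
    while True:
        try:
            return paj.read_json(
                path, read_options=paj.ReadOptions(block_size=block_size)
            )
        except pa.ArrowInvalid as e:
            # Only retry on the specific "straddling" parse failure.
            if "straddling object" not in str(e):
                raise
            block_size *= 2  # double the block size and try the file again
```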
cc @thomwolf @albertvillanova
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2676/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2675/events
|
https://github.com/huggingface/datasets/pull/2675
| 947,657,732 |
MDExOlB1bGxSZXF1ZXN0NjkyNjEwNTA1
| 2,675 |
Parallelize ETag requests
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T13:30:42 | 2021-07-19T19:33:25 | 2021-07-19T19:33:25 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2675",
"html_url": "https://github.com/huggingface/datasets/pull/2675",
"diff_url": "https://github.com/huggingface/datasets/pull/2675.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2675.patch",
"merged_at": "2021-07-19T19:33:25"
}
|
Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed.
In this PR I made the ETag requests parallel using multithreading. There is also a tqdm progress bar that shows up if there are more than 16 data files.
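For illustration, a hypothetical sketch of the multithreaded approach (function names and the worker count are illustrative, not the actual `datasets` code):
```python
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional

import requests

def fetch_etag(url: str) -> Optional[str]:
    # A HEAD request is enough: only the response headers are needed.
    response = requests.head(url, allow_redirects=True, timeout=10)
    return response.headers.get("ETag")

def fetch_etags(urls: List[str], max_workers: int = 16) -> List[Optional[str]]:
    # Requests run concurrently in a thread pool; results keep the input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_etag, urls))
```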
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2675/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2674
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2674/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2674/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2674/events
|
https://github.com/huggingface/datasets/pull/2674
| 947,338,202 |
MDExOlB1bGxSZXF1ZXN0NjkyMzMzODU3
| 2,674 |
Fix sacrebleu parameter name
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T07:07:26 | 2021-07-19T08:07:03 | 2021-07-19T08:07:03 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2674",
"html_url": "https://github.com/huggingface/datasets/pull/2674",
"diff_url": "https://github.com/huggingface/datasets/pull/2674.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2674.patch",
"merged_at": "2021-07-19T08:07:03"
}
|
DONE:
- Fix parameter name: `smooth` to `smooth_method`.
- Improve kwargs description.
- Align docs on using a metric.
- Add an example of passing additional arguments when using metrics.
Related to #2669.
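For illustration, a call with the corrected parameter name might look like this (a hedged sketch; the accepted values for `smooth_method` come from the underlying sacrebleu library):
```python
from datasets import load_metric

sacrebleu = load_metric("sacrebleu")
predictions = ["the cat sat on the mat"]
references = [["the cat sat on the mat"]]  # one list of reference strings per prediction
score = sacrebleu.compute(
    predictions=predictions,
    references=references,
    smooth_method="exp",  # formerly documented (incorrectly) as `smooth`
)
```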
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2674/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2673
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2673/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2673/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2673/events
|
https://github.com/huggingface/datasets/pull/2673
| 947,300,008 |
MDExOlB1bGxSZXF1ZXN0NjkyMzAxMTgw
| 2,673 |
Fix potential DuplicatedKeysError in SQuAD
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T06:08:00 | 2021-07-19T07:08:03 | 2021-07-19T07:08:03 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2673",
"html_url": "https://github.com/huggingface/datasets/pull/2673",
"diff_url": "https://github.com/huggingface/datasets/pull/2673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2673.patch",
"merged_at": "2021-07-19T07:08:03"
}
|
DONE:
- Fix potential DuplicatedKeysError by ensuring keys are unique.
- Align examples in the docs with the SQuAD code.
We should promote it as a good practice that keys be programmatically generated as unique, instead of read from the data (which might not be unique).
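A minimal sketch of that pattern (illustrative, not the exact SQuAD script): the key comes from a counter, so it is unique by construction even if IDs in the raw data repeat.
```python
import json

def _generate_examples(filepath):
    key = 0
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)["data"]
    for article in data:
        for paragraph in article["paragraphs"]:
            for qa in paragraph["qas"]:
                # `qa["id"]` may repeat in the raw data; `key` never does.
                yield key, {"id": qa["id"], "question": qa["question"]}
                key += 1
```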
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2673/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2672/events
|
https://github.com/huggingface/datasets/pull/2672
| 947,294,605 |
MDExOlB1bGxSZXF1ZXN0NjkyMjk2NDQ4
| 2,672 |
Fix potential DuplicatedKeysError in LibriSpeech
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T06:00:49 | 2021-07-19T06:28:57 | 2021-07-19T06:28:56 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2672",
"html_url": "https://github.com/huggingface/datasets/pull/2672",
"diff_url": "https://github.com/huggingface/datasets/pull/2672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2672.patch",
"merged_at": "2021-07-19T06:28:56"
}
|
DONE:
- Fix an unnecessary path join.
- Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote it as a good practice that keys be programmatically generated as unique, instead of read from the data (which might not be unique).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2672/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2671
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2671/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2671/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2671/events
|
https://github.com/huggingface/datasets/pull/2671
| 947,273,875 |
MDExOlB1bGxSZXF1ZXN0NjkyMjc5MTM0
| 2,671 |
Mesinesp development and training data sets have been added.
|
{
"login": "aslihanuysall",
"id": 32900185,
"node_id": "MDQ6VXNlcjMyOTAwMTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/32900185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aslihanuysall",
"html_url": "https://github.com/aslihanuysall",
"followers_url": "https://api.github.com/users/aslihanuysall/followers",
"following_url": "https://api.github.com/users/aslihanuysall/following{/other_user}",
"gists_url": "https://api.github.com/users/aslihanuysall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aslihanuysall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aslihanuysall/subscriptions",
"organizations_url": "https://api.github.com/users/aslihanuysall/orgs",
"repos_url": "https://api.github.com/users/aslihanuysall/repos",
"events_url": "https://api.github.com/users/aslihanuysall/events{/privacy}",
"received_events_url": "https://api.github.com/users/aslihanuysall/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-19T05:14:38 | 2021-07-19T07:32:28 | 2021-07-19T06:45:50 |
NONE
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2671",
"html_url": "https://github.com/huggingface/datasets/pull/2671",
"diff_url": "https://github.com/huggingface/datasets/pull/2671.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2671.patch",
"merged_at": null
}
|
Mesinesp (https://zenodo.org/search?page=1&size=20&q=mesinesp) contains medical semantic indexing records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent of MeSH terms.
The Mesinesp development set (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) has a total of 750 records.
The Mesinesp training set has a total of 369,368 records.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2671/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2670
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2670/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2670/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2670/events
|
https://github.com/huggingface/datasets/issues/2670
| 947,120,709 |
MDU6SXNzdWU5NDcxMjA3MDk=
| 2,670 |
Using sharding to parallelize indexing
|
{
"login": "ggdupont",
"id": 5583410,
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggdupont",
"html_url": "https://github.com/ggdupont",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions",
"organizations_url": "https://api.github.com/users/ggdupont/orgs",
"repos_url": "https://api.github.com/users/ggdupont/repos",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggdupont/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-18T21:26:26 | 2021-10-07T13:33:25 | null |
CONTRIBUTOR
| null | null |
**Is your feature request related to a problem? Please describe.**
Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide).
**Describe the solution you'd like**
When working on dataset shards, if an index already exists, its mapping should be checked and, if compatible, the indexing process should continue with the shard data.
Additionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards were created) to allow using the index for later filtering on the whole dataset.
**Describe alternatives you've considered**
Each dataset shard could create independent partial indices. Then, at the whole-dataset level, all indices would be referenced in the `_indexes` dict and used when querying through `get_nearest_examples()`. The drawback is that scores would be computed independently on the partial indices, leading to inconsistent values for most scoring based on corpus-level statistics (tf-idf, BM25).
**Additional context**
The objective is to parallelize index creation to speed up the process (i.e., putting a heavier load on the ES server, which can handle it) while still enabling search on the whole dataset afterwards. A sketch of the intended workflow is given below.
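A sketch of that workflow using existing APIs (the dataset and column names are placeholders; today the per-shard calls below collide when run concurrently, which is exactly what this request wants fixed):
```python
from datasets import load_dataset

dataset = load_dataset("crime_and_punish", split="train")
shards = [dataset.shard(num_shards=4, index=i) for i in range(4)]

# Ideally each shard would index its rows in a separate process, all writing
# to the same Elasticsearch index on the same server:
for shard in shards:
    shard.add_elasticsearch_index("line", host="localhost", port=9200)
```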
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2670/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2670/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2669
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2669/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2669/events
|
https://github.com/huggingface/datasets/issues/2669
| 946,982,998 |
MDU6SXNzdWU5NDY5ODI5OTg=
| 2,669 |
Metric kwargs are not passed to underlying external metric f1_score
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-18T08:32:31 | 2021-07-18T18:36:05 | 2021-07-18T11:19:04 |
CONTRIBUTOR
| null | null |
## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to reproduce the bug
```python
import datasets
f1 = datasets.load_metric("f1", keep_in_memory=True, average="min")
f1.add_batch(predictions=[0,2,3], references=[1, 2, 3])
f1.compute()
```
## Expected results
No error, because `average="min"` should be passed correctly to f1_score in sklearn.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute
"f1": f1_score(
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score
return fbeta_score(y_true, y_pred, beta=1, labels=labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score
_, _, f, _ = precision_recall_fscore_support(y_true, y_pred,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support
labels = _check_set_wise_labels(y_true, y_pred, average, labels,
File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels
raise ValueError("Target is %s but average='binary'. Please "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
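A hedged workaround sketch, based on the traceback above showing that `compute` forwards its kwargs to `_compute`: pass `average` to `compute` rather than to `load_metric`. Note that sklearn's `f1_score` accepts values like "micro", "macro", or "weighted" (there is no "min").
```python
import datasets

f1 = datasets.load_metric("f1", keep_in_memory=True)
f1.add_batch(predictions=[0, 2, 3], references=[1, 2, 3])
result = f1.compute(average="macro")  # forwarded through to sklearn's f1_score
```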
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2669/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2668
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2668/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2668/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2668/events
|
https://github.com/huggingface/datasets/pull/2668
| 946,867,622 |
MDExOlB1bGxSZXF1ZXN0NjkxOTY1MTY1
| 2,668 |
Add Russian SuperGLUE
|
{
"login": "slowwavesleep",
"id": 44175589,
"node_id": "MDQ6VXNlcjQ0MTc1NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slowwavesleep",
"html_url": "https://github.com/slowwavesleep",
"followers_url": "https://api.github.com/users/slowwavesleep/followers",
"following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}",
"gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions",
"organizations_url": "https://api.github.com/users/slowwavesleep/orgs",
"repos_url": "https://api.github.com/users/slowwavesleep/repos",
"events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}",
"received_events_url": "https://api.github.com/users/slowwavesleep/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-17T17:41:28 | 2021-07-29T11:50:31 | 2021-07-29T11:50:31 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2668",
"html_url": "https://github.com/huggingface/datasets/pull/2668",
"diff_url": "https://github.com/huggingface/datasets/pull/2668.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2668.patch",
"merged_at": "2021-07-29T11:50:30"
}
|
Hi,
This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2668/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2668/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2667
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2667/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2667/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2667/events
|
https://github.com/huggingface/datasets/pull/2667
| 946,861,908 |
MDExOlB1bGxSZXF1ZXN0NjkxOTYwNzc3
| 2,667 |
Use tqdm from tqdm_utils
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-17T17:06:35 | 2021-07-19T17:39:10 | 2021-07-19T17:32:00 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2667",
"html_url": "https://github.com/huggingface/datasets/pull/2667",
"diff_url": "https://github.com/huggingface/datasets/pull/2667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2667.patch",
"merged_at": "2021-07-19T17:32:00"
}
|
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`.
Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work either), so adding support for this is a future goal.
Additionally, this PR adds a unit ("ba" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https://github.com/huggingface/datasets/issues/2657).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2667/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2667/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2666
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2666/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2666/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2666/events
|
https://github.com/huggingface/datasets/pull/2666
| 946,825,140 |
MDExOlB1bGxSZXF1ZXN0NjkxOTMzMDM1
| 2,666 |
Adds CodeClippy dataset [WIP]
|
{
"login": "arampacha",
"id": 69807323,
"node_id": "MDQ6VXNlcjY5ODA3MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arampacha",
"html_url": "https://github.com/arampacha",
"followers_url": "https://api.github.com/users/arampacha/followers",
"following_url": "https://api.github.com/users/arampacha/following{/other_user}",
"gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arampacha/subscriptions",
"organizations_url": "https://api.github.com/users/arampacha/orgs",
"repos_url": "https://api.github.com/users/arampacha/repos",
"events_url": "https://api.github.com/users/arampacha/events{/privacy}",
"received_events_url": "https://api.github.com/users/arampacha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-17T13:32:04 | 2023-07-26T23:06:01 | 2022-10-03T09:37:35 |
NONE
| true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2666",
"html_url": "https://github.com/huggingface/datasets/pull/2666",
"diff_url": "https://github.com/huggingface/datasets/pull/2666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2666.patch",
"merged_at": null
}
|
CodeClippy is an open-source code dataset scraped from GitHub during the Flax/JAX community week
https://the-eye.eu/public/AI/training_data/code_clippy_data/
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2666/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2666/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2665
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2665/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2665/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2665/events
|
https://github.com/huggingface/datasets/pull/2665
| 946,822,036 |
MDExOlB1bGxSZXF1ZXN0NjkxOTMwNjky
| 2,665 |
Adds APPS dataset to the hub [WIP]
|
{
"login": "arampacha",
"id": 69807323,
"node_id": "MDQ6VXNlcjY5ODA3MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arampacha",
"html_url": "https://github.com/arampacha",
"followers_url": "https://api.github.com/users/arampacha/followers",
"following_url": "https://api.github.com/users/arampacha/following{/other_user}",
"gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arampacha/subscriptions",
"organizations_url": "https://api.github.com/users/arampacha/orgs",
"repos_url": "https://api.github.com/users/arampacha/repos",
"events_url": "https://api.github.com/users/arampacha/events{/privacy}",
"received_events_url": "https://api.github.com/users/arampacha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-17T13:13:17 | 2022-10-03T09:38:10 | 2022-10-03T09:38:10 |
NONE
| true |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2665",
"html_url": "https://github.com/huggingface/datasets/pull/2665",
"diff_url": "https://github.com/huggingface/datasets/pull/2665.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2665.patch",
"merged_at": null
}
|
A loading script for the [APPS dataset](https://github.com/hendrycks/apps).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2665/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2665/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2663
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2663/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2663/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2663/events
|
https://github.com/huggingface/datasets/issues/2663
| 946,552,273 |
MDU6SXNzdWU5NDY1NTIyNzM=
| 2,663 |
[`to_json`] add multi-proc sharding support
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T19:41:50 | 2021-09-13T13:56:37 | 2021-09-13T13:56:37 |
CONTRIBUTOR
| null | null |
As discussed on Slack, it appears that `to_json` is quite slow on huge datasets like OSCAR.
I implemented sharded saving, which is much faster, but the tqdm bars all overwrite each other, so it's hard to make sense of the progress. Ideally, this multiprocessing support could be implemented internally in `to_json` via a `num_proc` argument. I guess `num_proc` would be the number of shards?
The user will need to use this feature wisely, since too many processes writing to, say, a conventional hard drive is likely to be slower than a single process.
I'm not sure whether the user or `datasets` should be responsible for concatenating the shards at the end; either way works for my needs.
The code I was using:
```python
from multiprocessing import cpu_count, Process
[...]
filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count())

DATASET_NAME = "oscar"
SHARDS = 10

def process_shard(idx):
    print(f"Sharding {idx}")
    ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True)
    # ds_shard = ds_shard.shuffle()  # remove contiguous=True above if shuffling
    print(f"Saving {DATASET_NAME}-{idx}.jsonl")
    ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False)

# one process per shard, each writing its own jsonl file
processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)]
for p in processes:
    p.start()
for p in processes:
    p.join()
```
Thank you!
@lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2663/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2662
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2662/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2662/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2662/events
|
https://github.com/huggingface/datasets/pull/2662
| 946,470,815 |
MDExOlB1bGxSZXF1ZXN0NjkxNjM5MjU5
| 2,662 |
Load Dataset from the Hub (NO DATASET SCRIPT)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T17:21:58 | 2021-08-25T14:53:01 | 2021-08-25T14:18:08 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2662",
"html_url": "https://github.com/huggingface/datasets/pull/2662",
"diff_url": "https://github.com/huggingface/datasets/pull/2662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2662.patch",
"merged_at": "2021-08-25T14:18:08"
}
|
## Load the data from any Dataset repository on the Hub
This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script.
As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. Here is an example with the `allenai/c4` repository that contains a lot of compressed json lines files:
```python
from datasets import load_dataset
data_files = {"train": "en/c4-train.*.json.gz"}
c4 = load_dataset("allenai/c4", data_files=data_files, split="train", streaming=True)
print(c4.n_shards)
# 1024
print(next(iter(c4)))
# {'text': 'Beginners BBQ Class Takin...'}
```
By default it loads all the files, but as shown in the example you can choose the ones you want with Unix-style patterns.
Of course it's still possible to use dataset scripts since they offer the most flexibility.
## Implementation details
It uses `huggingface_hub` to list the files in a dataset repository.
If you provide a path to a local directory instead of a repository name, it works the same way but it uses `glob`.
Depending on the data files available, or those passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders.
Because of this, it's not possible to load both csv and json files at once. In that case you have to load them separately and then concatenate the two datasets, as sketched below.
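For example, here is a minimal sketch of that workaround (the file paths are hypothetical, and it assumes both files share the same columns):
```python
from datasets import load_dataset, concatenate_datasets

# load the csv and the json files as two separate datasets
csv_ds = load_dataset("csv", data_files="data/part1.csv", split="train")
json_ds = load_dataset("json", data_files="data/part2.json", split="train")

# concatenating requires the two datasets to have identical features
combined = concatenate_datasets([csv_ds, json_ds])
```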
## TODO
- [x] tests
- [x] docs
- [x] when huggingface_hub gets a new release, update the CI and the setup.py
Close https://github.com/huggingface/datasets/issues/2629
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2662/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2662/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2661
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2661/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2661/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2661/events
|
https://github.com/huggingface/datasets/pull/2661
| 946,446,967 |
MDExOlB1bGxSZXF1ZXN0NjkxNjE5MzAz
| 2,661 |
Add SD task for SUPERB
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T16:43:21 | 2021-08-04T17:03:53 | 2021-08-04T17:03:53 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2661",
"html_url": "https://github.com/huggingface/datasets/pull/2661",
"diff_url": "https://github.com/huggingface/datasets/pull/2661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2661.patch",
"merged_at": "2021-08-04T17:03:52"
}
|
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
TODO:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [x] README: tags + description sections
- ~~Add DER metric~~ (we leave the DER metric for a follow-up PR)
Related to #2619.
Close #2653.
cc: @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2661/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2661/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2660
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2660/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2660/events
|
https://github.com/huggingface/datasets/pull/2660
| 946,316,180 |
MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0
| 2,660 |
Move checks from _map_single to map
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T13:53:33 | 2021-09-06T14:12:23 | 2021-09-06T14:12:23 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2660",
"html_url": "https://github.com/huggingface/datasets/pull/2660",
"diff_url": "https://github.com/huggingface/datasets/pull/2660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2660.patch",
"merged_at": "2021-09-06T14:12:23"
}
|
The goal of this PR is to remove duplicated checks in the `map` logic and execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR makes the `remove_columns` check consistent with `input_columns` by adding support for a single string value, which is then wrapped into a list (a sketch is shown below).
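A minimal sketch of the new `remove_columns` behavior (the toy dataset and function are illustrative, not from the PR):
```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": [0, 1], "sentence": ["a", "b"]})

def add_length(example):
    return {"length": len(example["sentence"])}

# after this change, both forms should behave identically
ds_str = ds.map(add_length, remove_columns="idx")     # single string
ds_list = ds.map(add_length, remove_columns=["idx"])  # list of strings
```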
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2660/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2659
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2659/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2659/events
|
https://github.com/huggingface/datasets/pull/2659
| 946,155,407 |
MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3
| 2,659 |
Allow dataset config kwargs to be None
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T10:25:38 | 2021-07-16T12:46:07 | 2021-07-16T12:46:07 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2659",
"html_url": "https://github.com/huggingface/datasets/pull/2659",
"diff_url": "https://github.com/huggingface/datasets/pull/2659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2659.patch",
"merged_at": "2021-07-16T12:46:06"
}
|
Close https://github.com/huggingface/datasets/issues/2658
The dataset config kwargs that were set to None were simply ignored.
This was an issue when None has a meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, where it lets pandas infer the separator.
cc @SBrandeis
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2659/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2658
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2658/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2658/events
|
https://github.com/huggingface/datasets/issues/2658
| 946,139,532 |
MDU6SXNzdWU5NDYxMzk1MzI=
| 2,658 |
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-16T10:05:44 | 2021-07-16T12:46:06 | 2021-07-16T12:46:06 |
MEMBER
| null | null |
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible to have the csv loader infer the separator. A repro sketch is below.
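A minimal repro sketch of the expected behavior (the `data.csv` path is hypothetical):
```python
from datasets import load_dataset

# expectation: sep=None is forwarded to pd.read_csv so that pandas infers the separator,
# but currently the default sep="," is used instead
ds = load_dataset("csv", data_files="data.csv", sep=None)
```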
Related to https://github.com/huggingface/datasets/pull/2656
cc @SBrandeis
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2658/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2657
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2657/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2657/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2657/events
|
https://github.com/huggingface/datasets/issues/2657
| 945,822,829 |
MDU6SXNzdWU5NDU4MjI4Mjk=
| 2,657 |
`to_json` reporting enhancements
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-15T23:32:18 | 2021-07-15T23:33:53 | null |
CONTRIBUTOR
| null | null |
While using `to_json`, two things came to mind that would have made the experience easier on the user:
1. Could we have a `desc` arg for the tqdm bar, with a fallback to just `to_json`, so that it'd be clear to the user what's happening? Surely one can just print a description before calling `to_json`, but I thought perhaps it'd help to have it self-identify, as you did for other progress bars recently.
2. It took me a while to make sense of the reported numbers:
```
22%|██▏ | 1536/7076 [12:30:57<44:09:42, 28.70s/it]
```
Here one iteration happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so while the progress bar itself is fine, the numbers it reports are meaningless until one discovers that 1 it = 10K samples. And one still has to do the conversion in one's head, so it's not quick. I'm not exactly sure what the best way to approach this is; perhaps it can be part of `desc`? Or report M or K, so the scaling would be built in when printed, e.g.:
```
22%|██▏ | 15360K/70760K [12:30:57<44:09:42, 28.70s/it]
```
or
```
22%|██▏ | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it]
```
(while of course remaining friendly to small datasets)
I forget if tqdm lets you add a magnitude identifier to the running count.
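If I'm not mistaken, tqdm's `unit_scale=True` does exactly this and scales the running count with SI prefixes (k, M, ...); a minimal sketch (the loop and unit name are illustrative):
```python
from tqdm import tqdm

# with unit_scale=True, tqdm prints e.g. "15.4M/70.8M ex" instead of raw iteration counts
for _ in tqdm(range(70_760_000), desc="to_json", unit="ex", unit_scale=True):
    pass
```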
Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2657/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2656
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2656/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2656/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2656/events
|
https://github.com/huggingface/datasets/pull/2656
| 945,421,790 |
MDExOlB1bGxSZXF1ZXN0NjkwNzUzNjA3
| 2,656 |
Change `from_csv` default arguments
|
{
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-15T14:09:06 | 2023-09-24T09:56:44 | 2021-07-16T10:23:26 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2656",
"html_url": "https://github.com/huggingface/datasets/pull/2656",
"diff_url": "https://github.com/huggingface/datasets/pull/2656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2656.patch",
"merged_at": null
}
|
Passing `sep=None` to pandas' `read_csv` lets pandas guess the CSV file's separator.
This PR allows users to take advantage of this pandas feature by passing `sep=None` to `Dataset.from_csv`:
```python
Dataset.from_csv(
...,
sep=None
)
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2656/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2655
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2655/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2655/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2655/events
|
https://github.com/huggingface/datasets/issues/2655
| 945,382,723 |
MDU6SXNzdWU5NDUzODI3MjM=
| 2,655 |
Allow the selection of multiple columns at once
|
{
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-15T13:30:45 | 2024-01-09T15:11:27 | 2024-01-09T07:46:28 |
CONTRIBUTOR
| null | null |
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```
**Describe alternatives you've considered**
We can do `[dataset[col] for col in ('idx', 'label')]`.
**Additional context**
This is of course very minor.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2655/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2655/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2654
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2654/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2654/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2654/events
|
https://github.com/huggingface/datasets/issues/2654
| 945,167,231 |
MDU6SXNzdWU5NDUxNjcyMzE=
| 2,654 |
Give a user feedback if the dataset he loads is streamable or not
|
{
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false |
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-15T09:07:27 | 2021-08-02T11:03:21 | null |
MEMBER
| null | null |
**Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` but is not streamable, e.g. if it is an archive. A hypothetical sketch is shown below.
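A hypothetical sketch of the proposed behavior (the dataset name and the warning text are made up; no such warning exists yet):
```python
from datasets import load_dataset

# hypothetical: the dataset below is backed by an archive and can't be streamed
ds = load_dataset("some_archive_based_dataset", streaming=True)
# hypothetical warning: "This dataset is not streamable with the current implementation."
```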
**Describe alternatives you've considered**
Add a new metadata tag for "streaming"
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2654/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2653
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2653/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2653/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2653/events
|
https://github.com/huggingface/datasets/issues/2653
| 945,102,321 |
MDU6SXNzdWU5NDUxMDIzMjE=
| 2,653 |
Add SD task for SUPERB
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-15T07:51:40 | 2021-08-04T17:03:52 | 2021-08-04T17:03:52 |
MEMBER
| null | null |
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [ ] README: tags + description sections
Related to #2619.
cc: @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2653/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2653/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2652
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2652/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2652/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2652/events
|
https://github.com/huggingface/datasets/pull/2652
| 944,865,924 |
MDExOlB1bGxSZXF1ZXN0NjkwMjg0MTI4
| 2,652 |
Fix logging docstring
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T23:19:58 | 2021-07-18T11:41:06 | 2021-07-15T09:57:31 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2652",
"html_url": "https://github.com/huggingface/datasets/pull/2652",
"diff_url": "https://github.com/huggingface/datasets/pull/2652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2652.patch",
"merged_at": "2021-07-15T09:57:31"
}
|
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2652/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2651
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2651/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2651/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2651/events
|
https://github.com/huggingface/datasets/issues/2651
| 944,796,961 |
MDU6SXNzdWU5NDQ3OTY5NjE=
| 2,651 |
Setting log level higher than warning does not suppress progress bar
|
{
"login": "Isa-rentacs",
"id": 1147443,
"node_id": "MDQ6VXNlcjExNDc0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1147443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isa-rentacs",
"html_url": "https://github.com/Isa-rentacs",
"followers_url": "https://api.github.com/users/Isa-rentacs/followers",
"following_url": "https://api.github.com/users/Isa-rentacs/following{/other_user}",
"gists_url": "https://api.github.com/users/Isa-rentacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isa-rentacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isa-rentacs/subscriptions",
"organizations_url": "https://api.github.com/users/Isa-rentacs/orgs",
"repos_url": "https://api.github.com/users/Isa-rentacs/repos",
"events_url": "https://api.github.com/users/Isa-rentacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isa-rentacs/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T21:06:51 | 2022-07-08T14:51:57 | 2021-07-15T03:41:35 |
NONE
| null | null |
## Describe the bug
I would like to disable progress bars for the `.map` method (and for other methods like `.filter` and `load_dataset` as well).
According to #1627, one can suppress them by setting the log level higher than `warning`; however, doing so doesn't suppress them with version 1.9.0.
I also tried setting the `DATASETS_VERBOSITY` environment variable to `error` or `critical`, but that didn't work either.
## Steps to reproduce the bug
```python
import datasets
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()
def dummy_map(batch):
return batch
common_voice_train = datasets.load_dataset("common_voice", "de", split="train")
common_voice_test = datasets.load_dataset("common_voice", "de", split="test")
common_voice_train.map(dummy_map)
```
## Expected results
- The progress bar for `.map` call won't be shown
## Actual results
- The progress bar for `.map` is still shown
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 4.0.1
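For readers hitting this today, here is a minimal sketch of the behaviour that was later adopted: newer releases of `datasets` decouple progress bars from log verbosity and expose an explicit toggle (the availability of `disable_progress_bar` is an assumption about your installed version; it was added after this report).
```python
import datasets

# Assumes a `datasets` release that ships `disable_progress_bar`;
# it silences the tqdm bars regardless of the logger verbosity.
datasets.disable_progress_bar()

ds = datasets.load_dataset("common_voice", "de", split="test")
ds = ds.map(lambda batch: batch)  # runs without drawing a progress bar
```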
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2651/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2651/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2650/events
|
https://github.com/huggingface/datasets/issues/2650
| 944,672,565 |
MDU6SXNzdWU5NDQ2NzI1NjU=
| 2,650 |
[load_dataset] shard and parallelize the process
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T18:04:58 | 2023-11-28T19:11:41 | 2023-11-28T19:11:40 |
CONTRIBUTOR
| null | null |
- Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core.
- If the build crashes, everything done up to that point gets lost.
Request: Shard the build over multiple arrow files, which would enable:
- a much faster build, by parallelizing the build process
- if the process crashes, the completed arrow files don't need to be rebuilt
Thank you!
@lhoestq
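A sketch of how this request was eventually addressed (assumption: a `datasets` release >= 2.5, which added a `num_proc` argument to `load_dataset`):
```python
from datasets import load_dataset

# `num_proc` shards the download-and-prepare step across processes and
# writes several arrow shards, so a crash does not lose all completed
# work (assumes datasets >= 2.5).
ds = load_dataset("oscar", "unshuffled_deduplicated_en", num_proc=8)
```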
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2650/reactions",
"total_count": 10,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 3
}
|
https://api.github.com/repos/huggingface/datasets/issues/2650/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2649/events
|
https://github.com/huggingface/datasets/issues/2649
| 944,651,229 |
MDU6SXNzdWU5NDQ2NTEyMjk=
| 2,649 |
adding progress bar / ETA for `load_dataset`
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T17:34:39 | 2023-03-27T10:32:49 | null |
CONTRIBUTOR
| null | null |
Please consider:
```
Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
HF google storage unreachable. Downloading and preparing it from source
```
and no indication whatsoever of whether things are working or when they will be done. It's important to have an estimated completion time when doing slurm jobs, since some instances have a cap on run-time.
I think this particular job sat in total silence for about 30 min, and then it started generating:
```
897850 examples [07:24, 10286.71 examples/s]
```
which is already great!
Request:
1. ETA - knowing how many hours to allocate for a slurm job
2. progress bar - helps to know things are working and aren't stuck and where we are at.
Thank you!
@lhoestq
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2649/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2649/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2648
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2648/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2648/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2648/events
|
https://github.com/huggingface/datasets/issues/2648
| 944,484,522 |
MDU6SXNzdWU5NDQ0ODQ1MjI=
| 2,648 |
Add web_split dataset for Paraphase and Rephrase benchmark
|
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false |
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T14:24:36 | 2021-07-14T14:26:12 | null |
CONTRIBUTOR
| null | null |
## Describe:
For splitting complex sentences into simple ones, a dataset and task like wiki_split is already available in Hugging Face datasets. web_split is a very similar dataset. Some research papers state that training a model on the combination of these two datasets yields better results on both test sets.
This dataset is made from WebNLG data.
All the dataset-related details are provided in the repository below.
Github link: https://github.com/shashiongithub/Split-and-Rephrase
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2648/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2648/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2647/events
|
https://github.com/huggingface/datasets/pull/2647
| 944,424,941 |
MDExOlB1bGxSZXF1ZXN0Njg5OTExMzky
| 2,647 |
Fix anchor in README
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T13:22:44 | 2021-07-18T11:41:18 | 2021-07-15T06:50:47 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2647",
"html_url": "https://github.com/huggingface/datasets/pull/2647",
"diff_url": "https://github.com/huggingface/datasets/pull/2647.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2647.patch",
"merged_at": "2021-07-15T06:50:47"
}
|
I forgot to push this fix in #2611, so I'm sending it now.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2647/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2646
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2646/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2646/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2646/events
|
https://github.com/huggingface/datasets/issues/2646
| 944,379,954 |
MDU6SXNzdWU5NDQzNzk5NTQ=
| 2,646 |
downloading of yahoo_answers_topics dataset failed
|
{
"login": "vikrant7k",
"id": 66781249,
"node_id": "MDQ6VXNlcjY2NzgxMjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/66781249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikrant7k",
"html_url": "https://github.com/vikrant7k",
"followers_url": "https://api.github.com/users/vikrant7k/followers",
"following_url": "https://api.github.com/users/vikrant7k/following{/other_user}",
"gists_url": "https://api.github.com/users/vikrant7k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikrant7k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikrant7k/subscriptions",
"organizations_url": "https://api.github.com/users/vikrant7k/orgs",
"repos_url": "https://api.github.com/users/vikrant7k/repos",
"events_url": "https://api.github.com/users/vikrant7k/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikrant7k/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T12:31:05 | 2022-08-04T08:28:24 | 2022-08-04T08:28:24 |
NONE
| null | null |
## Describe the bug
I get a `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` error when I try to download the yahoo_answers_topics dataset.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
self.dataset = load_dataset(
    'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')
```
## Expected results
The yahoo_answers_topics dataset downloads successfully.
## Actual results
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
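As a hedged note for readers: at the time of this report, two common workarounds were to force a fresh download (in case a cached file was corrupted) or to skip checksum verification; both arguments below existed in `datasets` 1.x.
```python
from datasets import load_dataset

# Re-download in case a cached file is stale or corrupted.
ds = load_dataset("yahoo_answers_topics", download_mode="force_redownload")

# Or skip checksum verification entirely (use with care: the upstream
# file may genuinely have changed).
ds = load_dataset("yahoo_answers_topics", ignore_verifications=True)
```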
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2646/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2645
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2645/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2645/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2645/events
|
https://github.com/huggingface/datasets/issues/2645
| 944,374,284 |
MDU6SXNzdWU5NDQzNzQyODQ=
| 2,645 |
load_dataset processing failed with OS error after downloading a dataset
|
{
"login": "fake-warrior8",
"id": 40395156,
"node_id": "MDQ6VXNlcjQwMzk1MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/40395156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fake-warrior8",
"html_url": "https://github.com/fake-warrior8",
"followers_url": "https://api.github.com/users/fake-warrior8/followers",
"following_url": "https://api.github.com/users/fake-warrior8/following{/other_user}",
"gists_url": "https://api.github.com/users/fake-warrior8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fake-warrior8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fake-warrior8/subscriptions",
"organizations_url": "https://api.github.com/users/fake-warrior8/orgs",
"repos_url": "https://api.github.com/users/fake-warrior8/repos",
"events_url": "https://api.github.com/users/fake-warrior8/events{/privacy}",
"received_events_url": "https://api.github.com/users/fake-warrior8/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T12:23:53 | 2021-07-15T09:34:02 | 2021-07-15T09:34:02 |
NONE
| null | null |
## Describe the bug
After downloading a dataset like opus100, preparation fails with the following error:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```
## Expected results
There is no error when running `load_dataset`.
## Actual results
Traceback (most recent call last):
File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prep
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example
example = cast_to_python_objects(example)
File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_ob
return _cast_to_python_objects(obj)[0]
File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_o
import torch
File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module>
_load_global_deps()
File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: dlopen: cannot load any more object with static TLS
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "download_hub_opus100.py", line 9, in <module>
this_dataset = load_dataset('opus100', language_pair)
File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
use_auth_token=use_auth_token,
File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepa
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prep
+ str(e)
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid
- Python version: 3.6.6
- PyArrow version: 3.0.0
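The traceback shows `datasets` importing torch lazily in the middle of the build, which trips the classic glibc static-TLS limitation. A minimal sketch of the usual mitigation (importing torch first is a well-known workaround for this dlopen error, not something confirmed in this thread):
```python
# Load torch's shared libraries up front, while static TLS slots are
# still available, instead of letting datasets import torch mid-build.
import torch  # noqa: F401  -- must come before the datasets import
from datasets import load_dataset

this_dataset = load_dataset('opus100', 'af-en')
```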
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2645/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2644
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2644/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2644/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2644/events
|
https://github.com/huggingface/datasets/issues/2644
| 944,254,748 |
MDU6SXNzdWU5NDQyNTQ3NDg=
| 2,644 |
Batched `map` not allowed to return 0 items
|
{
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T09:58:19 | 2021-07-26T14:55:15 | 2021-07-26T14:55:15 |
MEMBER
| null | null |
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`.
However, when the returned batch has a size of zero (no item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L100), but no elements were returned in this case.
For this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs.
## Steps to reproduce the bug
```python
def select_rows(examples):
# `key` is a column name that exists in the original dataset
# The following line simulates no matches found, so we return an empty batch
result = {'key': []}
return result
filtered_dataset = dataset.map(
select_rows,
remove_columns = dataset.column_names,
batched = True,
num_proc = 1,
desc = "Selecting rows with images that exist"
)
```
The code above immediately triggers the exception. If we use the following instead:
```python
def select_rows(examples):
# `key` is a column name that exists in the original dataset
result = {'key': []} # or defaultdict or whatever
# code to check for condition and append elements to result
# some_items_found will be set to True if there were any matching elements in the batch
return result if some_items_found else {}
```
Then it _seems_ to work, but it eventually fails with some sort of schema error. I believe it may happen when an empty batch is followed by a non-empty one, but haven't set up a test to verify it.
In my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items.
## Expected results
The dataset would be filtered and only the matching fields would be returned.
## Actual results
An exception is encountered, as described. Using a workaround makes it fail further along the line.
## Environment info
- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
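A minimal sketch of one possible workaround, reusing the placeholder `dataset` and `key` column from above: pass the target schema explicitly via `map`'s `features` argument, so the arrow writer never has to infer types from an empty first batch (that this sidesteps the inference failure is an assumption, not confirmed in this thread).
```python
import os
from datasets import Features, Value

def select_rows(examples):
    # `os.path.exists` stands in for the expensive per-row check;
    # an empty batch is returned when nothing matches.
    return {"key": [k for k in examples["key"] if os.path.exists(k)]}

filtered_dataset = dataset.map(
    select_rows,
    remove_columns=dataset.column_names,
    batched=True,
    features=Features({"key": Value("string")}),  # explicit schema
)
```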
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2644/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2643
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2643/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2643/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2643/events
|
https://github.com/huggingface/datasets/issues/2643
| 944,220,273 |
MDU6SXNzdWU5NDQyMjAyNzM=
| 2,643 |
Enum used in map functions will raise a RecursionError with dill.
|
{
"login": "jorgeecardona",
"id": 100702,
"node_id": "MDQ6VXNlcjEwMDcwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/100702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgeecardona",
"html_url": "https://github.com/jorgeecardona",
"followers_url": "https://api.github.com/users/jorgeecardona/followers",
"following_url": "https://api.github.com/users/jorgeecardona/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgeecardona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorgeecardona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgeecardona/subscriptions",
"organizations_url": "https://api.github.com/users/jorgeecardona/orgs",
"repos_url": "https://api.github.com/users/jorgeecardona/repos",
"events_url": "https://api.github.com/users/jorgeecardona/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorgeecardona/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T09:16:08 | 2021-11-02T09:51:11 | null |
NONE
| null | null |
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception, as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options, using the `TrainingArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module, including the definition of the enum, which runs into the dill bug described above.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from enum import Enum
class A(Enum):
a = 'a'
def main():
a = A.a
def f(x):
return {} if a == a.a else x
ds = load_dataset('cnn_dailymail', '3.0.0')['test']
ds = ds.map(f, num_proc=15)
if __name__ == "__main__":
main()
```
## Expected results
The known problem with dill could be prevented as explained in the link above (workaround). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.
## Actual results
```python
File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type
pickler.save_reduce(_create_type, (type(obj), obj.__name__,
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save
self.framer.commit_frame()
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame
if f.tell() >= self._FRAME_SIZE_TARGET or force:
RecursionError: maximum recursion depth exceeded while calling a Python object
```
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 3.0.0
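A sketch of one possible user-side mitigation, assuming dill only recurses into the Enum because the member is captured in the closure (this is a guess about dill's traversal, not confirmed here): capture the plain value instead of the Enum member.
```python
from datasets import load_dataset
from enum import Enum

class A(Enum):
    a = 'a'

def main():
    # Capture the underlying value, not the Enum member, so the mapped
    # function's closure contains only a plain string.
    a_value = A.a.value

    def f(x):
        return {} if a_value == 'a' else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```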
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2643/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2642
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2642/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2642/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2642/events
|
https://github.com/huggingface/datasets/issues/2642
| 944,175,697 |
MDU6SXNzdWU5NDQxNzU2OTc=
| 2,642 |
Support multi-worker with streaming dataset (IterableDataset).
|
{
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-14T08:22:58 | 2024-05-03T10:11:04 | null |
CONTRIBUTOR
| null | null |
**Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing for streaming datasets, so the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).
**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.
**Describe alternatives you've considered**
A simpler solution is to shard the dataset and process it in parallel with a pytorch dataloader (see the sketch under Additional context below). The shards do not need to be of equal size.
* https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset
**Additional context**
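A minimal sketch of the dataloader-based alternative mentioned above, assuming a streaming `datasets` iterable as input; the round-robin sharding by `worker_id` is an illustrative choice, not a prescribed API:
```python
from torch.utils.data import DataLoader, IterableDataset, get_worker_info

class ShardedStream(IterableDataset):
    """Each DataLoader worker yields its own slice of the stream."""

    def __init__(self, hf_iterable):
        self.hf_iterable = hf_iterable  # e.g. load_dataset(..., streaming=True)

    def __iter__(self):
        info = get_worker_info()
        num_workers = info.num_workers if info else 1
        worker_id = info.id if info else 0
        # Note: every worker still iterates the full stream and drops
        # the examples it doesn't own; pre-processing would go below.
        for i, example in enumerate(self.hf_iterable):
            if i % num_workers == worker_id:
                yield example

# loader = DataLoader(ShardedStream(stream), num_workers=4)
```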
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2642/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2642/timeline
| null | false |
https://api.github.com/repos/huggingface/datasets/issues/2641
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2641/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2641/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2641/events
|
https://github.com/huggingface/datasets/issues/2641
| 943,838,085 |
MDU6SXNzdWU5NDM4MzgwODU=
| 2,641 |
load_dataset("financial_phrasebank") NonMatchingChecksumError
|
{
"login": "courtmckay",
"id": 13956255,
"node_id": "MDQ6VXNlcjEzOTU2MjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/13956255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/courtmckay",
"html_url": "https://github.com/courtmckay",
"followers_url": "https://api.github.com/users/courtmckay/followers",
"following_url": "https://api.github.com/users/courtmckay/following{/other_user}",
"gists_url": "https://api.github.com/users/courtmckay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/courtmckay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/courtmckay/subscriptions",
"organizations_url": "https://api.github.com/users/courtmckay/orgs",
"repos_url": "https://api.github.com/users/courtmckay/repos",
"events_url": "https://api.github.com/users/courtmckay/events{/privacy}",
"received_events_url": "https://api.github.com/users/courtmckay/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T21:21:49 | 2022-08-04T08:30:08 | 2022-08-04T08:30:08 |
NONE
| null | null |
## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financial_phrasebank dataset downloaded successfully
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip']
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6
- Python version: 3.7.10
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2641/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2640
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2640/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2640/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2640/events
|
https://github.com/huggingface/datasets/pull/2640
| 943,591,055 |
MDExOlB1bGxSZXF1ZXN0Njg5MjAxMDkw
| 2,640 |
Fix docstrings
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T16:09:14 | 2021-07-15T06:51:01 | 2021-07-15T06:06:12 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2640",
"html_url": "https://github.com/huggingface/datasets/pull/2640",
"diff_url": "https://github.com/huggingface/datasets/pull/2640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2640.patch",
"merged_at": "2021-07-15T06:06:12"
}
|
Fix rendering of some docstrings.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2640/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2639
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2639/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2639/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2639/events
|
https://github.com/huggingface/datasets/pull/2639
| 943,527,463 |
MDExOlB1bGxSZXF1ZXN0Njg5MTQ3NDE5
| 2,639 |
Refactor patching to specific submodule
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T15:08:45 | 2021-07-13T16:52:49 | 2021-07-13T16:52:49 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2639",
"html_url": "https://github.com/huggingface/datasets/pull/2639",
"diff_url": "https://github.com/huggingface/datasets/pull/2639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2639.patch",
"merged_at": "2021-07-13T16:52:48"
}
|
Minor reorganization of the code, so that additional patching functions (not related to streaming) can be created.
In relation to the initial approach followed in #2631.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2639/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2638
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2638/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2638/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2638/events
|
https://github.com/huggingface/datasets/pull/2638
| 943,484,913 |
MDExOlB1bGxSZXF1ZXN0Njg5MTA5NTg1
| 2,638 |
Streaming for the Json loader
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T14:37:06 | 2021-07-16T15:59:32 | 2021-07-16T15:59:31 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2638",
"html_url": "https://github.com/huggingface/datasets/pull/2638",
"diff_url": "https://github.com/huggingface/datasets/pull/2638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2638.patch",
"merged_at": "2021-07-16T15:59:31"
}
|
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows.
Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related to #2573).
So I switched to using `open`, which is extended to support reading from remote files progressively, and I removed the pyarrow json reader, which was not practical.
Instead, I'm using the classical `json.loads` from the standard library.
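For illustration, a minimal sketch of this approach (not the builder's exact code; the helper name is hypothetical):
```python
import json

# Sketch: with streaming enabled, `open` is patched to read remote files
# progressively, so rows can be yielded one by one instead of downloading
# the whole file up front for pyarrow.
def iter_json_lines(file_path_or_url):
    with open(file_path_or_url, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                # stdlib parser: no pyarrow `block_size` tuning needed
                yield json.loads(line)
```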
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2638/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2636
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2636/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2636/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2636/events
|
https://github.com/huggingface/datasets/pull/2636
| 943,044,514 |
MDExOlB1bGxSZXF1ZXN0Njg4NzEyMTY4
| 2,636 |
Streaming for the Pandas loader
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T09:18:21 | 2021-07-13T14:37:24 | 2021-07-13T14:37:23 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2636",
"html_url": "https://github.com/huggingface/datasets/pull/2636",
"diff_url": "https://github.com/huggingface/datasets/pull/2636.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2636.patch",
"merged_at": "2021-07-13T14:37:23"
}
|
It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo, for example.
Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub.
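A minimal sketch of the pattern (illustrative only; not the builder's exact code):
```python
import pandas as pd

# Opening the file first lets the streaming-patched `open` handle remote
# access and HF Hub authentication; pandas then reads from the file object.
def read_pickle_streaming(file_path_or_url):
    with open(file_path_or_url, "rb") as f:
        return pd.read_pickle(f)  # pd.read_pickle accepts a file-like object
```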
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2636/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2635
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2635/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2635/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2635/events
|
https://github.com/huggingface/datasets/pull/2635
| 943,030,999 |
MDExOlB1bGxSZXF1ZXN0Njg4Njk5OTM5
| 2,635 |
Streaming for the CSV loader
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T09:08:58 | 2021-07-13T15:19:38 | 2021-07-13T15:19:37 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2635",
"html_url": "https://github.com/huggingface/datasets/pull/2635",
"diff_url": "https://github.com/huggingface/datasets/pull/2635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2635.patch",
"merged_at": "2021-07-13T15:19:37"
}
|
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows.
Indeed, when streaming, `open` is extended to support reading from remote files progressively.
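As a rough sketch (hypothetical helper, not the builder's exact code), progressive CSV reading could look like this:
```python
import pandas as pd

# Reading through an opened file object lets the streaming-patched `open`
# fetch remote bytes progressively; `chunksize` keeps memory bounded by
# yielding batches of rows instead of the whole table.
def iter_csv_rows(file_path_or_url, chunksize=10_000):
    with open(file_path_or_url, encoding="utf-8") as f:
        for chunk in pd.read_csv(f, chunksize=chunksize):
            yield from chunk.to_dict(orient="records")
```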
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2635/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2634
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2634/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2634/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2634/events
|
https://github.com/huggingface/datasets/pull/2634
| 942,805,621 |
MDExOlB1bGxSZXF1ZXN0Njg4NDk2Mzc2
| 2,634 |
Inject ASR template for lj_speech dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-13T06:04:54 | 2021-07-13T09:05:09 | 2021-07-13T09:05:09 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2634",
"html_url": "https://github.com/huggingface/datasets/pull/2634",
"diff_url": "https://github.com/huggingface/datasets/pull/2634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2634.patch",
"merged_at": "2021-07-13T09:05:09"
}
|
Related to: #2565, #2633.
cc: @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2634/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2634/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2633
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2633/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2633/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2633/events
|
https://github.com/huggingface/datasets/pull/2633
| 942,396,414 |
MDExOlB1bGxSZXF1ZXN0Njg4MTMwOTA5
| 2,633 |
Update ASR tags
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T19:58:31 | 2021-07-13T05:45:26 | 2021-07-13T05:45:13 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2633",
"html_url": "https://github.com/huggingface/datasets/pull/2633",
"diff_url": "https://github.com/huggingface/datasets/pull/2633.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2633.patch",
"merged_at": "2021-07-13T05:45:13"
}
|
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2633/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2632
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2632/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2632/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2632/events
|
https://github.com/huggingface/datasets/pull/2632
| 942,293,727 |
MDExOlB1bGxSZXF1ZXN0Njg4MDQyMjcw
| 2,632 |
add image-classification task template
|
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T17:41:03 | 2021-07-13T15:44:28 | 2021-07-13T15:28:16 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2632",
"html_url": "https://github.com/huggingface/datasets/pull/2632",
"diff_url": "https://github.com/huggingface/datasets/pull/2632.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2632.patch",
"merged_at": "2021-07-13T15:28:15"
}
|
Snippet below is the tl;dr, but you can try it out directly here:
[](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb)
```python
from datasets import load_dataset
ds = load_dataset('nateraw/image-folder', data_files='PetImages/')
# DatasetDict({
# train: Dataset({
# features: ['file', 'labels'],
# num_rows: 23410
# })
# })
ds = ds.prepare_for_task('image-classification')
# DatasetDict({
# train: Dataset({
# features: ['image_file_path', 'labels'],
# num_rows: 23410
# })
# })
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2632/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2632/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2631
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2631/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2631/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2631/events
|
https://github.com/huggingface/datasets/pull/2631
| 942,242,271 |
MDExOlB1bGxSZXF1ZXN0Njg3OTk3MzM2
| 2,631 |
Delete extracted files when loading dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T16:39:33 | 2021-07-19T09:08:19 | 2021-07-19T09:08:19 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2631",
"html_url": "https://github.com/huggingface/datasets/pull/2631",
"diff_url": "https://github.com/huggingface/datasets/pull/2631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2631.patch",
"merged_at": "2021-07-19T09:08:18"
}
|
Close #2481, close #2604, close #2591.
cc: @stas00, @thomwolf, @BirgerMoell
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2631/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2630
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2630/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2630/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2630/events
|
https://github.com/huggingface/datasets/issues/2630
| 942,102,956 |
MDU6SXNzdWU5NDIxMDI5NTY=
| 2,630 |
Progress bars are not properly rendered in Jupyter notebook
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T14:07:13 | 2022-02-03T15:55:33 | 2022-02-03T15:55:33 |
MEMBER
| null | null |
## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress bars.
## Actual results
Simple plain progress bars.
cc: Reported by @thomwolf
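A possible workaround sketch (assuming `ipywidgets` is installed; this is not the library's fix):
```python
# tqdm.auto picks the widget-based bar inside Jupyter and a text bar elsewhere.
from tqdm.auto import tqdm

for _ in tqdm(range(1000)):
    pass  # renders as a Jupyter widget in a notebook
```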
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2630/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2629
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2629/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2629/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2629/events
|
https://github.com/huggingface/datasets/issues/2629
| 941,819,205 |
MDU6SXNzdWU5NDE4MTkyMDU=
| 2,629 |
Load datasets from the Hub without requiring a dataset script
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T08:45:17 | 2021-08-25T14:18:08 | 2021-08-25T14:18:08 |
MEMBER
| null | null |
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script.
Moreover I would like to be able to specify which file goes into which split using the `data_files` argument.
This feature should be compatible with private repositories and dataset streaming.
This can be implemented by checking the extension of the files in the dataset repository and then using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.).
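For illustration, the packaged builders can already be called explicitly; the goal is to get the same behavior from a bare repo id (the repo name below is hypothetical):
```python
from datasets import load_dataset

# Today: pick the packaged builder by hand and point it at the files.
ds = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv"},
)

# Requested: infer the builder from the file extensions in the repo, e.g.
# load_dataset("username/my-csv-repo", streaming=True)
```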
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2629/reactions",
"total_count": 11,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 7,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2629/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2628
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2628/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2628/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2628/events
|
https://github.com/huggingface/datasets/pull/2628
| 941,676,404 |
MDExOlB1bGxSZXF1ZXN0Njg3NTE0NzQz
| 2,628 |
Use ETag of remote data files
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-12T05:10:10 | 2021-07-12T14:08:34 | 2021-07-12T08:40:07 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2628",
"html_url": "https://github.com/huggingface/datasets/pull/2628",
"diff_url": "https://github.com/huggingface/datasets/pull/2628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2628.patch",
"merged_at": "2021-07-12T08:40:07"
}
|
Use the ETag of remote data files to create the config ID.
Related to #2616.
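A rough sketch of retrieving an ETag (illustrative; not the library's exact code):
```python
from typing import Optional

import requests

# The ETag identifies the remote file version, so hashing it into the
# config ID invalidates the cache whenever the remote data changes.
def get_etag(url: str) -> Optional[str]:
    response = requests.head(url, allow_redirects=True, timeout=10)
    return response.headers.get("ETag")
```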
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2628/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2627
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2627/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2627/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2627/events
|
https://github.com/huggingface/datasets/pull/2627
| 941,503,349 |
MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1
| 2,627 |
Minor fix tests with Windows paths
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-11T17:55:48 | 2021-07-12T14:08:47 | 2021-07-12T08:34:50 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2627",
"html_url": "https://github.com/huggingface/datasets/pull/2627",
"diff_url": "https://github.com/huggingface/datasets/pull/2627.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2627.patch",
"merged_at": "2021-07-12T08:34:50"
}
|
Minor fix tests with Windows paths.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2627/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2626
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2626/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2626/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2626/events
|
https://github.com/huggingface/datasets/pull/2626
| 941,497,830 |
MDExOlB1bGxSZXF1ZXN0Njg3MzY4OTMz
| 2,626 |
Use correct logger in metrics.py
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-11T17:22:30 | 2021-07-12T14:08:54 | 2021-07-12T05:54:29 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2626",
"html_url": "https://github.com/huggingface/datasets/pull/2626",
"diff_url": "https://github.com/huggingface/datasets/pull/2626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2626.patch",
"merged_at": "2021-07-12T05:54:29"
}
|
Fixes #2624
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2626/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2625
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2625/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2625/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2625/events
|
https://github.com/huggingface/datasets/issues/2625
| 941,439,922 |
MDU6SXNzdWU5NDE0Mzk5MjI=
| 2,625 |
⚛️😇⚙️🔑
|
{
"login": "hustlen0mics",
"id": 50596661,
"node_id": "MDQ6VXNlcjUwNTk2NjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/50596661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hustlen0mics",
"html_url": "https://github.com/hustlen0mics",
"followers_url": "https://api.github.com/users/hustlen0mics/followers",
"following_url": "https://api.github.com/users/hustlen0mics/following{/other_user}",
"gists_url": "https://api.github.com/users/hustlen0mics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hustlen0mics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hustlen0mics/subscriptions",
"organizations_url": "https://api.github.com/users/hustlen0mics/orgs",
"repos_url": "https://api.github.com/users/hustlen0mics/repos",
"events_url": "https://api.github.com/users/hustlen0mics/events{/privacy}",
"received_events_url": "https://api.github.com/users/hustlen0mics/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-11T12:14:34 | 2021-07-12T05:55:59 | 2021-07-12T05:55:59 |
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2625/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2625/timeline
|
completed
| false |
|
https://api.github.com/repos/huggingface/datasets/issues/2624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2624/events
|
https://github.com/huggingface/datasets/issues/2624
| 941,318,247 |
MDU6SXNzdWU5NDEzMTgyNDc=
| 2,624 |
can't set verbosity for `metric.py`
|
{
"login": "thomas-happify",
"id": 66082334,
"node_id": "MDQ6VXNlcjY2MDgyMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomas-happify",
"html_url": "https://github.com/thomas-happify",
"followers_url": "https://api.github.com/users/thomas-happify/followers",
"following_url": "https://api.github.com/users/thomas-happify/following{/other_user}",
"gists_url": "https://api.github.com/users/thomas-happify/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomas-happify/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomas-happify/subscriptions",
"organizations_url": "https://api.github.com/users/thomas-happify/orgs",
"repos_url": "https://api.github.com/users/thomas-happify/repos",
"events_url": "https://api.github.com/users/thomas-happify/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomas-happify/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-10T20:23:45 | 2021-07-12T05:54:29 | 2021-07-12T05:54:29 |
NONE
| null | null |
## Describe the bug
```
[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock
[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.
[2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
[2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow
```
As you can see, `datasets` logging comes from different places.
`filelock`, `arrow_writer` & `arrow_dataset` come from `datasets.*`, which is expected.
However, the `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/`.
So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message, which is annoying during evaluation.
I had to do
```
logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR)
```
to fully mute these messages
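A likely root cause (an assumption based on the logger name above) is that `metric.py` builds its logger from `__file__` instead of `__name__`; a minimal sketch of the difference:
```python
from datasets.utils import logging

# A logger named after __file__ gets the file path as its name, so it
# escapes the "datasets.*" hierarchy and ignores set_verbosity_error().
bad_logger = logging.get_logger(__file__)   # name: /.../datasets/metric.py
good_logger = logging.get_logger(__name__)  # name: datasets.metric
```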
## Expected results
it shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: tried both 1.8.0 & 1.9.0
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.8.10
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2624/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2623
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2623/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2623/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2623/events
|
https://github.com/huggingface/datasets/pull/2623
| 941,265,342 |
MDExOlB1bGxSZXF1ZXN0Njg3MTk0MjM3
| 2,623 |
[Metrics] added wiki_split metrics
|
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-10T14:51:50 | 2021-07-14T14:28:13 | 2021-07-12T22:34:31 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2623",
"html_url": "https://github.com/huggingface/datasets/pull/2623",
"diff_url": "https://github.com/huggingface/datasets/pull/2623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2623.patch",
"merged_at": "2021-07-12T22:34:31"
}
|
Fixes: #2606
This pull request adds the combined metric for the WikiSplit (English sentence splitting) task.
Reviewer: @patrickvonplaten
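A hedged usage sketch (the exact output keys are assumptions; check the metric card):
```python
from datasets import load_metric

# wiki_split combines several scores (e.g. SARI, BLEU, exact match)
# for the English sentence-splitting task.
metric = load_metric("wiki_split")
results = metric.compute(
    sources=["About 95 species are currently accepted ."],
    predictions=["About 95 species are currently known ."],
    references=[["About 95 species are currently known ."]],
)
print(results)
```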
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2623/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2622
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2622/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2622/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2622/events
|
https://github.com/huggingface/datasets/issues/2622
| 941,127,785 |
MDU6SXNzdWU5NDExMjc3ODU=
| 2,622 |
Integration with AugLy
|
{
"login": "Darktex",
"id": 890615,
"node_id": "MDQ6VXNlcjg5MDYxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/890615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darktex",
"html_url": "https://github.com/Darktex",
"followers_url": "https://api.github.com/users/Darktex/followers",
"following_url": "https://api.github.com/users/Darktex/following{/other_user}",
"gists_url": "https://api.github.com/users/Darktex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darktex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darktex/subscriptions",
"organizations_url": "https://api.github.com/users/Darktex/orgs",
"repos_url": "https://api.github.com/users/Darktex/repos",
"events_url": "https://api.github.com/users/Darktex/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darktex/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-10T00:03:09 | 2023-07-20T13:18:48 | 2023-07-20T13:18:47 |
NONE
| null | null |
**Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy), that has a unified API for augmentations for image, video and text.
It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings, punctuation changes, emojis, etc. Plus, with Transformers supporting more CV use cases, augmentation support becomes crucial.
**Describe the solution you'd like**
The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but augmentations run once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set the format to PyTorch tensors, and be ready for the DataLoader.
**Describe alternatives you've considered**
One possible way of implementing these is to make a custom Dataset class where `__getitem__(i)` runs the augmentation and the tokenizer every time, though this would slow training down considerably given that we wouldn't even run the tokenizer in batches. A hedged alternative is sketched below.
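A minimal sketch of that alternative using `datasets` (`augment` and `tokenizer` are placeholders for an AugLy transform and a Hugging Face tokenizer):
```python
# set_transform applies the function on-the-fly each time rows are read,
# so a fresh augmentation is drawn every epoch without caching results.
def augment_and_tokenize(batch):
    augmented = [augment(text) for text in batch["text"]]
    return tokenizer(augmented, truncation=True, padding="max_length")

ds.set_transform(augment_and_tokenize)
```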
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2622/timeline
|
not_planned
| false |
https://api.github.com/repos/huggingface/datasets/issues/2621
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2621/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2621/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2621/events
|
https://github.com/huggingface/datasets/pull/2621
| 940,916,446 |
MDExOlB1bGxSZXF1ZXN0Njg2OTE1Mzcw
| 2,621 |
Use prefix to allow exceeding Windows MAX_PATH
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T16:39:53 | 2021-07-16T15:28:12 | 2021-07-16T15:28:11 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2621",
"html_url": "https://github.com/huggingface/datasets/pull/2621",
"diff_url": "https://github.com/huggingface/datasets/pull/2621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2621.patch",
"merged_at": "2021-07-16T15:28:11"
}
|
By using this prefix, you can exceed the Windows MAX_PATH limit.
See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces
Related to #2524, #2220.
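For reference, a minimal sketch of the mechanism (the helper name is illustrative, not the actual code in this PR):
```python
import os
import sys

def extended_path(path: str) -> str:
    # Prepend the Win32 extended-length prefix "\\?\" so that absolute paths
    # longer than MAX_PATH (260 characters) can still be opened on Windows.
    if sys.platform == "win32" and not path.startswith("\\\\?\\"):
        return "\\\\?\\" + os.path.abspath(path)
    return path
```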
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2621/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2621/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2620
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2620/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2620/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2620/events
|
https://github.com/huggingface/datasets/pull/2620
| 940,893,389 |
MDExOlB1bGxSZXF1ZXN0Njg2ODk3MDky
| 2,620 |
Add speech processing tasks
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T16:07:29 | 2021-07-12T18:32:59 | 2021-07-12T17:32:02 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2620",
"html_url": "https://github.com/huggingface/datasets/pull/2620",
"diff_url": "https://github.com/huggingface/datasets/pull/2620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2620.patch",
"merged_at": "2021-07-12T17:32:02"
}
|
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category.
The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2620/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2619
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2619/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2619/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2619/events
|
https://github.com/huggingface/datasets/pull/2619
| 940,858,236 |
MDExOlB1bGxSZXF1ZXN0Njg2ODY3NDA4
| 2,619 |
Add ASR task for SUPERB
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T15:19:45 | 2021-07-15T08:55:58 | 2021-07-13T12:40:18 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2619",
"html_url": "https://github.com/huggingface/datasets/pull/2619",
"diff_url": "https://github.com/huggingface/datasets/pull/2619.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2619.patch",
"merged_at": "2021-07-13T12:40:18"
}
|
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition).
Usage:
```python
from datasets import load_dataset
asr = load_dataset("superb", "asr")
# DatasetDict({
# train: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 28539
# })
# validation: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2703
# })
# test: Dataset({
# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],
# num_rows: 2620
# })
# })
```
I've used the GLUE benchmark as a guide for filling out the README.
To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel.
Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2619/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2619/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2618
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2618/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2618/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2618/events
|
https://github.com/huggingface/datasets/issues/2618
| 940,852,640 |
MDU6SXNzdWU5NDA4NTI2NDA=
| 2,618 |
`filelock.py` Error
|
{
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T15:12:49 | 2024-06-21T06:14:07 | 2023-11-23T19:06:19 |
NONE
| null | null |
## Describe the bug
It seems that `filelock.py` raised an error.
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available
```
According to the error log, it is an OSError, but there is an `except` clause in the `_acquire` function.
```
def _acquire(self):
open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC
try:
fd = os.open(self._lock_file, open_mode)
except (IOError, OSError):
pass
else:
self._lock_file_fd = fd
return None
```
I don't know why it got stuck instead of reaching the `pass` branch and continuing.
I am not quite familiar with file lock operations, so any help is highly appreciated.
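A simplified sketch of what presumably happens: the `OSError` from `fcntl.flock` *is* caught and swallowed, so `acquire()` keeps polling in a loop; since `ENOLCK` (errno 37, typically an NFS mount without lock support) never goes away, the loop never exits and only looks stuck:
```python
import errno
import fcntl
import os
import time

def acquire_like_filelock(lock_path, poll_interval=0.05):
    # Simplified version of the filelock retry loop, for illustration only.
    while True:
        fd = os.open(lock_path, os.O_WRONLY | os.O_CREAT)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd  # lock acquired
        except OSError as err:
            os.close(fd)
            if err.errno == errno.ENOLCK:
                # Permanent failure: the filesystem cannot provide locks,
                # so this loop spins until interrupted (e.g. Ctrl-C).
                pass
        time.sleep(poll_interval)
```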
## Steps to reproduce the bug
```python
ds = load_dataset('xsum')
```
## Expected results
The dataset loads without hanging on the file lock.
## Actual results
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset
use_auth_token=use_auth_token,
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module
with FileLock(lock_path):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
KeyboardInterrupt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2618/timeline
|
not_planned
| false |
https://api.github.com/repos/huggingface/datasets/issues/2617
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2617/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2617/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2617/events
|
https://github.com/huggingface/datasets/pull/2617
| 940,846,847 |
MDExOlB1bGxSZXF1ZXN0Njg2ODU3NzQz
| 2,617 |
Fix missing EOL issue in to_json for old versions of pandas
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T15:05:45 | 2021-07-12T14:09:00 | 2021-07-09T15:28:33 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2617",
"html_url": "https://github.com/huggingface/datasets/pull/2617",
"diff_url": "https://github.com/huggingface/datasets/pull/2617.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2617.patch",
"merged_at": "2021-07-09T15:28:33"
}
|
Some versions of pandas don't add an EOL at the end of the output of `to_json`.
Therefore, users could end up with two samples on the same line.
Close https://github.com/huggingface/datasets/issues/2615
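A minimal sketch of the kind of guard added here (function and variable names are illustrative, not the exact diff):
```python
def write_json_lines(file_obj, batch_df, **to_json_kwargs):
    # Serialize one pandas batch and ensure it ends with a newline, since
    # some pandas versions omit the trailing EOL in to_json(lines=True).
    json_str = batch_df.to_json(path_or_buf=None, orient="records", lines=True, **to_json_kwargs)
    if not json_str.endswith("\n"):
        json_str += "\n"
    file_obj.write(json_str)
```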
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2617/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2616
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2616/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2616/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2616/events
|
https://github.com/huggingface/datasets/pull/2616
| 940,799,038 |
MDExOlB1bGxSZXF1ZXN0Njg2ODE3NjYz
| 2,616 |
Support remote data files
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T14:07:38 | 2021-07-09T16:13:41 | 2021-07-09T16:13:41 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2616",
"html_url": "https://github.com/huggingface/datasets/pull/2616",
"diff_url": "https://github.com/huggingface/datasets/pull/2616.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2616.patch",
"merged_at": "2021-07-09T16:13:41"
}
|
Add support for (streaming) remote data files:
```python
data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}"
ds = load_dataset("json", split="train", data_files=data_files, streaming=True)
```
cc: @thomwolf
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2616/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2615
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2615/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2615/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2615/events
|
https://github.com/huggingface/datasets/issues/2615
| 940,794,339 |
MDU6SXNzdWU5NDA3OTQzMzk=
| 2,615 |
Jsonlines export error
|
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T14:02:05 | 2021-07-09T15:29:07 | 2021-07-09T15:28:33 |
CONTRIBUTOR
| null | null |
## Describe the bug
When exporting large datasets in jsonlines (C4 in my case), the created file has an error every 9999 lines: the 9999th and 10000th lines are concatenated, breaking the jsonlines format. This sounds like it is related to batching, which uses a batch size of 10000 by default.
## Steps to reproduce the bug
This what I'm running:
in python:
```
from datasets import load_dataset
ptb = load_dataset("ptb_text_only")
ptb["train"].to_json("ptb.jsonl")
```
then, outside of Python:
```
head -10000 ptb.jsonl
```
## Expected results
Properly separated lines
## Actual results
The last line is a concatenation of two lines
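A quick way to locate the malformed lines in the `ptb.jsonl` produced above (two concatenated records make `json.loads` fail with "Extra data"):
```python
import json

with open("ptb.jsonl") as f:
    for i, line in enumerate(f, start=1):
        try:
            json.loads(line)
        except json.JSONDecodeError:
            print(f"Line {i} contains concatenated records")
```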
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 4.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2615/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2614
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2614/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2614/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2614/events
|
https://github.com/huggingface/datasets/pull/2614
| 940,762,427 |
MDExOlB1bGxSZXF1ZXN0Njg2Nzg2NTg3
| 2,614 |
Convert numpy scalar to python float in Pearsonr output
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T13:22:55 | 2021-07-12T14:13:02 | 2021-07-09T14:04:38 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2614",
"html_url": "https://github.com/huggingface/datasets/pull/2614",
"diff_url": "https://github.com/huggingface/datasets/pull/2614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2614.patch",
"merged_at": "2021-07-09T14:04:38"
}
|
Follow-up to https://github.com/huggingface/datasets/pull/2612
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2614/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2613
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2613/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2613/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2613/events
|
https://github.com/huggingface/datasets/pull/2613
| 940,759,852 |
MDExOlB1bGxSZXF1ZXN0Njg2Nzg0MzY0
| 2,613 |
Use ndarray.item instead of ndarray.tolist
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T13:19:35 | 2021-07-12T14:12:57 | 2021-07-09T13:50:05 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2613",
"html_url": "https://github.com/huggingface/datasets/pull/2613",
"diff_url": "https://github.com/huggingface/datasets/pull/2613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2613.patch",
"merged_at": "2021-07-09T13:50:05"
}
|
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works).
Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item
PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
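A tiny illustration of the two calls (both already return a Python scalar for 0-d arrays; `item` just states the intent):
```python
import numpy as np

score = np.float64(0.5)
# tolist() on a 0-d array/scalar returns a plain float despite its name;
# item() makes the scalar conversion explicit and easier to read.
assert isinstance(score.tolist(), float)
assert isinstance(score.item(), float)
assert score.item() == 0.5
```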
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2613/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2612
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2612/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2612/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2612/events
|
https://github.com/huggingface/datasets/pull/2612
| 940,604,512 |
MDExOlB1bGxSZXF1ZXN0Njg2NjUwMjk3
| 2,612 |
Return Python float instead of numpy.float64 in sklearn metrics
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T09:48:09 | 2021-07-12T14:12:53 | 2021-07-09T13:03:54 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2612",
"html_url": "https://github.com/huggingface/datasets/pull/2612",
"diff_url": "https://github.com/huggingface/datasets/pull/2612.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2612.patch",
"merged_at": "2021-07-09T13:03:54"
}
|
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
```
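A minimal sketch of the conversion applied here (the wrapper function is illustrative):
```python
import yaml
from sklearn.metrics import accuracy_score

def compute_accuracy(predictions, references):
    # Cast sklearn's numpy.float64 to a plain Python float so the score
    # serializes to readable YAML instead of the numpy blob above.
    return {"accuracy": float(accuracy_score(references, predictions))}

score = compute_accuracy(predictions=[0, 1], references=[0, 1])
print(yaml.dump(score["accuracy"]))  # prints "1.0" followed by "..."
```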
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2612/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2611/events
|
https://github.com/huggingface/datasets/pull/2611
| 940,307,053 |
MDExOlB1bGxSZXF1ZXN0Njg2Mzk5MjU3
| 2,611 |
More consistent naming
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-09T00:09:17 | 2021-07-13T17:13:19 | 2021-07-13T16:08:30 |
COLLABORATOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2611",
"html_url": "https://github.com/huggingface/datasets/pull/2611",
"diff_url": "https://github.com/huggingface/datasets/pull/2611.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2611.patch",
"merged_at": "2021-07-13T16:08:30"
}
|
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2611/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2611/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2610
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2610/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2610/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2610/events
|
https://github.com/huggingface/datasets/pull/2610
| 939,899,829 |
MDExOlB1bGxSZXF1ZXN0Njg2MDUwMzI5
| 2,610 |
Add missing WikiANN language tags
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-08T14:08:01 | 2021-07-12T14:12:16 | 2021-07-08T15:44:04 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2610",
"html_url": "https://github.com/huggingface/datasets/pull/2610",
"diff_url": "https://github.com/huggingface/datasets/pull/2610.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2610.patch",
"merged_at": "2021-07-08T15:44:04"
}
|
Add missing language tags for WikiANN datasets.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2610/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2609
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2609/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2609/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2609/events
|
https://github.com/huggingface/datasets/pull/2609
| 939,616,682 |
MDExOlB1bGxSZXF1ZXN0Njg1ODA3MTMz
| 2,609 |
Fix potential DuplicatedKeysError
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-08T08:38:04 | 2021-07-12T14:13:16 | 2021-07-09T16:42:08 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2609",
"html_url": "https://github.com/huggingface/datasets/pull/2609",
"diff_url": "https://github.com/huggingface/datasets/pull/2609.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2609.patch",
"merged_at": "2021-07-09T16:42:08"
}
|
Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote it as a good practice that keys be programmatically generated as unique, instead of read from the data (which might not be unique).
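A minimal sketch of the recommended pattern inside a dataset script's `_generate_examples` (field names are illustrative):
```python
import json

def _generate_examples(self, filepath):
    # Use the row index as the key: programmatically unique, unlike an
    # "id" field read from the data, which may contain duplicates.
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            row = json.loads(line)
            yield idx, {"id": row.get("id"), "text": row["text"]}
```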
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2609/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2608
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2608/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2608/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2608/events
|
https://github.com/huggingface/datasets/pull/2608
| 938,897,626 |
MDExOlB1bGxSZXF1ZXN0Njg1MjAwMDYw
| 2,608 |
Support streaming JSON files
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T13:30:22 | 2021-07-12T14:12:31 | 2021-07-08T16:08:41 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2608",
"html_url": "https://github.com/huggingface/datasets/pull/2608",
"diff_url": "https://github.com/huggingface/datasets/pull/2608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2608.patch",
"merged_at": "2021-07-08T16:08:40"
}
|
Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming.
Close #2607.
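A minimal sketch of the pattern (illustrative, not the exact library code): because the builder calls the builtin `open`, streaming mode can monkey-patch it with an `xopen` that also understands remote URLs and compression:
```python
import fsspec

def xopen(path, mode="rt"):
    # Hypothetical streaming-aware open built on fsspec: handles local
    # paths and remote URLs, with on-the-fly gzip decompression.
    compression = "gzip" if str(path).endswith(".gz") else None
    return fsspec.open(path, mode, compression=compression).open()

def _generate_examples(files):
    for file in files:
        with xopen(file) as f:  # plain open() gets swapped for xopen when streaming
            for idx, line in enumerate(f):
                yield idx, line
```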
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2608/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2607
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2607/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2607/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2607/events
|
https://github.com/huggingface/datasets/issues/2607
| 938,796,902 |
MDU6SXNzdWU5Mzg3OTY5MDI=
| 2,607 |
Streaming local gzip compressed JSON line files is not working
|
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T11:36:33 | 2021-07-20T09:50:19 | 2021-07-08T16:08:41 |
MEMBER
| null | null |
## Describe the bug
Using streaming to iterate over local gzip-compressed JSON line files raises a file-not-found error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
next(iter(streamed_dataset))
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-27a664e29784> in <module>
----> 1 next(iter(streamed_dataset))
~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self)
336
337 def __iter__(self):
--> 338 for key, example in self._iter():
339 if self.features:
340 # we encode the example for ClassLabel feature types for example
~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in _iter(self)
333 else:
334 ex_iterable = self._ex_iterable
--> 335 yield from ex_iterable
336
337 def __iter__(self):
~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in wrapper(**kwargs)
282 def wrapper(**kwargs):
283 python_formatter = PythonFormatter()
--> 284 for key, table in generate_tables_fn(**kwargs):
285 batch = python_formatter.format_batch(table)
286 for i, example in enumerate(_batch_to_examples(batch)):
~/Documents/GitHub/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files, original_files)
85 file,
86 read_options=self.config.pa_read_options,
---> 87 parse_options=self.config.pa_parse_options,
88 )
89 except pa.ArrowInvalid as err:
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json._get_reader()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_input_stream()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_native_file()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile.__cinit__()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile._open_readable()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'gzip://file-000000000000.json::/Users/thomwolf/github-dataset/file-000000000000.json.gz'. Detail: [errno 2] No such file or directory
```
## Environment info
- `datasets` version: 1.9.1.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyArrow version: 1.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2607/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2606
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2606/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2606/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2606/events
|
https://github.com/huggingface/datasets/issues/2606
| 938,763,684 |
MDU6SXNzdWU5Mzg3NjM2ODQ=
| 2,606 |
[Metrics] addition of wiki_split metrics
|
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2459308248,
"node_id": "MDU6TGFiZWwyNDU5MzA4MjQ4",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20request",
"name": "metric request",
"color": "d4c5f9",
"default": false,
"description": "Requesting to add a new metric"
}
] |
closed
| false |
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T10:56:04 | 2021-07-12T22:34:31 | 2021-07-12T22:34:31 |
CONTRIBUTOR
| null | null |
**Is your feature request related to a problem? Please describe.**
While training a model on the sentence-splitting task in English, we need to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` scores,
like this

During training, we need a metric that can return all three outputs.
Currently, we don't have an exact match metric for text-normalized data.
**Describe the solution you'd like**
A custom metric for wiki_split that can calculate these three values and provide them in the form of a single dictionary.
For exact match, we can refer to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py)
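For illustration, a minimal sketch of such an exact match computation, loosely following the normalization in the linked `squad_metrics` script (the function names here are hypothetical, not part of any existing metric):
```python
import re
import string

def normalize_text(s):
    # Lowercase, strip punctuation and articles, collapse whitespace,
    # in the spirit of squad_metrics' normalize_answer.
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(predictions, references):
    # Fraction of predictions that match their reference after normalization.
    matches = sum(normalize_text(p) == normalize_text(r) for p, r in zip(predictions, references))
    return matches / len(predictions)
```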
**Describe alternatives you've considered**
Two of the metrics are already present; one more can be added for exact match, and then we can run all three metrics in the training script.
#self-assign
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2606/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2605
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2605/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2605/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2605/events
|
https://github.com/huggingface/datasets/pull/2605
| 938,648,164 |
MDExOlB1bGxSZXF1ZXN0Njg0OTkyODIz
| 2,605 |
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T08:47:23 | 2021-07-12T14:10:27 | 2021-07-07T08:59:13 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2605",
"html_url": "https://github.com/huggingface/datasets/pull/2605",
"diff_url": "https://github.com/huggingface/datasets/pull/2605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2605.patch",
"merged_at": "2021-07-07T08:59:13"
}
|
During the FLAX sprint, some users encountered this error when streaming datasets:
```python
aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer
```
This error should trigger a retry instead of directly crashing.
Therefore I extended the error type that triggers the retry to the base aiohttp error type: `ClientError`.
In particular, both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`.
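A minimal sketch of this retry pattern (illustrative only, not the actual `datasets` implementation; the function name and backoff schedule are made up):
```python
import asyncio
import aiohttp

async def fetch_with_retry(session, url, max_retries=3, base_delay=1.0):
    # Retrying on aiohttp.ClientError also covers ClientOSError and
    # ServerDisconnectedError, since both inherit from ClientError.
    for attempt in range(max_retries):
        try:
            async with session.get(url) as resp:
                return await resp.read()
        except aiohttp.ClientError:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)
```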
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2605/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2604
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2604/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2604/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2604/events
|
https://github.com/huggingface/datasets/issues/2604
| 938,602,237 |
MDU6SXNzdWU5Mzg2MDIyMzc=
| 2,604 |
Add option to delete temporary files (e.g. extracted files) when loading dataset
|
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T07:56:16 | 2021-07-19T09:08:18 | 2021-07-19T09:08:18 |
MEMBER
| null | null |
I'm loading a dataset made up of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before the 180 GB of arrow cache tables are created.
Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter.
I can maybe tackle this one in the JSON script unless you want a more general solution.
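From the user side, such an option could look like the sketch below (the `delete_extracted` flag on `DownloadConfig` is an assumption describing the requested behavior, not an existing parameter at the time of this issue):
```python
from datasets import load_dataset, DownloadConfig

data_files = ["file-000000000000.json.gz"]  # placeholder paths to the compressed JSON files
# Hypothetical flag: delete each extracted file once its arrow cache table is written.
dl_config = DownloadConfig(delete_extracted=True)
dataset = load_dataset("json", data_files=data_files, download_config=dl_config)
```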
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2604/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2603
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2603/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2603/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2603/events
|
https://github.com/huggingface/datasets/pull/2603
| 938,588,149 |
MDExOlB1bGxSZXF1ZXN0Njg0OTQ0ODcz
| 2,603 |
Fix DuplicatedKeysError in omp
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T07:38:32 | 2021-07-12T14:10:41 | 2021-07-07T12:56:35 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2603",
"html_url": "https://github.com/huggingface/datasets/pull/2603",
"diff_url": "https://github.com/huggingface/datasets/pull/2603.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2603.patch",
"merged_at": "2021-07-07T12:56:35"
}
|
Close #2598.
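The usual shape of such a fix is to yield a key that is unique by construction, e.g. the enumeration index, instead of a field that may repeat. A sketch assuming a JSON-lines file layout (hypothetical, not the actual `omp` script):
```python
import json

def _generate_examples(self, filepath):
    # Use the running index as the key so keys are unique by construction,
    # rather than a post id that may occur more than once in the data.
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            yield idx, json.loads(line)
```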
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2603/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2602
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2602/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2602/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2602/events
|
https://github.com/huggingface/datasets/pull/2602
| 938,555,712 |
MDExOlB1bGxSZXF1ZXN0Njg0OTE5MjMy
| 2,602 |
Remove import of transformers
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-07T06:58:18 | 2021-07-12T14:10:22 | 2021-07-07T08:28:51 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2602",
"html_url": "https://github.com/huggingface/datasets/pull/2602",
"diff_url": "https://github.com/huggingface/datasets/pull/2602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2602.patch",
"merged_at": "2021-07-07T08:28:51"
}
|
When pickling a tokenizer within multiprocessing, check that it is an instance of transformers' `PreTrainedTokenizerBase` without importing transformers.
Related to huggingface/transformers#12549 and #502.
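One way to perform such a check without importing the library is to match on qualified class names in the object's MRO. A sketch (not necessarily the exact code merged here):
```python
def is_transformers_tokenizer(obj):
    # Walk the MRO and compare module/class names, so transformers itself
    # never has to be imported just for this isinstance-like check.
    return any(
        cls.__module__.startswith("transformers") and cls.__name__ == "PreTrainedTokenizerBase"
        for cls in type(obj).__mro__
    )
```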
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2602/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2601
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2601/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2601/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2601/events
|
https://github.com/huggingface/datasets/pull/2601
| 938,096,396 |
MDExOlB1bGxSZXF1ZXN0Njg0NTQyNjY5
| 2,601 |
Fix `filter` with multiprocessing in case all samples are discarded
|
{
"login": "mxschmdt",
"id": 4904985,
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxschmdt",
"html_url": "https://github.com/mxschmdt",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T17:06:28 | 2021-07-12T14:10:35 | 2021-07-07T12:50:31 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2601",
"html_url": "https://github.com/huggingface/datasets/pull/2601",
"diff_url": "https://github.com/huggingface/datasets/pull/2601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2601.patch",
"merged_at": "2021-07-07T12:50:31"
}
|
Fixes #2600
I also moved the check for `num_proc` being larger than the dataset size (added in #2566) up, so that multiprocessing is not used with only one process.
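The shape of that check, as a standalone sketch (names are illustrative, not the merged diff):
```python
def effective_num_proc(num_proc, dataset_len):
    # Cap the number of processes at the dataset size; if only one process
    # remains, the caller takes the single-process path, so empty shards
    # are never produced and never concatenated.
    if num_proc is None:
        return 1
    return max(1, min(num_proc, dataset_len))
```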
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2601/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2600
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2600/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2600/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2600/events
|
https://github.com/huggingface/datasets/issues/2600
| 938,086,745 |
MDU6SXNzdWU5MzgwODY3NDU=
| 2,600 |
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
|
{
"login": "mxschmdt",
"id": 4904985,
"node_id": "MDQ6VXNlcjQ5MDQ5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxschmdt",
"html_url": "https://github.com/mxschmdt",
"followers_url": "https://api.github.com/users/mxschmdt/followers",
"following_url": "https://api.github.com/users/mxschmdt/following{/other_user}",
"gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions",
"organizations_url": "https://api.github.com/users/mxschmdt/orgs",
"repos_url": "https://api.github.com/users/mxschmdt/repos",
"events_url": "https://api.github.com/users/mxschmdt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxschmdt/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T16:53:25 | 2021-07-07T12:50:31 | 2021-07-07T12:50:31 |
CONTRIBUTOR
| null | null |
## Describe the bug
If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes.
## Steps to reproduce the bug
```python
from datasets import Dataset
data = Dataset.from_dict({'id': [0,1]})
data.filter(lambda x: False, num_proc=2)
```
## Expected results
An empty table should be returned without crashing.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter
return self.map(
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map
result = concatenate_datasets(transformed_shards)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets
table = concat_tables(tables_to_concat, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables
blocks = to_blocks(tables[0])
IndexError: list index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.10
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2600/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2599
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2599/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2599/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2599/events
|
https://github.com/huggingface/datasets/pull/2599
| 937,980,229 |
MDExOlB1bGxSZXF1ZXN0Njg0NDQ2MTYx
| 2,599 |
Update processing.rst with other export formats
|
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T14:50:38 | 2021-07-12T14:10:16 | 2021-07-07T08:05:48 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2599",
"html_url": "https://github.com/huggingface/datasets/pull/2599",
"diff_url": "https://github.com/huggingface/datasets/pull/2599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2599.patch",
"merged_at": "2021-07-07T08:05:48"
}
|
Add the other supported export formats, besides CSV, to the docs.
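For reference, a short sketch of the export methods involved (the exact set covered by the docs update may differ):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="test")

ds.to_csv("imdb_test.csv")     # CSV, as already documented
ds.to_json("imdb_test.jsonl")  # JSON lines
df = ds.to_pandas()            # in-memory pandas DataFrame
records = ds.to_dict()         # plain python dict of columns
```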
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2599/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2598
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2598/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2598/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2598/events
|
https://github.com/huggingface/datasets/issues/2598
| 937,930,632 |
MDU6SXNzdWU5Mzc5MzA2MzI=
| 2,598 |
Unable to download omp dataset
|
{
"login": "erikadistefano",
"id": 25797960,
"node_id": "MDQ6VXNlcjI1Nzk3OTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/25797960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikadistefano",
"html_url": "https://github.com/erikadistefano",
"followers_url": "https://api.github.com/users/erikadistefano/followers",
"following_url": "https://api.github.com/users/erikadistefano/following{/other_user}",
"gists_url": "https://api.github.com/users/erikadistefano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikadistefano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikadistefano/subscriptions",
"organizations_url": "https://api.github.com/users/erikadistefano/orgs",
"repos_url": "https://api.github.com/users/erikadistefano/repos",
"events_url": "https://api.github.com/users/erikadistefano/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikadistefano/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T14:00:52 | 2021-07-07T12:56:35 | 2021-07-07T12:56:35 |
NONE
| null | null |
## Describe the bug
The omp dataset cannot be downloaded because of a DuplicatedKeysError
## Steps to reproduce the bug
```python
from datasets import load_dataset

omp = load_dataset('omp', 'posts_labeled')
print(omp)
```
## Expected results
This code should download the omp dataset and print the dictionary
## Actual results
```
Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b...
0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split
writer.write(example, key)
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write
self.check_duplicate_keys()
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "hf_datasets.py", line 32, in <module>
omp = load_dataset('omp', 'posts_labeled')
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset
use_auth_token=use_auth_token,
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize
self.check_duplicate_keys()
File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 3326
Keys should be unique and deterministic in nature
```
## Environment info
- `datasets` version: 1.8.0
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.6.9
- PyArrow version: 3.0.0
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2598/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2597
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2597/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2597/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2597/events
|
https://github.com/huggingface/datasets/pull/2597
| 937,917,770 |
MDExOlB1bGxSZXF1ZXN0Njg0Mzk0MDIz
| 2,597 |
Remove redundant prepare_module
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T13:47:45 | 2021-07-12T14:10:52 | 2021-07-07T13:01:46 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2597",
"html_url": "https://github.com/huggingface/datasets/pull/2597",
"diff_url": "https://github.com/huggingface/datasets/pull/2597.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2597.patch",
"merged_at": "2021-07-07T13:01:46"
}
|
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2597/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2596
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2596/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2596/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2596/events
|
https://github.com/huggingface/datasets/issues/2596
| 937,598,914 |
MDU6SXNzdWU5Mzc1OTg5MTQ=
| 2,596 |
Transformer Class on dataset
|
{
"login": "arita37",
"id": 18707623,
"node_id": "MDQ6VXNlcjE4NzA3NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/18707623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arita37",
"html_url": "https://github.com/arita37",
"followers_url": "https://api.github.com/users/arita37/followers",
"following_url": "https://api.github.com/users/arita37/following{/other_user}",
"gists_url": "https://api.github.com/users/arita37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arita37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arita37/subscriptions",
"organizations_url": "https://api.github.com/users/arita37/orgs",
"repos_url": "https://api.github.com/users/arita37/repos",
"events_url": "https://api.github.com/users/arita37/events{/privacy}",
"received_events_url": "https://api.github.com/users/arita37/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T07:27:15 | 2022-11-02T14:26:09 | 2022-11-02T14:26:09 |
NONE
| null | null |
Just wondering if you have any intention to create a
TransformerClass:
dataset --> dataset
that makes deterministic transformations (i.e., not fit).
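For context, `datasets` already offers deterministic dataset-to-dataset transformations through `Dataset.map`, which applies a pure function and returns a new dataset. A minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# A deterministic transformation: no fitting, just a pure function per row.
ds_with_len = ds.map(lambda example: {"text_len": len(example["text"])})
```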
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2596/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2595
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2595/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2595/events
|
https://github.com/huggingface/datasets/issues/2595
| 937,483,120 |
MDU6SXNzdWU5Mzc0ODMxMjA=
| 2,595 |
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
|
{
"login": "profsatwinder",
"id": 41314912,
"node_id": "MDQ6VXNlcjQxMzE0OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/41314912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/profsatwinder",
"html_url": "https://github.com/profsatwinder",
"followers_url": "https://api.github.com/users/profsatwinder/followers",
"following_url": "https://api.github.com/users/profsatwinder/following{/other_user}",
"gists_url": "https://api.github.com/users/profsatwinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/profsatwinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/profsatwinder/subscriptions",
"organizations_url": "https://api.github.com/users/profsatwinder/orgs",
"repos_url": "https://api.github.com/users/profsatwinder/repos",
"events_url": "https://api.github.com/users/profsatwinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/profsatwinder/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-06T03:20:55 | 2021-07-06T05:59:49 | 2021-07-06T05:59:49 |
NONE
| null | null |
Error traceback:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation")
4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test")
9 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>()
19
20 import datasets
---> 21 from datasets.tasks import AutomaticSpeechRecognition
22
23
ModuleNotFoundError: No module named 'datasets.tasks'
```
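A likely cause (an assumption based on the traceback, since `datasets.tasks` only exists in recent releases) is an outdated local `datasets` install being combined with a newer hosted dataset script; upgrading usually resolves it:
```python
# pip install --upgrade datasets
import datasets
print(datasets.__version__)  # check the installed version after upgrading
```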
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2595/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2594
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2594/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2594/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2594/events
|
https://github.com/huggingface/datasets/pull/2594
| 937,294,772 |
MDExOlB1bGxSZXF1ZXN0NjgzODc0NjIz
| 2,594 |
Fix BibTeX entry
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T18:24:10 | 2021-07-06T04:59:38 | 2021-07-06T04:59:38 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2594",
"html_url": "https://github.com/huggingface/datasets/pull/2594",
"diff_url": "https://github.com/huggingface/datasets/pull/2594.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2594.patch",
"merged_at": "2021-07-06T04:59:38"
}
|
Fix BibTeX entry.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2594/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2593
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2593/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2593/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2593/events
|
https://github.com/huggingface/datasets/pull/2593
| 937,242,137 |
MDExOlB1bGxSZXF1ZXN0NjgzODMwMjcy
| 2,593 |
Support pandas 1.3.0 read_csv
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T16:40:04 | 2021-07-05T17:14:14 | 2021-07-05T17:14:14 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2593",
"html_url": "https://github.com/huggingface/datasets/pull/2593",
"diff_url": "https://github.com/huggingface/datasets/pull/2593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2593.patch",
"merged_at": "2021-07-05T17:14:14"
}
|
Workaround for this issue in pandas 1.3.0: https://github.com/pandas-dev/pandas/issues/42387
The CSV reader raises an error:
```python
/usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults)
1304
1305 if names is not lib.no_default and prefix is not lib.no_default:
-> 1306 raise ValueError("Specified named and prefix; you can only specify one.")
1307
1308 kwds["names"] = None if names is lib.no_default else names
ValueError: Specified named and prefix; you can only specify one.
```
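The gist of the workaround, as an illustrative sketch (not the exact merged diff): drop the unused keyword before calling `read_csv`, since pandas 1.3.0 rejects receiving both `names` and `prefix` even when one of them is `None`.
```python
import pandas as pd

read_csv_kwargs = {"names": ["text", "label"], "prefix": None}  # placeholder kwargs
if read_csv_kwargs.get("names") is not None:
    read_csv_kwargs.pop("prefix", None)  # avoid passing both to pandas 1.3.0
df = pd.read_csv("data.csv", **read_csv_kwargs)  # placeholder file path
```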
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2593/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2592
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2592/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2592/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2592/events
|
https://github.com/huggingface/datasets/pull/2592
| 937,060,559 |
MDExOlB1bGxSZXF1ZXN0NjgzNjc2MjA4
| 2,592 |
Add c4.noclean infos
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T12:51:40 | 2021-07-05T13:15:53 | 2021-07-05T13:15:52 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2592",
"html_url": "https://github.com/huggingface/datasets/pull/2592",
"diff_url": "https://github.com/huggingface/datasets/pull/2592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2592.patch",
"merged_at": "2021-07-05T13:15:52"
}
|
Adds the data file checksums and the dataset size for the c4.noclean configuration of the C4 dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2592/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2591
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2591/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2591/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2591/events
|
https://github.com/huggingface/datasets/issues/2591
| 936,957,975 |
MDU6SXNzdWU5MzY5NTc5NzU=
| 2,591 |
Cached dataset overflowing disk space
|
{
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T10:43:19 | 2021-07-19T09:08:19 | 2021-07-19T09:08:19 |
CONTRIBUTOR
| null | null |
I'm training a Swedish Wav2vec2 model on a Linux GPU, and the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB).
The cache folder is 500 GB (and now my disk is full).
Is there a way to disable caching, or to store the cache on a different device? (I have another drive with 4 TB that could hold the cache files.)
This might not technically be a bug, but I was unsure, and the bug template felt like the closest fit.
```
Traceback (most recent call last):
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single
    writer.finalize()
  File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize
    self.pa_writer.close()
  File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close
  File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
"""

The above exception was the direct cause of the following exception:
```
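For anyone hitting this: the cache location can be redirected to a bigger drive, either globally via the `HF_DATASETS_CACHE` environment variable or per call via `cache_dir`. A short sketch (the paths and dataset name are illustrative):
```python
import os

# Option 1: set the env variable before `datasets` is imported,
# since the cache path is resolved at import time
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_cache"

from datasets import load_dataset

# Option 2: redirect the cache for a single call
dataset = load_dataset("common_voice", "sv-SE", split="train",
                       cache_dir="/mnt/bigdrive/hf_cache")
```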
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2591/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2590
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2590/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2590/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2590/events
|
https://github.com/huggingface/datasets/pull/2590
| 936,954,348 |
MDExOlB1bGxSZXF1ZXN0NjgzNTg1MDg2
| 2,590 |
Add language tags
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T10:39:57 | 2021-07-05T10:58:48 | 2021-07-05T10:58:48 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2590",
"html_url": "https://github.com/huggingface/datasets/pull/2590",
"diff_url": "https://github.com/huggingface/datasets/pull/2590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2590.patch",
"merged_at": "2021-07-05T10:58:48"
}
|
This PR adds some missing language tags needed for ASR datasets in #2565
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2590/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2589
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2589/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2589/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2589/events
|
https://github.com/huggingface/datasets/pull/2589
| 936,825,060 |
MDExOlB1bGxSZXF1ZXN0NjgzNDc0OTQ0
| 2,589 |
Support multilabel metrics
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T08:19:25 | 2022-07-29T10:56:25 | 2021-07-08T08:40:15 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2589",
"html_url": "https://github.com/huggingface/datasets/pull/2589",
"diff_url": "https://github.com/huggingface/datasets/pull/2589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2589.patch",
"merged_at": "2021-07-08T08:40:15"
}
|
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`.
This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed.
Close #2554.
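A rough sketch of the idea, not the PR's actual implementation: the feature type resolves to a scalar or a sequence depending on the shape of the incoming data.
```python
from datasets import Sequence, Value

def resolve_optional_int32(example_value):
    # Multilabel input (a list of labels) encodes as Sequence(Value("int32")),
    # single-label input as a plain Value("int32").
    if isinstance(example_value, (list, tuple)):
        return Sequence(Value("int32"))
    return Value("int32")

print(resolve_optional_int32(3))          # scalar feature
print(resolve_optional_int32([0, 2, 5]))  # sequence feature
```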
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2589/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2588
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2588/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2588/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2588/events
|
https://github.com/huggingface/datasets/pull/2588
| 936,795,541 |
MDExOlB1bGxSZXF1ZXN0NjgzNDQ5Njky
| 2,588 |
Fix test_is_small_dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T07:46:26 | 2021-07-12T14:10:11 | 2021-07-06T17:09:30 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2588",
"html_url": "https://github.com/huggingface/datasets/pull/2588",
"diff_url": "https://github.com/huggingface/datasets/pull/2588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2588.patch",
"merged_at": "2021-07-06T17:09:30"
}
|
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the env variable is read in `datasets.config` when `datasets` is first loaded, and it is never reread during the tests.
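A sketch of the usual fix for this pattern: patch the already-imported config attribute with pytest's `monkeypatch` rather than the environment variable. The attribute name below is an assumption for illustration:
```python
import datasets.config

def test_is_small_dataset(monkeypatch):
    # Patch the module-level value that was computed at import time;
    # setting the env variable at this point would never be re-read.
    monkeypatch.setattr(datasets.config, "MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", 100)
    ...
```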
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2588/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2587
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2587/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2587/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2587/events
|
https://github.com/huggingface/datasets/pull/2587
| 936,771,339 |
MDExOlB1bGxSZXF1ZXN0NjgzNDI5NjQy
| 2,587 |
Add aiohttp to tests extras require
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T07:14:01 | 2021-07-05T09:04:38 | 2021-07-05T09:04:38 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2587",
"html_url": "https://github.com/huggingface/datasets/pull/2587",
"diff_url": "https://github.com/huggingface/datasets/pull/2587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2587.patch",
"merged_at": "2021-07-05T09:04:38"
}
|
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp, which is missing from our tests extras require dependencies.
Our CI test suite should be exhaustive and test all of the library's functionality.
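The change itself is small — roughly this in `setup.py` (an illustrative sketch, not the exact diff):
```python
TESTS_REQUIRE = [
    "pytest",
    "pytest-xdist",
    "aiohttp",  # needed by the streaming tests so they run in CI
]
```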
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2587/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2586
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2586/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2586/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2586/events
|
https://github.com/huggingface/datasets/pull/2586
| 936,747,588 |
MDExOlB1bGxSZXF1ZXN0NjgzNDEwMDU3
| 2,586 |
Fix misalignment in SQuAD
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-05T06:42:20 | 2021-07-12T14:11:10 | 2021-07-07T13:18:51 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2586",
"html_url": "https://github.com/huggingface/datasets/pull/2586",
"diff_url": "https://github.com/huggingface/datasets/pull/2586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2586.patch",
"merged_at": "2021-07-07T13:18:51"
}
|
Fix misalignment between:
- the answer text and
- the answer_start within the context
by keeping the original leading blank spaces in the context.
Fix #2585.
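A constructed example (the context string is made up) of how stripping the leading space shifts every recorded offset by one:
```python
context = " Pure Land teachings spread widely."  # original context, note the leading space
answer_start = 1                                 # offset computed on the original string

stripped = context.lstrip()                      # the old preprocessing step
print(context[answer_start:answer_start + 9])    # 'Pure Land'
print(stripped[answer_start:answer_start + 9])   # 'ure Land ' -- misaligned
```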
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2586/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2585
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2585/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2585/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2585/events
|
https://github.com/huggingface/datasets/issues/2585
| 936,484,419 |
MDU6SXNzdWU5MzY0ODQ0MTk=
| 2,585 |
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index
|
{
"login": "mmajurski",
"id": 9354454,
"node_id": "MDQ6VXNlcjkzNTQ0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmajurski",
"html_url": "https://github.com/mmajurski",
"followers_url": "https://api.github.com/users/mmajurski/followers",
"following_url": "https://api.github.com/users/mmajurski/following{/other_user}",
"gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions",
"organizations_url": "https://api.github.com/users/mmajurski/orgs",
"repos_url": "https://api.github.com/users/mmajurski/repos",
"events_url": "https://api.github.com/users/mmajurski/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmajurski/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false |
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-04T15:39:49 | 2021-07-07T13:18:51 | 2021-07-07T13:18:51 |
NONE
| null | null |
## Describe the bug
The built-in Hugging Face squad_v2 dataset that you can access via datasets.load_dataset contains misalignments between answers['text'] and the characters in the context at the location specified by answers['answer_start'].
For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['Pure Land'], 'answer_start': [146]}
However, the actual text in the context at location 146 is 'ure Land,', which is an off-by-one error from the correct answer.
## Steps to reproduce the bug
```python
import datasets


def check_context_answer_alignment(example):
    for a_idx in range(len(example['answers']['text'])):
        # check raw dataset for answer consistency between context and answer
        answer_text = example['answers']['text'][a_idx]
        a_st_idx = example['answers']['answer_start'][a_idx]
        a_end_idx = a_st_idx + len(example['answers']['text'][a_idx])
        answer_text_from_context = example['context'][a_st_idx:a_end_idx]
        if answer_text != answer_text_from_context:
            # print(example['id'])
            return False
    return True


dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True)

start_len = len(dataset)
dataset = dataset.filter(check_context_answer_alignment,
                         num_proc=1,
                         keep_in_memory=True)
end_len = len(dataset)

print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len))
```
## Expected results
This code should result in 0 rows being filtered out from the dataset.
## Actual results
This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location.
This code will reproduce the problem and produce the following count:
"258 instances contain mis-alignment between the answer text and answer index."
## Environment info
Steps to rebuild the Conda environment:
```
# create a virtual environment to stuff all these packages into
conda create -n round8 python=3.8 -y
# activate the virtual environment
conda activate round8
# install pytorch (best done through conda to handle cuda dependencies)
conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia
pip install jsonpickle transformers datasets matplotlib
```
OS: Ubuntu 20.04
Python 3.8
Result of `conda env export`:
```
name: round8
channels:
- pytorch-lts
- nvidia
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- blas=1.0=mkl
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2021.5.25=h06a4308_1
- certifi=2021.5.30=py38h06a4308_0
- cffi=1.14.5=py38h261ae71_0
- chardet=4.0.0=py38h06a4308_1003
- cryptography=3.4.7=py38hd23ed53_0
- cudatoolkit=11.1.74=h6bb024c_0
- ffmpeg=4.2.2=h20bf706_0
- freetype=2.10.4=h5ab3b9f_0
- gmp=6.2.1=h2531618_2
- gnutls=3.6.15=he1e5248_0
- idna=2.10=pyhd3eb1b0_0
- intel-openmp=2021.2.0=h06a4308_610
- jpeg=9b=h024ee3a_2
- lame=3.100=h7b6447c_0
- lcms2=2.12=h3be6417_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libidn2=2.3.1=h27cfd23_0
- libopus=1.3.1=h7b6447c_0
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libtasn1=4.16.0=h27cfd23_0
- libtiff=4.2.0=h85742a9_0
- libunistring=0.9.10=h27cfd23_0
- libuv=1.40.0=h7b6447c_0
- libvpx=1.7.0=h439df22_0
- libwebp-base=1.2.0=h27cfd23_0
- lz4-c=1.9.3=h2531618_0
- mkl=2021.2.0=h06a4308_296
- mkl-service=2.3.0=py38h27cfd23_1
- mkl_fft=1.3.0=py38h42c9631_2
- mkl_random=1.2.1=py38ha9443f7_2
- ncurses=6.2=he6710b0_1
- nettle=3.7.3=hbbd107a_1
- ninja=1.10.2=hff7bd54_1
- numpy=1.20.2=py38h2d18471_0
- numpy-base=1.20.2=py38hfae3a4d_0
- olefile=0.46=py_0
- openh264=2.1.0=hd408876_0
- openssl=1.1.1k=h27cfd23_0
- pillow=8.2.0=py38he98fc37_0
- pip=21.1.2=py38h06a4308_0
- pycparser=2.20=py_2
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.10=h12debd9_8
- pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0
- readline=8.1=h27cfd23_0
- requests=2.25.1=pyhd3eb1b0_0
- setuptools=52.0.0=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- sqlite=3.35.4=hdfb4753_0
- tk=8.6.10=hbc83047_0
- torchtext=0.9.1=py38
- torchvision=0.9.1=py38_cu111
- typing_extensions=3.7.4.3=pyha847dfd_0
- urllib3=1.26.4=pyhd3eb1b0_0
- wheel=0.36.2=pyhd3eb1b0_0
- x264=1!157.20191217=h7b6447c_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
- pip:
- click==8.0.1
- cycler==0.10.0
- datasets==1.8.0
- dill==0.3.4
- filelock==3.0.12
- fsspec==2021.6.0
- huggingface-hub==0.0.8
- joblib==1.0.1
- jsonpickle==2.0.0
- kiwisolver==1.3.1
- matplotlib==3.4.2
- multiprocess==0.70.12.2
- packaging==20.9
- pandas==1.2.4
- pyarrow==3.0.0
- pyparsing==2.4.7
- python-dateutil==2.8.1
- pytz==2021.1
- regex==2021.4.4
- sacremoses==0.0.45
- tokenizers==0.10.3
- tqdm==4.49.0
- transformers==4.6.1
- xxhash==2.0.2
prefix: /home/mmajurski/anaconda3/envs/round8
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2585/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2584
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2584/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2584/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2584/events
|
https://github.com/huggingface/datasets/pull/2584
| 936,049,736 |
MDExOlB1bGxSZXF1ZXN0NjgyODY2Njc1
| 2,584 |
wi_locness: reference latest leaderboard on codalab
|
{
"login": "aseifert",
"id": 4944799,
"node_id": "MDQ6VXNlcjQ5NDQ3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aseifert",
"html_url": "https://github.com/aseifert",
"followers_url": "https://api.github.com/users/aseifert/followers",
"following_url": "https://api.github.com/users/aseifert/following{/other_user}",
"gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseifert/subscriptions",
"organizations_url": "https://api.github.com/users/aseifert/orgs",
"repos_url": "https://api.github.com/users/aseifert/repos",
"events_url": "https://api.github.com/users/aseifert/events{/privacy}",
"received_events_url": "https://api.github.com/users/aseifert/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T20:26:22 | 2021-07-05T09:06:14 | 2021-07-05T09:06:14 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2584",
"html_url": "https://github.com/huggingface/datasets/pull/2584",
"diff_url": "https://github.com/huggingface/datasets/pull/2584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2584.patch",
"merged_at": "2021-07-05T09:06:14"
}
|
The dataset's author asked me to put this codalab link into the dataset's README.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2584/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2583
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2583/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2583/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2583/events
|
https://github.com/huggingface/datasets/issues/2583
| 936,034,976 |
MDU6SXNzdWU5MzYwMzQ5NzY=
| 2,583 |
Error iterating over IterableDataset using Torch DataLoader
|
{
"login": "LeenaShekhar",
"id": 12227436,
"node_id": "MDQ6VXNlcjEyMjI3NDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/12227436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeenaShekhar",
"html_url": "https://github.com/LeenaShekhar",
"followers_url": "https://api.github.com/users/LeenaShekhar/followers",
"following_url": "https://api.github.com/users/LeenaShekhar/following{/other_user}",
"gists_url": "https://api.github.com/users/LeenaShekhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeenaShekhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeenaShekhar/subscriptions",
"organizations_url": "https://api.github.com/users/LeenaShekhar/orgs",
"repos_url": "https://api.github.com/users/LeenaShekhar/repos",
"events_url": "https://api.github.com/users/LeenaShekhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeenaShekhar/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T19:55:58 | 2021-07-20T09:04:45 | 2021-07-05T23:48:23 |
NONE
| null | null |
## Describe the bug
I have an IterableDataset (created using streaming=True), and I am trying to create batches by passing it to the Torch DataLoader class. This throws the error pasted below. Doing the same with a Torch IterableDataset works. One thing I noticed is that in the former case dataloader.sampler is torch.utils.data.sampler.SequentialSampler, while in the latter it is torch.utils.data.dataloader._InfiniteConstantSampler.
I am not sure if this is how it is meant to be used, but that's what seemed reasonable to me.
## Steps to reproduce the bug
1. Does not work.
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
>>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)
>>> dataloader.sampler
<torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208>
>>> for batch in dataloader:
...     print(batch)
```
2. Works.
```python
import torch
from torch.utils.data import Dataset, IterableDataset, DataLoader


class CustomIterableDataset(IterableDataset):
    'Characterizes a dataset for PyTorch'

    def __init__(self, data):
        'Initialization'
        self.data = data

    def __iter__(self):
        return iter(self.data)


data = list(range(12))
dataset = CustomIterableDataset(data)
dataloader = DataLoader(dataset, batch_size=4)
print("dataloader: ", dataloader.sampler)
for batch in dataloader:
    print(batch)
```
## Expected results
To get batches of data with a batch size of 4. The output below is from the working example (2); the data source is different there, so the actual data differs.
```
dataloader: <torch.utils.data.dataloader._InfiniteConstantSampler object at 0x7f1cc29e2c50>
tensor([0, 1, 2, 3])
tensor([4, 5, 6, 7])
tensor([ 8, 9, 10, 11])
```
## Actual results
```
<torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208>
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 474, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
    for idx in self.sampler:
  File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 67, in __iter__
    return iter(range(len(self.data_source)))
TypeError: object of type 'IterableDataset' has no len()
```
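A workaround that reproduces the working case above is to wrap the streaming dataset in a `torch.utils.data.IterableDataset`, so the DataLoader falls back to `_InfiniteConstantSampler`. A sketch (the wrapper class is ad hoc):
```python
import torch
from datasets import load_dataset

class HFIterableWrapper(torch.utils.data.IterableDataset):
    """Ad-hoc adapter so DataLoader treats the stream as iterable-style."""
    def __init__(self, hf_dataset):
        self.hf_dataset = hf_dataset

    def __iter__(self):
        return iter(self.hf_dataset)

stream = load_dataset('oscar', 'unshuffled_deduplicated_en',
                      split='train', streaming=True)
dataloader = torch.utils.data.DataLoader(HFIterableWrapper(stream), batch_size=4)
```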
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: '1.8.1.dev0'
- Platform: Linux
- Python version: Python 3.6.8
- PyArrow version: '3.0.0'
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2583/timeline
|
completed
| false |
https://api.github.com/repos/huggingface/datasets/issues/2582
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2582/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2582/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2582/events
|
https://github.com/huggingface/datasets/pull/2582
| 935,859,104 |
MDExOlB1bGxSZXF1ZXN0NjgyNzAzNzg3
| 2,582 |
Add skip and take
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T15:10:19 | 2021-07-05T16:06:40 | 2021-07-05T16:06:39 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2582",
"html_url": "https://github.com/huggingface/datasets/pull/2582",
"diff_url": "https://github.com/huggingface/datasets/pull/2582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2582.patch",
"merged_at": "2021-07-05T16:06:39"
}
|
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544, I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets.
You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`.
One implementation detail:
Using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards; otherwise the taken examples could come from other shards. In this case only the shuffle buffer is used.
I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know the length of each shard in advance, we don't know which shards to take or skip.
I think this is ok though, since users can shuffle before doing take or skip. I mentioned this in the documentation.
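In short, the new methods are used like this:
```python
from datasets import load_dataset

stream = load_dataset('oscar', 'unshuffled_deduplicated_en',
                      split='train', streaming=True)
head = stream.take(1000)  # a new dataset with the first 1000 examples
tail = stream.skip(1000)  # a new dataset with everything after them
```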
cc @vblagoje @lewtun
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2582/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2582/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2581
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2581/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2581/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2581/events
|
https://github.com/huggingface/datasets/pull/2581
| 935,783,588 |
MDExOlB1bGxSZXF1ZXN0NjgyNjQwMDY4
| 2,581 |
Faster search_batch for ElasticsearchIndex due to threading
|
{
"login": "mwrzalik",
"id": 1376337,
"node_id": "MDQ6VXNlcjEzNzYzMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwrzalik",
"html_url": "https://github.com/mwrzalik",
"followers_url": "https://api.github.com/users/mwrzalik/followers",
"following_url": "https://api.github.com/users/mwrzalik/following{/other_user}",
"gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions",
"organizations_url": "https://api.github.com/users/mwrzalik/orgs",
"repos_url": "https://api.github.com/users/mwrzalik/repos",
"events_url": "https://api.github.com/users/mwrzalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwrzalik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T13:42:07 | 2021-07-12T14:13:46 | 2021-07-12T09:52:51 |
CONTRIBUTOR
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2581",
"html_url": "https://github.com/huggingface/datasets/pull/2581",
"diff_url": "https://github.com/huggingface/datasets/pull/2581.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2581.patch",
"merged_at": "2021-07-12T09:52:51"
}
|
Hey,
I think it makes sense to perform `search_batch` in threads, so ES can run the searches in parallel. A standalone sketch of the idea follows below.
Cheers!
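The gist, as a sketch — `index.search(query, k)` stands in for the real per-query Elasticsearch call:
```python
from concurrent.futures import ThreadPoolExecutor

def search_batch_threaded(index, queries, k=10):
    # Fan the queries out over a thread pool; each worker blocks on network
    # I/O, so Elasticsearch can serve the searches in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda query: index.search(query, k), queries))
```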
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2581/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2580
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2580/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2580/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2580/events
|
https://github.com/huggingface/datasets/pull/2580
| 935,767,421 |
MDExOlB1bGxSZXF1ZXN0NjgyNjI2MTkz
| 2,580 |
Fix Counter import
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T13:21:48 | 2021-07-02T14:37:47 | 2021-07-02T14:37:46 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2580",
"html_url": "https://github.com/huggingface/datasets/pull/2580",
"diff_url": "https://github.com/huggingface/datasets/pull/2580.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2580.patch",
"merged_at": "2021-07-02T14:37:46"
}
|
Import from `collections` instead of `typing`.
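The one-line fix:
```python
from collections import Counter  # previously: from typing import Counter
```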
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2580/timeline
| null | true |
https://api.github.com/repos/huggingface/datasets/issues/2579
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2579/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2579/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2579/events
|
https://github.com/huggingface/datasets/pull/2579
| 935,486,894 |
MDExOlB1bGxSZXF1ZXN0NjgyMzkyNjYx
| 2,579 |
Fix BibTeX entry
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?",
"Thanks for the help, @albertvillanova! All tests are passing now."
] | 2021-07-02T07:10:40 | 2021-07-02T07:33:44 | 2021-07-02T07:33:44 |
MEMBER
| false |
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2579",
"html_url": "https://github.com/huggingface/datasets/pull/2579",
"diff_url": "https://github.com/huggingface/datasets/pull/2579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2579.patch",
"merged_at": "2021-07-02T07:33:44"
}
|
Add missing contributor to BibTeX entry.
cc: @abhishekkrthakur @thomwolf
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2579/timeline
| null | true |