url (string, 62-66 chars) | repository_url (string, 1 class) | labels_url (string, 76-80 chars) | comments_url (string, 71-75 chars) | events_url (string, 69-73 chars) | html_url (string, 50-56 chars) | id (int64, 377M-2.15B) | node_id (string, 18-32 chars) | number (int64, 1-29.2k) | title (string, 1-487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k chars, nullable) | reactions (dict) | timeline_url (string, 71-75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4012/comments | https://api.github.com/repos/huggingface/transformers/issues/4012/events | https://github.com/huggingface/transformers/issues/4012 | 607,510,007 | MDU6SXNzdWU2MDc1MTAwMDc= | 4,012 | Attribute error while using run_language_modeling.py with Transformer-XL | {
"login": "TakLaszlo",
"id": 55270354,
"node_id": "MDQ6VXNlcjU1MjcwMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/55270354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TakLaszlo",
"html_url": "https://github.com/TakLaszlo",
"followers_url": "https://api.github.com/users/TakLaszlo/followers",
"following_url": "https://api.github.com/users/TakLaszlo/following{/other_user}",
"gists_url": "https://api.github.com/users/TakLaszlo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TakLaszlo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TakLaszlo/subscriptions",
"organizations_url": "https://api.github.com/users/TakLaszlo/orgs",
"repos_url": "https://api.github.com/users/TakLaszlo/repos",
"events_url": "https://api.github.com/users/TakLaszlo/events{/privacy}",
"received_events_url": "https://api.github.com/users/TakLaszlo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, sorry, it's been a while, but this was fixed in #4759 since."
] | 1,587 | 1,592 | 1,592 | NONE | null | python3 /home/username/.local/lib/python3.7/site-packages/transformers/transformers/examples/run_language_modeling.py \
--output_dir=_outpath_ \
--model_type=Transformer-XL \
--model_name_or_path=transfo-xl-wt103 \
--do_train \
--train_data_file=_pathtodataset_ \
--per_gpu_train_batch_size=1 \
--gradient_accumulation_steps=30 \
--train_epochs=50
Every time I run the above script I get the following error:
Traceback (most recent call last):
File "/content/Essay/run_language_modeling.py", line 284, in <module>
main()
File "/content/Essay/run_language_modeling.py", line 208, in main
model.resize_token_embeddings(len(tokenizer))
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 336, in resize_token_embeddings
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 351, in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 372, in _get_resized_embeddings
old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
**AttributeError: 'AdaptiveEmbedding' object has no attribute 'weight'**
| Component | Version |
|----|---|
| OS | Fedora 29 / Google Colab |
| Python | 3.6 |
| PyTorch | 1.4.0 / 1.5.0 |
The problem arises in all four of the cases listed above with the XL model, but not with other models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4012/timeline | completed | null | null |
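A note on the AttributeError above: Transformer-XL's input embedding is an `AdaptiveEmbedding` whose weights are split across several cluster-specific `nn.Embedding` chunks, so the generic resize path that reads a single `.weight` tensor has nothing to read. The sketch below illustrates this under the attribute names used by transformers 2.x (`transformer.word_emb`, `emb_layers` — these may differ in later releases) and side-steps the resize when the vocabulary has not actually changed, pending the proper fix referenced in the comment (#4759).

```python
# Sketch, assuming transformers 2.x attribute names; may differ later.
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")

emb = model.transformer.word_emb   # AdaptiveEmbedding, not a plain nn.Embedding
print(type(emb).__name__)          # -> AdaptiveEmbedding
print([layer.weight.shape for layer in emb.emb_layers])  # one weight per cluster

# Workaround pending the fix in #4759: only resize when the tokenizer grew,
# since the pretrained vocab already matches the model.
if len(tokenizer) != model.config.vocab_size:
    model.resize_token_embeddings(len(tokenizer))
```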
https://api.github.com/repos/huggingface/transformers/issues/4011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4011/comments | https://api.github.com/repos/huggingface/transformers/issues/4011/events | https://github.com/huggingface/transformers/issues/4011 | 607,465,369 | MDU6SXNzdWU2MDc0NjUzNjk= | 4,011 | Report minimum system requirements for each architecture. | {
"login": "ysig",
"id": 28439529,
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysig",
"html_url": "https://github.com/ysig",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"repos_url": "https://api.github.com/users/ysig/repos",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Hi,
It would be nice if you listed some minimum requirements for all of your models,
like the GPU memory (or the number of GPUs) needed for a batch size of 1 (or n_gpu).
This would make it much easier to try new stuff.
For example, what are the requirements for re-training or generating with CTRL?
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4011/timeline | completed | null | null |
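In the absence of published numbers, peak GPU memory for a given model and batch size can be measured empirically. A rough sketch (the CTRL checkpoint and toy batch are placeholders, and `reset_peak_memory_stats` assumes a reasonably recent PyTorch):

```python
# Sketch: measure peak GPU memory for one forward/backward step of a model.
import torch
from transformers import CTRLLMHeadModel, CTRLTokenizer

model = CTRLLMHeadModel.from_pretrained("ctrl").cuda()
tokenizer = CTRLTokenizer.from_pretrained("ctrl")

batch = tokenizer.encode("Links Hello world", return_tensors="pt").cuda()
torch.cuda.reset_peak_memory_stats()
loss = model(batch, labels=batch)[0]   # forward pass with LM loss
loss.backward()                        # backward pass allocates gradients too
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
```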
https://api.github.com/repos/huggingface/transformers/issues/4010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4010/comments | https://api.github.com/repos/huggingface/transformers/issues/4010/events | https://github.com/huggingface/transformers/issues/4010 | 607,397,798 | MDU6SXNzdWU2MDczOTc3OTg= | 4,010 | fairseq-preprocess: why are the numbers of rows in src and trg different? | {
"login": "DamonCC",
"id": 35246891,
"node_id": "MDQ6VXNlcjM1MjQ2ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/35246891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DamonCC",
"html_url": "https://github.com/DamonCC",
"followers_url": "https://api.github.com/users/DamonCC/followers",
"following_url": "https://api.github.com/users/DamonCC/following{/other_user}",
"gists_url": "https://api.github.com/users/DamonCC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DamonCC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DamonCC/subscriptions",
"organizations_url": "https://api.github.com/users/DamonCC/orgs",
"repos_url": "https://api.github.com/users/DamonCC/repos",
"events_url": "https://api.github.com/users/DamonCC/events{/privacy}",
"received_events_url": "https://api.github.com/users/DamonCC/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"have you fixed this problem?"
] | 1,587 | 1,680 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
After I used subword_nmt for BPE, fairseq-preprocess reported different numbers of lines for src and trg. I checked the data after BPE and the line counts of src and trg are equal, although trg has dozens of blank lines; the gap in the counts reported here is much larger than the number of blank lines.


**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4010/timeline | completed | null | null |
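A quick way to narrow down mismatches like the one reported above is to audit the BPE output directly, counting lines on each side and flagging pairs where either side is blank. A small self-contained sketch (the file names are placeholders):

```python
# Sketch: audit a parallel corpus for length mismatches and blank lines.
def audit(src_path, trg_path):
    with open(src_path, encoding="utf-8") as s, open(trg_path, encoding="utf-8") as t:
        src, trg = s.readlines(), t.readlines()
    print(f"src: {len(src)} lines, trg: {len(trg)} lines")
    for i, (a, b) in enumerate(zip(src, trg), start=1):
        if not a.strip() or not b.strip():
            print(f"blank side in pair at line {i}")

audit("train.bpe.src", "train.bpe.trg")
```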
https://api.github.com/repos/huggingface/transformers/issues/4009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4009/comments | https://api.github.com/repos/huggingface/transformers/issues/4009/events | https://github.com/huggingface/transformers/pull/4009 | 607,350,919 | MDExOlB1bGxSZXF1ZXN0NDA5MzQxOTU3 | 4,009 | Implemented lazy line-by-line text data set loading for LM example script | {
"login": "GCHQResearcher92457",
"id": 62057951,
"node_id": "MDQ6VXNlcjYyMDU3OTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/62057951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GCHQResearcher92457",
"html_url": "https://github.com/GCHQResearcher92457",
"followers_url": "https://api.github.com/users/GCHQResearcher92457/followers",
"following_url": "https://api.github.com/users/GCHQResearcher92457/following{/other_user}",
"gists_url": "https://api.github.com/users/GCHQResearcher92457/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GCHQResearcher92457/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GCHQResearcher92457/subscriptions",
"organizations_url": "https://api.github.com/users/GCHQResearcher92457/orgs",
"repos_url": "https://api.github.com/users/GCHQResearcher92457/repos",
"events_url": "https://api.github.com/users/GCHQResearcher92457/events{/privacy}",
"received_events_url": "https://api.github.com/users/GCHQResearcher92457/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Used this for training a model, worked great! Would love to see this integrated",
"@GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things?\r\n\r\nPS: Sorry about the super long review time:)",
"> @GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things?\r\n> \r\n> PS: Sorry about the super long review time:)\r\n\r\nSure. I think the GPT thing was a bit of rabbit hole. I added the hacks with pad tokens because I thought I'd introduced a problem with lazy loading, without realising that the problem was in fact already there with line-by-line.",
"> @GCHQResearcher92457 @BramVanroy Does it work for you if we tweak the PR on your fork's branch so that we can remove the force_pad_token option and update a few things?\r\n> \r\n> PS: Sorry about the super long review time:)\r\n\r\nYes, definitely seems lik a good way to go!",
"Hello everyone, I think this PR will be a huge addition to Transformers.\r\nIs there any plans to finish it soon?\r\nThanks!",
"> Hello everyone, I think this PR will be a huge addition to Transformers.\r\n> Is there any plans to finish it soon?\r\n> Thanks!\r\n\r\nThis is in the hands of @julien-c now, but I think he's on holiday at the moment.",
"Isn't this superseded by `huggingface/nlp` now? I'll let others chime in.",
"> Isn't this superseded by `huggingface/nlp` now? I'll let others chime in.\r\n\r\nAre all examples now fully using `nlp`? If so, then yes and this can be closed. But if the examples are still using the trainer/dataset of `transformers`, then this seems a separate issue.",
"I have no objection to merge this temporarily, if remarks from the comments are taken into accounts, merge conflicts handled and deprecated API (the data collator should implement `__call__` and `tokenizer.batch_encode_plus` should not be used, just the tokenizer `__call__`) replaced. That may be a lot of work for something that will eventually be handled by nlp though.\r\n\r\nMoving the examples to nlp is on my TODO for the near-future @BramVanroy, and I think @thomwolf is also planning on working on this.",
"When I try to run this code following the example [here ](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ltXgXyCbAJLY)I get the below error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"bla.py\", line 209, in <module>\r\n trainer.train()\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 492, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/tqdm/std.py\", line 1107, in __iter__\r\n for obj in iterable:\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 345, in __next__\r\n data = self._next_data()\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 385, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n return self.collate_fn(data)\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py\", line 83, in __call__\r\n inputs, labels = self.mask_tokens(batch)\r\n File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py\", line 113, in mask_tokens\r\n labels = inputs.clone()\r\nAttributeError: 'tuple' object has no attribute 'clone'\r\nEpoch: 0%| | 0/1 [00:23<?, ?it/s]Iteration: 0%| | 0/976243 [00:23<?, ?it/s]\r\n```\r\n",
"> When I try to run this code following the example [here ](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=ltXgXyCbAJLY)I get the below error:\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"bla.py\", line 209, in <module>\r\n> trainer.train()\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/trainer.py\", line 492, in train\r\n> for step, inputs in enumerate(epoch_iterator):\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/tqdm/std.py\", line 1107, in __iter__\r\n> for obj in iterable:\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 345, in __next__\r\n> data = self._next_data()\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py\", line 385, in _next_data\r\n> data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py\", line 47, in fetch\r\n> return self.collate_fn(data)\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py\", line 83, in __call__\r\n> inputs, labels = self.mask_tokens(batch)\r\n> File \"/home/edraff/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py\", line 113, in mask_tokens\r\n> labels = inputs.clone()\r\n> AttributeError: 'tuple' object has no attribute 'clone'\r\n> Epoch: 0%| | 0/1 [00:23<?, ?it/s]Iteration: 0%| | 0/976243 [00:23<?, ?it/s]\r\n> ```\r\n\r\nNot sure but I think this PR hasn't been updated to reflect recent changes.",
"Hi @GCHQResearcher92457 ,\r\n\r\nThanks for your great work. \r\nI am trying to use this lazy loading pre-training script to train a RoBERTa from scratch. \r\nI tested it many times. It works well when the training data less than 100 million lines. \r\n\r\nBut the script is always killed at `linecache.getline(...)`, if my training set is more than 100M lines (e.g., 1 billion). \r\nError is: \r\n```\r\ndied with <Signals.SIGKILL: 9>.\r\n```\r\nI checked my CPU and GPU usage, they are not full. I also changed size of `_get_n_lines(...)` function and the batch size. But it still doesn't work. I don't believe this is out of memory issues. \r\n\r\nI cloned your transformers repo and use the branch `lazy-text-dataset-loading-for-lm` to install transformers library.\r\n\r\nCould you please give me any idea about this problem?\r\n\r\nThanks,\r\nChiyu\r\n\r\nMore info:\r\nPython: 3.6.8\r\nTorch Version: 1.4.0\r\ntensorflow Version: 2.3.0\r\n\r\nI am also using distributed training to run the model. \r\n",
"@chiyuzhang94 You probably have a program killer running. This is a background process that monitors the memory usage of the individual processes. If the system is about to run out of memory, it will kill the abusive process. My hunch is that Colab uses something similar.\r\n\r\nThe high memory usage occurs because linecache reads as much of the file into memory as it can, to have the optimal experience. Not all OS's seem to like this - although I have not have had any issues with this approach on my systems.\r\n\r\nHere's a good article: https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9",
"> @chiyuzhang94 You probably have a program killer running. This is a background process that monitors the memory usage of the individual processes. If the system is about to run out of memory, it will kill the abusive process. My hunch is that Colab uses something similar.\r\n> \r\n> The high memory usage occurs because linecache reads as much of the file into memory as it can, to have the optimal experience. Not all OS's seem to like this - although I have not have had any issues with this approach on my systems.\r\n> \r\n> Here's a good article: https://dev.to/rrampage/surviving-the-linux-oom-killer-2ki9\r\n\r\nThanks, @BramVanroy.\r\n\r\nI think it is hard for me to change the `oom_score_adj` because I need to submit a job to PBS job to run model.\r\nI am wondering whether I can control the size of files that linecache reads. I think the size in function `def _get_n_lines(fin, size=65536):` is the controller. But it still doesn't work if I decrease the size. ",
"@chiyuzhang94 No, that function is not related to the caching. It is a function that very quickly can read through files to figure out how many lines there are in that file. The size is the chunks in bytes to read sequentially, which is much faster than reading line-per-line. But again, nothing to do with caching.\r\n\r\nOne option that I can think of, is allowing for an argument `max_memory_usage`, that will check at every `__getitem__` call the current memory usage (either system memory usage or current script memory usage), and if the memory usage is more than `max_memory_usage` the script should call `linecache.clearcache()`. This will be _slow_ when you have little memory or a low max value, but it should work.",
"> @chiyuzhang94 No, that function is not related to the caching. It is a function that very quickly can read through files to figure out how many lines there are in that file. The size is the chunks in bytes to read sequentially, which is much faster than reading line-per-line. But again, nothing to do with caching.\r\n> \r\n> One option that I can think of, is allowing for an argument `max_memory_usage`, that will check at every `__getitem__` call the current memory usage (either system memory usage or current script memory usage), and if the memory usage is more than `max_memory_usage` the script should call `linecache.clearcache()`. This will be _slow_ when you have little memory or a low max value, but it should work.\r\n\r\nThanks, @BramVanroy ,\r\n\r\nI tried your suggestion:\r\n``` \r\ndef __getitem__(self, idx):\r\n # Basic Memory checking from https://stackoverflow.com/a/48397534\r\n with open ('/proc/self/status') as f:\r\n memusage = f.read().split('VmRSS:')[1].split('\\n')[0][:-3]\r\n\r\n logger.info(\" memusage each time: %s\", memusage)\r\n # If our memory usage exceeds a limit flush the cache to prevent OOM situations\r\n if int(memusage.strip()) > self.max_memory_usage and self.max_memory_usage > 0:\r\n logger.info(\" memusage before: %s\", memusage)\r\n linecache.clearcache()\r\n logger.info(\" memusage after: %s\", memusage)\r\n\r\n # linecache starts counting from one, not zero, +1 the given index\r\n return linecache.getline(self.file_path, idx + 1).rstrip()\r\n```\r\n\r\nBut I found the linecache.clearcache() doesn't help based on the log. \r\n```\r\nIteration: 0%| | 0/1097530 [00:00<?, ?it/s]\u001b[AI0826 18:51:17.077926 47405170347712 \r\nI0826 18:51:17.080428 47405170347712 ARC_run_language_modeling_emohash.py:166] memusage before: \t38945572\r\nI0826 18:51:17.081127 47405170347712 ARC_run_language_modeling_emohash.py:169] memusage after: \t38945572\r\nI0826 18:51:27.666305 47348526792384 ARC_run_language_modeling_emohash.py:162] memusage each time: \t39182488\r\nI0826 18:51:27.670411 47348526792384 ARC_run_language_modeling_emohash.py:166] memusage before: \t39182488\r\nI0826 18:51:27.670989 47348526792384 ARC_run_language_modeling_emohash.py:169] memusage after: \t39182488\r\nI0826 18:51:43.620446 47109816241856 ARC_run_language_modeling_emohash.py:162] memusage each time: \t39184224\r\nI0826 18:51:43.620970 47109816241856 ARC_run_language_modeling_emohash.py:166] memusage before: \t39184224\r\nI0826 18:51:43.621682 47109816241856 ARC_run_language_modeling_emohash.py:169] memusage after: \t39184224\r\nI0826 18:51:49.295235 47667525713600 ARC_run_language_modeling_emohash.py:162] memusage each time: \t38993432\r\nI0826 18:51:49.295728 47667525713600 ARC_run_language_modeling_emohash.py:166] memusage before: \t38993432\r\nI0826 18:51:49.296677 47667525713600 ARC_run_language_modeling_emohash.py:169] memusage after: \t38993432\r\n```\r\nThen, the job was killed. \r\n\r\nI noticed I am using distributed training where each node has 4 GPUs. Since each of the 4 python threads eventually reads the entire file (90GB) into memory the dataset would take up over 360GB per node if they fully loaded the dataset. But each node only have 186GB RAM. \r\n\r\nDo you have any suggestion to limit the caching size?",
"any progress? @GCHQResearcher92457 ",
"Hi @BramVanroy @GCHQResearcher92457 ,\r\n\r\nI found a point that might be causing memory issues in the code (https://github.com/GCHQResearcher92457/transformers/blob/lazy-text-dataset-loading-for-lm/examples/run_language_modeling.py). \r\n\r\nIn the main function, the rank 1-3 threads will all stop at the barrier at line 770 and rank 0 will progress and load the model and vocab it will then hit line 825 and release the barrier. Once the barrier is released threads 1-3 will process the lines 770-825 (load model in the main function). Same for line 832-837 (load dataset). \r\n\r\nI have four GPUs at each node. Hence, the rank 1-3 load the model and dataset from disk individually instead of using a copy from rank 0. This leads to the OOM issue. \r\n\r\nI think the rank 1-3 threads should not run the line 832-837 again once the barrier released. But I added some log found: When a process hits a barrier is simply waits at that spot in the code until all other processes have hit a barrier. Then when it releases it continues from the point it is within the code, not jumping to the latest barrier.\r\n\r\nI tried to add an if condition at line 770. This only allows rank 0 to load the model. But I got a new error. That shows the variables are not synchronized across devices. Rank 1-3 cannot get variable `model`.\r\n\r\nDid you notice this issue? Do you have any suggestions?\r\n",
"@chiyuzhang94 I am not sure why the memory is not clearing after using clearcache. It might be that you still have to call the garbage collector after clearing the cache, you can try that.\r\n\r\nIt is true that I had not thought about multinode support so you will indeed have multiple in-memory caches for each process. I do not think it is easy to by-pass that, unless by turning around all of the code and instead working with a dedicated reading process, which is a separate process that fetches lines from the data file.\r\n\r\nAs has been said before, though, it is now recommended to switch over to https://github.com/huggingface/nlp which allows for on-disk datasets which are fast and have a low memory footprint.",
"@BramVanroy (For the sake of discussion) \r\n\r\nWouldn't it be reasonably easy to enable (non-cached) random access to the text file(s) by storing a list of the positions of `\"\\n\"` and then doing `fseek`s on the fly (ideally, using a sampler that yields batches of sequential lines, so that one batch needs only one file read)? ",
"> @BramVanroy (For the sake of discussion)\r\n> \r\n> Wouldn't it be reasonably easy to enable (non-cached) random access to the text file(s) by storing a list of the positions of `\"\\n\"` and then doing `fseek`s on the fly (ideally, using a sampler that yields batches of sequential lines, so that one batch needs only one file read)?\r\n\r\nShouldn't be too hard to implement indeed, although my fear is that this might not be fast enough from an IO perspective. That is perhaps the trade-off that one would want to make, though, so it might be worth it.\r\n\r\nYou'd still need to make sure that all data is actually used, so in a shuffle setting this might not be straightforward if you want batches of consistent size. Perhaps depending on the number of lines, you can create a list of indexes that have `batch_size` distance between them (e.g. 0, 64, 128, 256), and then shuffle those indexes and at each iteration select one randomly that has not been seen yet. Then select `batch_size` lines starting from that index. That, in combination with your suggestion of getting the positions of \\n should work indeed!\r\n\r\nI am not sure whether I want to put time into this, though, seeing that `nlp` is the preferred way to go.",
"> @chiyuzhang94 I am not sure why the memory is not clearing after using clearcache. It might be that you still have to call the garbage collector after clearing the cache, you can try that.\r\n> \r\n> It is true that I had not thought about multinode support so you will indeed have multiple in-memory caches for each process. I do not think it is easy to by-pass that, unless by turning around all of the code and instead working with a dedicated reading process, which is a separate process that fetches lines from the data file.\r\n> \r\n> As has been said before, though, it is now recommended to switch over to https://github.com/huggingface/nlp which allows for on-disk datasets which are fast and have a low memory footprint.\r\n\r\nHi @BramVanroy ,\r\n\r\nThanks for your suggestion.\r\n\r\nI looked at the `nlp` tool. \r\nI didn't find an example of loading a text file for LM pre-training. \r\nI adapted the dataset loading class like this:\r\n```\r\nclass DatasetNLP(Dataset):\r\n def __init__(self, filename, cache_dir, args):\r\n self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)[\"train\"][\"text\"]\r\n\r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n def __getitem__(self, index):\r\n line = self.dataset[index]\r\n return line\r\n```\r\n\r\nI am wondering whether this is the optimal way to use `nlp` with PyTorch dataloader. ",
"I used that approach in my way to train a LM (RoBERTa like) from scratch. I didn't modified the dataloader. It works for some iterations but it ends sooner than later with kind of CUBLAS ERROR",
"I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today).\r\n\r\nYour example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)[\"train\"][\"text\"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive.\r\n\r\nYou can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset",
"> > I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today).\r\n> > Your example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)[\"train\"][\"text\"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive.\r\n> > You can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset\r\n> \r\n> Hi, I was wondering is it possible to finish the lazydataloader today?\r\n> I am a little bit eager for this function.\r\n> I would really appreciate your help. Thanks!\r\n\r\nNo, that is not possible. You cannot expect a company to open-source a great product and at the same time implementing features within the day.\r\n\r\nAs said numerous times in this topic, try out the `nlp` repository instead. It will help you out with any memory issues that you might have.",
"> I'll start updating the examples to use the datasets library as soon as our new `nlp` release is out (probably today).\r\n> \r\n> Your example @chiyuzhang94 is ok but by doing `self.dataset = load_dataset('text', data_files= filename, cache_dir=cache_dir)[\"train\"][\"text\"]` you are loading all the dataset in RAM which is too bad because nlp can do memory mapping from drive.\r\n> \r\n> You can directly use the dataset in a data loader by using `set_format(type='torch')`. More information is here: https://huggingface.co/nlp/master/quicktour.html#formatting-the-dataset\r\n\r\nHi @thomwolf ,\r\nThanks for your suggestion. \r\n\r\nI tried to implement this to load my text file. This `test.txt` is a simple sample where each line is a sentence. \r\n```\r\ndataset = load_dataset('text', data_files='test.txt',cache_dir=\"./\")\r\ndataset.set_format(type='torch',columns=[\"text\"])\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8)\r\nnext(iter(dataloader))\r\n```\r\nBut dataload cannot yield sample and error is:\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-28-388aca337e2f> in <module>\r\n----> 1 next(iter(dataloader))\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 343 \r\n 344 def __next__(self):\r\n--> 345 data = self._next_data()\r\n 346 self._num_yielded += 1\r\n 347 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 383 def _next_data(self):\r\n 384 index = self._next_index() # may raise StopIteration\r\n--> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 386 if self._pin_memory:\r\n 387 data = _utils.pin_memory.pin_memory(data)\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\nKeyError: 0\r\n```\r\n\r\n`dataset.set_format(type='torch',columns=[\"text\"])` returns a log says: \r\n`Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.` \r\n\r\nI noticed the `dataset` is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`. \r\nEach sample can be accessed by `dataset[\"train\"][\"text\"]`. \r\n\r\nI don't know how to modify this code to load the text file. Could you please give me any suggestions? ",
"@chiyuzhang94 Can you please ask your question either [on the forums](https://discuss.huggingface.co/) or [on the respective repository](https://github.com/huggingface/nlp)? Your question is not a `transformers` question anymore, nor should PRs be used for general questions like this.",
"> @chiyuzhang94 Can you please ask your question either [on the forums](https://discuss.huggingface.co/) or [on the respective repository](https://github.com/huggingface/nlp)? Your question is not a `transformers` question anymore, nor should PRs be used for general questions like this.\r\n\r\nSure. Thanks for your investigation. I posted this question here: https://github.com/huggingface/datasets/issues/610#issue-698349388. @BramVanroy @thomwolf ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,605 | 1,605 | NONE | null | See PR #3388. Master changed substantially, requiring relocation of code into previously untouched files etc. Instead, here is a new PR using the same code but refactored to fit in to the new more modular structure of the scripts in `examples`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4009/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4009",
"html_url": "https://github.com/huggingface/transformers/pull/4009",
"diff_url": "https://github.com/huggingface/transformers/pull/4009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4009.patch",
"merged_at": null
} |
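For readers following the offset-index idea floated near the end of the thread: instead of `linecache`, which caches as much of the file as it can in every worker process, one can record each line's byte offset once and seek to lines on demand, keeping memory flat at the cost of one disk read per item. A minimal sketch:

```python
# Sketch: constant-memory random line access via a byte-offset index.
import torch

class SeekLineDataset(torch.utils.data.Dataset):
    def __init__(self, path):
        self.path = path
        self.offsets = []
        pos = 0
        with open(path, "rb") as f:
            for line in f:          # one linear pass to build the index
                self.offsets.append(pos)
                pos += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        # One seek + one read per item; nothing is cached between calls.
        with open(self.path, "rb") as f:
            f.seek(self.offsets[idx])
            return f.readline().decode("utf-8").rstrip("\n")
```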
https://api.github.com/repos/huggingface/transformers/issues/4008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4008/comments | https://api.github.com/repos/huggingface/transformers/issues/4008/events | https://github.com/huggingface/transformers/pull/4008 | 607,348,492 | MDExOlB1bGxSZXF1ZXN0NDA5MzM5OTcw | 4,008 | camembert-base-fquad | {
"login": "mdhoffschmidt",
"id": 6451662,
"node_id": "MDQ6VXNlcjY0NTE2NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6451662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mdhoffschmidt",
"html_url": "https://github.com/mdhoffschmidt",
"followers_url": "https://api.github.com/users/mdhoffschmidt/followers",
"following_url": "https://api.github.com/users/mdhoffschmidt/following{/other_user}",
"gists_url": "https://api.github.com/users/mdhoffschmidt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mdhoffschmidt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mdhoffschmidt/subscriptions",
"organizations_url": "https://api.github.com/users/mdhoffschmidt/orgs",
"repos_url": "https://api.github.com/users/mdhoffschmidt/repos",
"events_url": "https://api.github.com/users/mdhoffschmidt/events{/privacy}",
"received_events_url": "https://api.github.com/users/mdhoffschmidt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=h1) Report\n> Merging [#4008](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4008 +/- ##\n==========================================\n- Coverage 78.44% 77.48% -0.97% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14348 -179 \n- Misses 3991 4170 +179 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.19% <0.00%> (-2.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.44% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.95% <0.00%> (-1.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4008/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.20% <0.00%> (-0.70%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=footer). Last update [4e817ff...7012a06](https://codecov.io/gh/huggingface/transformers/pull/4008?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's awesome. Did you share this with the camembert team (@louismartin @benjamin-mlr et al)?",
"Thanks for the ping @julien-c, yes they shared their awesome work with us!\r\nAnd thanks @mdhoffschmidt for releasing your model :) ",
"Great ! Thanks @julien-c and @louismartin :) \r\n\r\n"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Model card for illuin release of camembert-base-fquad | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4008",
"html_url": "https://github.com/huggingface/transformers/pull/4008",
"diff_url": "https://github.com/huggingface/transformers/pull/4008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4008.patch",
"merged_at": 1588026596000
} |
https://api.github.com/repos/huggingface/transformers/issues/4007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4007/comments | https://api.github.com/repos/huggingface/transformers/issues/4007/events | https://github.com/huggingface/transformers/pull/4007 | 607,343,601 | MDExOlB1bGxSZXF1ZXN0NDA5MzM1OTYz | 4,007 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=h1) Report\n> Merging [#4007](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4007 +/- ##\n==========================================\n- Coverage 78.44% 78.44% -0.01% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14526 -1 \n- Misses 3991 3992 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4007/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=footer). Last update [4e817ff...0bb517d](https://codecov.io/gh/huggingface/transformers/pull/4007?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4007",
"html_url": "https://github.com/huggingface/transformers/pull/4007",
"diff_url": "https://github.com/huggingface/transformers/pull/4007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4007.patch",
"merged_at": 1588026467000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4006/comments | https://api.github.com/repos/huggingface/transformers/issues/4006/events | https://github.com/huggingface/transformers/pull/4006 | 607,335,940 | MDExOlB1bGxSZXF1ZXN0NDA5MzI5NzY4 | 4,006 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Nice!\r\n\r\n[model page](https://huggingface.co/mrm8488/bert-small-finetuned-typo-detection)",
"I plan to add a Colab to show the whole process: download, preprocess, train, eval and upload to HF"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4006",
"html_url": "https://github.com/huggingface/transformers/pull/4006",
"diff_url": "https://github.com/huggingface/transformers/pull/4006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4006.patch",
"merged_at": 1588026425000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4005/comments | https://api.github.com/repos/huggingface/transformers/issues/4005/events | https://github.com/huggingface/transformers/issues/4005 | 607,315,954 | MDU6SXNzdWU2MDczMTU5NTQ= | 4,005 | Fast Tokenizers do not work when `return_offsets_mapping=True, return_tensors="pt"` | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@GuillemGSubies I think this works with the lastest `master` version of Transformers and using `tokenizers` in version *0.7.0*:\r\n\r\n```python\r\nfrom transformers import BertTokenizerFast\r\nfast = BertTokenizerFast.from_pretrained(\"bert-base-cased\")\r\nfast.encode_plus(\"Hello I am tokenizing\", return_offsets_mapping=True, return_tensors=\"pt\")\r\n```\r\n\r\nOutputs:\r\n\r\n```bash\r\n{'input_ids': tensor([[ 101, 8667, 146, 1821, 22559, 4404, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]]), 'offset_mapping': tensor([[[ 0, 0],\r\n [ 0, 5],\r\n [ 6, 7],\r\n [ 8, 10],\r\n [11, 16],\r\n [16, 21],\r\n [ 0, 0]]])}\r\n```",
"Oh, I should have tried in master before opening the issue. I will wait for the next release then, thanks you."
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
When I try the following code:
```python
from transformers import BertTokenizerFast
fast = BertTokenizerFast.from_pretrained("bert-base-cased")
fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True)
```
It works as intended.
However, if I try:
```python
from transformers import BertTokenizerFast
fast = BertTokenizerFast.from_pretrained("bert-base-cased")
fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True, return_tensors="pt")
```
It throws the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-c6ee501a801c> in <module>
1 from transformers import BertTokenizer, BertTokenizerFast
2 fast = BertTokenizerFast.from_pretrained("bert-base-cased")
----> 3 fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True, return_tensors="pt")
~/.conda/envs/bertology/lib/python3.7/site-packages/transformers/tokenization_utils.py in encode_plus(self, text, text_pair, add_special_tokens, max_length, pad_to_max_length, stride, truncation_strategy, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, **kwargs)
1972 return_offsets_mapping=return_offsets_mapping,
1973 pad_to_max_length=pad_to_max_length,
-> 1974 **kwargs,
1975 )
1976
~/.conda/envs/bertology/lib/python3.7/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, **kwargs)
1926 stack = tf.stack(stack, axis=0)
1927 elif return_tensors == "pt":
-> 1928 stack = torch.stack(stack, dim=0)
1929 elif not return_tensors and len(stack) == 1:
1930 stack = stack[0]
TypeError: expected Tensor as element 0 in argument 0, but got list
```
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.4.0-174-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4005/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4005/timeline | completed | null | null |
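As the comments note, this crash no longer occurs on `master`; on transformers 2.8.0 itself a workaround is to request the offsets without `return_tensors` and tensorize the result by hand. A sketch:

```python
# Sketch: workaround for transformers 2.8.0 -- tensorize after encoding.
import torch
from transformers import BertTokenizerFast

fast = BertTokenizerFast.from_pretrained("bert-base-cased")
enc = fast.encode_plus("Hello I am tokenizing", return_offsets_mapping=True)

# input_ids etc. are flat int lists; offset_mapping is a list of
# (start, end) pairs, so wrapping everything in a batch dimension works.
batch = {key: torch.tensor([value]) for key, value in enc.items()}
print(batch["offset_mapping"].shape)  # torch.Size([1, seq_len, 2])
```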
https://api.github.com/repos/huggingface/transformers/issues/4004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4004/comments | https://api.github.com/repos/huggingface/transformers/issues/4004/events | https://github.com/huggingface/transformers/issues/4004 | 607,260,981 | MDU6SXNzdWU2MDcyNjA5ODE= | 4,004 | ALBERT with Masked Language Model Input Processing from SavedModel | {
"login": "zzj0402",
"id": 15345547,
"node_id": "MDQ6VXNlcjE1MzQ1NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/15345547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zzj0402",
"html_url": "https://github.com/zzj0402",
"followers_url": "https://api.github.com/users/zzj0402/followers",
"following_url": "https://api.github.com/users/zzj0402/following{/other_user}",
"gists_url": "https://api.github.com/users/zzj0402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zzj0402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zzj0402/subscriptions",
"organizations_url": "https://api.github.com/users/zzj0402/orgs",
"repos_url": "https://api.github.com/users/zzj0402/repos",
"events_url": "https://api.github.com/users/zzj0402/events{/privacy}",
"received_events_url": "https://api.github.com/users/zzj0402/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi! I have the similar problem too. Have you fixed it? If so, how?"
] | 1,587 | 1,594 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ALBERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
input_ids = tf.convert_to_tensor(input_ids, name='input_ids', dtype=tf.int32)
attention_mask = tf.convert_to_tensor(attention_mask, name='attention_mask', dtype=tf.int32)
position_ids = tf.convert_to_tensor(position, name='position_ids', dtype=tf.int32)
token_type_ids = tf.convert_to_tensor(type_emb, name='token_type_ids', dtype=tf.int32)
inputs = {'input_ids': input_ids, 'position_ids': position_ids, 'token_type_ids': token_type_ids, 'attention_mask': attention_mask}
print(inputs)
outputs = model(inputs, training=False)
```
The tasks I am working on are:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set the inputs
2. Run the above script
```
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (1 total):
* {'input_ids': <tf.Tensor 'inputs_1:0' shape=(512,) dtype=int32>, 'position_ids': <tf.Tensor 'inputs_2:0' shape=(512,) dtype=int32>, 'token_type_ids': <tf.Tensor 'inputs_3:0' shape=(512,) dtype=int32>, 'attention_mask': <tf.Tensor 'inputs:0' shape=(512,) dtype=int32>}
Keyword arguments: {'training': False}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (1 total):
* {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask'), 'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids')}
Keyword arguments: {'training': True}
Option 2:
Positional arguments (1 total):
* {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/attention_mask'), 'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/input_ids')}
Keyword arguments: {'training': True}
Option 3:
Positional arguments (1 total):
* {'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='input_ids'), 'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='attention_mask')}
Keyword arguments: {'training': False}
Option 4:
Positional arguments (1 total):
* {'position_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/position_ids'), 'token_type_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/attention_mask'), 'input_ids': TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs/input_ids')}
Keyword arguments: {'training': False}
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The model makes predictions on the inputs.
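Note: comparing the shapes in the error, the tensors passed in are rank-1 (`(512,)`), while every option in the saved signature expects a batch dimension (`(None, 512)`). A minimal sketch of a possible fix, assuming a batch of one:
```
import tensorflow as tf

# Add the missing batch dimension so every tensor has shape (1, 512)
inputs = {name: tf.expand_dims(tensor, axis=0) for name, tensor in inputs.items()}
outputs = model(inputs, training=False)
```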
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Ubuntu 16
- Python version: 3.7.7
- Tensorflow version (GPU?): 2.1.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4003/comments | https://api.github.com/repos/huggingface/transformers/issues/4003/events | https://github.com/huggingface/transformers/issues/4003 | 607,243,113 | MDU6SXNzdWU2MDcyNDMxMTM= | 4,003 | Maybe it is a bug for Roberta vocab file. | {
"login": "czheng17",
"id": 26885496,
"node_id": "MDQ6VXNlcjI2ODg1NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/26885496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czheng17",
"html_url": "https://github.com/czheng17",
"followers_url": "https://api.github.com/users/czheng17/followers",
"following_url": "https://api.github.com/users/czheng17/following{/other_user}",
"gists_url": "https://api.github.com/users/czheng17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czheng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czheng17/subscriptions",
"organizations_url": "https://api.github.com/users/czheng17/orgs",
"repos_url": "https://api.github.com/users/czheng17/repos",
"events_url": "https://api.github.com/users/czheng17/events{/privacy}",
"received_events_url": "https://api.github.com/users/czheng17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @czheng17,\r\n\r\nthat's because RoBERTa uses a byte-level BPE (like GPT-2), see section 4.4 in the original paper.\r\n\r\nA good explanation can be found here:\r\n\r\nhttps://github.com/pytorch/fairseq/issues/1716#issuecomment-588750983\r\n\r\n:)",
"Hi @stefan-it ,\r\n\r\nInteresting! Thank you for your help!\r\n\r\nBest,"
] | 1,587 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Hi Hugging Face team,
When I use RoBERTa, I found that many words in the vocab file you provide start with "Ġ". I just want to make sure whether this is a bug or not. Around 40,771 words start with "Ġ".
Roberta-base vocab file: https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json
Roberta-large vocab file:
https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json
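For context, RoBERTa uses a byte-level BPE (the same scheme as GPT-2), and `Ġ` is the byte-level encoding of a leading space, so `Ġworld` is simply ` world`. A quick check (a minimal sketch, assuming `roberta-base` can be downloaded):
```
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
print(tokenizer.tokenize("Hello world"))
# ['Hello', 'Ġworld'] : 'Ġ' marks that the token is preceded by a space
```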
Best Regards,
Chen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4003/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4002/comments | https://api.github.com/repos/huggingface/transformers/issues/4002/events | https://github.com/huggingface/transformers/issues/4002 | 607,219,357 | MDU6SXNzdWU2MDcyMTkzNTc= | 4,002 | 🌟 Transformer Lite | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
Part of the abstract:
> In this paper, we present an efficient mobile NLP architecture, Lite Transformer to
facilitate deploying mobile NLP applications on edge devices. [...] brings consistent improvement over the vanilla transformer on three well-established language
tasks: machine translation, abstractive summarization, and language modeling. [...] Notably, Lite Transformer outperforms the AutoML-based Evolved
Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly
architecture search that requires more than 250 GPU years
## Open source status
* [x] the model implementation is available: [Repository](https://github.com/mit-han-lab/lite-transformer)
* [x] the model weights are available: [In the README](https://github.com/mit-han-lab/lite-transformer#models)
* [x] who are the authors: @Michaelvll
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4002/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4002/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4001/comments | https://api.github.com/repos/huggingface/transformers/issues/4001/events | https://github.com/huggingface/transformers/issues/4001 | 607,217,231 | MDU6SXNzdWU2MDcyMTcyMzE= | 4,001 | TF BART ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale"
] | 1,587 | 1,603 | 1,603 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
BART is currently only available in PyTorch.
**Are you planning to release a TF version?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4001/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4000/comments | https://api.github.com/repos/huggingface/transformers/issues/4000/events | https://github.com/huggingface/transformers/pull/4000 | 607,210,813 | MDExOlB1bGxSZXF1ZXN0NDA5MjMyNzc5 | 4,000 | Adding --do_lower_case option | {
"login": "tkhs623",
"id": 17961973,
"node_id": "MDQ6VXNlcjE3OTYxOTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/17961973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tkhs623",
"html_url": "https://github.com/tkhs623",
"followers_url": "https://api.github.com/users/tkhs623/followers",
"following_url": "https://api.github.com/users/tkhs623/following{/other_user}",
"gists_url": "https://api.github.com/users/tkhs623/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tkhs623/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tkhs623/subscriptions",
"organizations_url": "https://api.github.com/users/tkhs623/orgs",
"repos_url": "https://api.github.com/users/tkhs623/repos",
"events_url": "https://api.github.com/users/tkhs623/events{/privacy}",
"received_events_url": "https://api.github.com/users/tkhs623/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=h1) Report\n> Merging [#4000](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4000 +/- ##\n==========================================\n- Coverage 78.44% 78.44% -0.01% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14526 -1 \n- Misses 3991 3992 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=footer). Last update [4e817ff...23041d2](https://codecov.io/gh/huggingface/transformers/pull/4000?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"There's no option to do `--do_lower_case` in the scripts anymore, the issue is that this particular tokenizer should have a `tokenizer_config.json` that always applies lowercasing.",
"The option `--do_lower_case` seems to be available in the [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L579-L581), currently (but not applied in the official readme).\r\nIt's my understanding that `--do_lower_case` option should be applied, in order to reproduce the official results described in readme (only uncased models' examples).\r\n\r\nOr do you mean that `--do_lower_case` should not be used in the current (or future) version?",
"Yes, it shouldn’t be used in current or future versions of the scripts (because it’s handled in tokenizer configuration)",
"In your case, you should apply it manually for now, while we fix the root issue",
"Thank you for your answers!\r\nI understand the issue and its solution, and so I will be closing this pull request."
] | 1,587 | 1,588 | 1,588 | NONE | null | In SQuAD 1.1 with the uncased BERT model, the **--do_lower_case** option must be used when initializing the tokenizer.
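A minimal sketch of applying it explicitly (an assumption: lowercasing is not yet handled by the tokenizer configuration):
```
from transformers import BertTokenizer

# Explicitly enable lowercasing for the uncased checkpoint
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
```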
Omitting this optional argument lowers performance on the development set by around 8-9 points in both EM and F1 (with the official settings described in the README but without this flag, we measured 72.09 EM / 81.84 F1). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4000/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4000",
"html_url": "https://github.com/huggingface/transformers/pull/4000",
"diff_url": "https://github.com/huggingface/transformers/pull/4000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4000.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3999/comments | https://api.github.com/repos/huggingface/transformers/issues/3999/events | https://github.com/huggingface/transformers/issues/3999 | 607,163,369 | MDU6SXNzdWU2MDcxNjMzNjk= | 3,999 | How do I convert T5 checkpoint to pytorch bin to utilize fine-tuned model from finetune.py? | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Sorry to answer this late - can you try the following:\r\n```python\r\nmodel.from_pretrained(<path_to_model.ckpt>, from_tf=True) #even though you don't use TF\r\nmodel.save_pretrained(\"./\") # should be in .bin format\r\n```\r\n\r\nIf this does not work try the following:\r\n```python \r\nparser = argparse.ArgumentParser()\r\nadd_generic_args(parser, os.getcwd())\r\nparser = SummarizationTrainer.add_model_specific_args(parser, os.getcwd())\r\nargs = parser.parse_args()\r\nmodel = SummarizationTrainer(args)\r\nmodel = model.load_from_checkpoint(<path_to_your_checkpoint>)\r\nmodel.model.save_pretrained(\"./\") # should be in bin format\r\n```\r\n\r\nPlease do let me know if this works.",
"Actually someone else found a much better solution than I did :-) \r\nSee: https://github.com/huggingface/transformers/issues/4144",
"Awesome this worked! Thanks so much @patrickvonplaten "
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | # ❓ Questions & Help
I've recently tried to fine-tune T5 using BART's [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/finetune.py) (it supports T5 fine-tuning as well). The model is saved in `ckpt` format, but I want it as a PyTorch `bin`, so I found this [page](https://huggingface.co/transformers/converting_tensorflow_models.html) in the docs on how to convert `ckpt` to `bin`. However, I don't see the T5 model on that page.
Is it possible to use the CLI to convert T5 models?
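Note that the linked docs page covers converting TensorFlow checkpoints, whereas `finetune.py` saves a PyTorch Lightning checkpoint that already contains PyTorch weights. A minimal extraction sketch (assumptions: the Lightning module stores the Hugging Face model as `self.model`, and `t5-base` was the starting checkpoint):
```
import torch
from transformers import T5ForConditionalGeneration

ckpt = torch.load("checkpoint.ckpt", map_location="cpu")
# Strip the Lightning "model." prefix to recover the Hugging Face state dict
state_dict = {k[len("model."):]: v for k, v in ckpt["state_dict"].items() if k.startswith("model.")}

model = T5ForConditionalGeneration.from_pretrained("t5-base", state_dict=state_dict)
model.save_pretrained("./t5-finetuned")  # writes pytorch_model.bin + config.json
```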
I also have questions around fine-tuning T5 but will discuss details in a separate issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3999/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3998/comments | https://api.github.com/repos/huggingface/transformers/issues/3998/events | https://github.com/huggingface/transformers/issues/3998 | 607,150,434 | MDU6SXNzdWU2MDcxNTA0MzQ= | 3,998 | Question about the output linear weight | {
"login": "snakeztc",
"id": 1688939,
"node_id": "MDQ6VXNlcjE2ODg5Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1688939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snakeztc",
"html_url": "https://github.com/snakeztc",
"followers_url": "https://api.github.com/users/snakeztc/followers",
"following_url": "https://api.github.com/users/snakeztc/following{/other_user}",
"gists_url": "https://api.github.com/users/snakeztc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snakeztc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snakeztc/subscriptions",
"organizations_url": "https://api.github.com/users/snakeztc/orgs",
"repos_url": "https://api.github.com/users/snakeztc/repos",
"events_url": "https://api.github.com/users/snakeztc/events{/privacy}",
"received_events_url": "https://api.github.com/users/snakeztc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | I noticed that the output linear weight of BertForMaskedLM is the same as the input embedding. I don't see anywhere in the original paper that the output projection is tied to the word embeddings. Can you please explain this design choice?
To reproduce:
```
import transformers

m = transformers.BertForMaskedLM.from_pretrained('bert-base-uncased')
# The difference is all zeros: the two matrices hold identical values
m.get_input_embeddings().weight - m.get_output_embeddings().weight
```
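For what it's worth, this looks like standard weight tying: the two weights are the same `Parameter` (shared storage), not merely equal values, which can be checked with the `m` from the snippet above:
```
# True if both point at the same underlying storage, i.e. the weights are tied
print(m.get_input_embeddings().weight.data_ptr() == m.get_output_embeddings().weight.data_ptr())
```
The original BERT implementation also reuses the embedding table for the masked-LM output layer, so this mirrors the reference code.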
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3998/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3997/comments | https://api.github.com/repos/huggingface/transformers/issues/3997/events | https://github.com/huggingface/transformers/issues/3997 | 607,142,840 | MDU6SXNzdWU2MDcxNDI4NDA= | 3,997 | Toy size models versions for faster experimentation and testing | {
"login": "obsh",
"id": 1974420,
"node_id": "MDQ6VXNlcjE5NzQ0MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1974420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/obsh",
"html_url": "https://github.com/obsh",
"followers_url": "https://api.github.com/users/obsh/followers",
"following_url": "https://api.github.com/users/obsh/following{/other_user}",
"gists_url": "https://api.github.com/users/obsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/obsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/obsh/subscriptions",
"organizations_url": "https://api.github.com/users/obsh/orgs",
"repos_url": "https://api.github.com/users/obsh/repos",
"events_url": "https://api.github.com/users/obsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/obsh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🚀 Feature request
To have small ("toy") versions of models, in terms of number of parameters, e.g.:
- `gpt2-toy`
- `bert-toy-uncased`
etc.
Alternative: to have a separate model type `toy-transformer`.
## Motivation
While working on a project with the transformers package, our team noticed that team members on machines without a GPU found it difficult to experiment with code using the GPT-2 model and to run its tests. Running the tests also required significant resources in our CI environment.
We ended up implementing a custom configuration for the GPT-2 model, with a resulting size of ~10M parameters, for local experimentation and testing.
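For illustration, a minimal sketch of that kind of tiny configuration (hypothetical values; the exact sizes were a project-specific choice):
```
from transformers import GPT2Config, GPT2LMHeadModel

# Hypothetical "toy" configuration: most remaining parameters sit in the
# vocabulary embedding, so the model ends up at only a few million weights.
config = GPT2Config(n_embd=128, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)
print(sum(p.numel() for p in model.parameters()))  # roughly 7M parameters
```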
I think it would benefit the community to have tiny model versions in the package, which would lower the hardware requirements for getting started and for running tests.
If it's unreasonable to implement a tiny version of every model, having a separate model type that implements the same interfaces and has the corresponding heads would also achieve the same objective.
## Your contribution
If either of the options sounds reasonable I can participate in the implementation.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3997/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3997/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3996/comments | https://api.github.com/repos/huggingface/transformers/issues/3996/events | https://github.com/huggingface/transformers/pull/3996 | 607,125,019 | MDExOlB1bGxSZXF1ZXN0NDA5MTY2MzY1 | 3,996 | [DialoGPT] add dialogpt training tips | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3996/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3996/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3996",
"html_url": "https://github.com/huggingface/transformers/pull/3996",
"diff_url": "https://github.com/huggingface/transformers/pull/3996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3996.patch",
"merged_at": 1588077151000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3995/comments | https://api.github.com/repos/huggingface/transformers/issues/3995/events | https://github.com/huggingface/transformers/pull/3995 | 607,119,317 | MDExOlB1bGxSZXF1ZXN0NDA5MTYyMjQw | 3,995 | [model_cards] Add model card for Hindi-Bert | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"[Model page](https://huggingface.co/monsoon-nlp/hindi-bert)"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Trained with Google ELECTRA and Hindi Corpus (OSCAR CommonCrawl and latest Hindi Wikipedia)
The README offers more details and links to notebooks showing pretraining and fine-tuning on movie reviews. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3995/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3995",
"html_url": "https://github.com/huggingface/transformers/pull/3995",
"diff_url": "https://github.com/huggingface/transformers/pull/3995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3995.patch",
"merged_at": 1588026317000
} |
https://api.github.com/repos/huggingface/transformers/issues/3994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3994/comments | https://api.github.com/repos/huggingface/transformers/issues/3994/events | https://github.com/huggingface/transformers/pull/3994 | 607,114,724 | MDExOlB1bGxSZXF1ZXN0NDA5MTU4ODU2 | 3,994 | Allow a more backward compatible behavior of max_len_single_sentence and max_len_sentences_pair | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | MEMBER | null | Since #3706, `max_len_single_sentence` and `max_len_sentences_pair` are now automatically set up in the base class.
This PR allows the user to try to set `max_len_single_sentence` and `max_len_sentences_pair`. If the value is the same as the pre-computed one, we display a deprecation warning (to avoid breaking old code in this case). If the value is different, we raise an explicit error message.
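A self-contained sketch of the intended behavior (hypothetical code, paraphrasing the change rather than quoting it):
```
import warnings

class TokenizerSketch:
    """Illustrates the backward-compatible setter described above."""

    max_len = 512

    def num_special_tokens_to_add(self, pair=False):
        return 2  # e.g. [CLS] and [SEP] around a single sentence

    @property
    def max_len_single_sentence(self):
        return self.max_len - self.num_special_tokens_to_add(pair=False)

    @max_len_single_sentence.setter
    def max_len_single_sentence(self, value):
        if value == self.max_len - self.num_special_tokens_to_add(pair=False):
            warnings.warn(
                "Setting 'max_len_single_sentence' is deprecated; the value is computed automatically.",
                DeprecationWarning,
            )
        else:
            raise ValueError("'max_len_single_sentence' is computed automatically and cannot be changed.")
```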
cc @HendrikStrobelt | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3994",
"html_url": "https://github.com/huggingface/transformers/pull/3994",
"diff_url": "https://github.com/huggingface/transformers/pull/3994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3994.patch",
"merged_at": 1588115639000
} |
https://api.github.com/repos/huggingface/transformers/issues/3993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3993/comments | https://api.github.com/repos/huggingface/transformers/issues/3993/events | https://github.com/huggingface/transformers/pull/3993 | 607,107,286 | MDExOlB1bGxSZXF1ZXN0NDA5MTUzNDMz | 3,993 | [Generation] Generation should allow to start with empty prompt | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | MEMBER | null | This PR allows generation from an empty input. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3993/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3993/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3993",
"html_url": "https://github.com/huggingface/transformers/pull/3993",
"diff_url": "https://github.com/huggingface/transformers/pull/3993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3993.patch",
"merged_at": 1588077196000
} |
https://api.github.com/repos/huggingface/transformers/issues/3992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3992/comments | https://api.github.com/repos/huggingface/transformers/issues/3992/events | https://github.com/huggingface/transformers/issues/3992 | 607,102,577 | MDU6SXNzdWU2MDcxMDI1Nzc= | 3,992 | RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1549287501208/work/aten/src/THC/THCTensorRandom.cu:35 | {
"login": "drjosephliu",
"id": 22230085,
"node_id": "MDQ6VXNlcjIyMjMwMDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22230085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drjosephliu",
"html_url": "https://github.com/drjosephliu",
"followers_url": "https://api.github.com/users/drjosephliu/followers",
"following_url": "https://api.github.com/users/drjosephliu/following{/other_user}",
"gists_url": "https://api.github.com/users/drjosephliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drjosephliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drjosephliu/subscriptions",
"organizations_url": "https://api.github.com/users/drjosephliu/orgs",
"repos_url": "https://api.github.com/users/drjosephliu/repos",
"events_url": "https://api.github.com/users/drjosephliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/drjosephliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am performing multi-class (50 classes) classification on the Project Gutenberg dataset. The maximum text length is 1620 tokens, so I set the max length to 2048. I'm also padding the texts with 0s.
```
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences

MAX_LEN = 2048

def get_encodings(texts):
    token_ids = []
    for text in texts:
        token_id = tokenizer.encode(text, add_special_tokens=True, max_length=2048)
        token_ids.append(token_id)
    return token_ids

def pad_encodings(encodings):
    return pad_sequences(encodings, maxlen=MAX_LEN, dtype="long",
                         value=0, truncating="post", padding="post")

def get_attention_masks(padded_encodings):
    attention_masks = []
    for encoding in padded_encodings:
        attention_mask = [int(token_id > 0) for token_id in encoding]
        attention_masks.append(attention_mask)
    return attention_masks

X_train = torch.tensor(train_encodings)
y_train = torch.tensor(train_df.author_id.values)
train_masks = torch.tensor(train_attention_masks)

X_test = torch.tensor(test_encodings)
y_test = torch.tensor(test_df.author_id.values)
test_masks = torch.tensor(test_attention_masks)

batch_size = 32

# Create the DataLoader for our training set.
train_data = TensorDataset(X_train, train_masks, y_train)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)

validation_data = TensorDataset(X_test, test_masks, y_test)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
```
My model is set up like so:
```
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=50,  # 50 author classes
    output_attentions=False,
    output_hidden_states=False,
)
```
During training, however, the line indicated below throws the error:
```
for step, batch in enumerate(train_dataloader):
    b_texts = batch[0].to(device)
    b_attention_masks = batch[1].to(device)
    b_authors = batch[2].to(device)  # <---- ERROR
```
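A likely cause (an assumption from the shapes above, not a confirmed diagnosis): `bert-base-uncased` only has 512 position embeddings, so `MAX_LEN = 2048` produces out-of-range position indices on the GPU, and the resulting device-side assert can surface later as this MTGP error. A quick check:
```
from transformers import BertConfig

config = BertConfig.from_pretrained("bert-base-uncased")
print(config.max_position_embeddings)  # 512 : inputs longer than this fail on the GPU
```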
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Ubuntu 16.04
- Python version:
- PyTorch version (GPU?): 1.0.0
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes. P4000
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3992/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3991/comments | https://api.github.com/repos/huggingface/transformers/issues/3991/events | https://github.com/huggingface/transformers/pull/3991 | 607,091,868 | MDExOlB1bGxSZXF1ZXN0NDA5MTQyMzI5 | 3,991 | Fix the typos | {
"login": "airsplay",
"id": 2796554,
"node_id": "MDQ6VXNlcjI3OTY1NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2796554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airsplay",
"html_url": "https://github.com/airsplay",
"followers_url": "https://api.github.com/users/airsplay/followers",
"following_url": "https://api.github.com/users/airsplay/following{/other_user}",
"gists_url": "https://api.github.com/users/airsplay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/airsplay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airsplay/subscriptions",
"organizations_url": "https://api.github.com/users/airsplay/orgs",
"repos_url": "https://api.github.com/users/airsplay/repos",
"events_url": "https://api.github.com/users/airsplay/events{/privacy}",
"received_events_url": "https://api.github.com/users/airsplay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's not a typo :)\r\n\r\nhttps://en.wikipedia.org/wiki/Interval_(mathematics)",
"Ohhh! I am not aware of this notation before. (only [X, Y) is in my knowledge-base). Thanks for the clarification."
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Fix the typos of "[0.0, 1.0[" --> "[0.0, 1.0]". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3991",
"html_url": "https://github.com/huggingface/transformers/pull/3991",
"diff_url": "https://github.com/huggingface/transformers/pull/3991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3991.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3990/comments | https://api.github.com/repos/huggingface/transformers/issues/3990/events | https://github.com/huggingface/transformers/issues/3990 | 607,085,934 | MDU6SXNzdWU2MDcwODU5MzQ= | 3,990 | Language generation not possible with roberta? | {
"login": "ysig",
"id": 28439529,
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysig",
"html_url": "https://github.com/ysig",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"repos_url": "https://api.github.com/users/ysig/repos",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"`Roberta` has not really been trained on generating text. Small models that work well for generation are `distilgpt2` and `gpt2` for example."
] | 1,587 | 1,588 | 1,588 | NONE | null | Hi,
I have a small question.
I fine-tuned a RoBERTa model and, when I was about to run generation, I realized that this is not supported for my model, only for a smaller set of models, most of which are quite large.
Why is that? Is there another way to generate language with RoBERTa?
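(For reference, a minimal sketch of generation with a model that does support it, e.g. `distilgpt2`:)
```
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelWithLMHead.from_pretrained("distilgpt2")

input_ids = tokenizer.encode("Hello, my dog is", return_tensors="pt")
output = model.generate(input_ids, max_length=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```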
Thanks in advance!
PS: the scripts I used were: `run_language_modeling.py` and `run_generation.py` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3989/comments | https://api.github.com/repos/huggingface/transformers/issues/3989/events | https://github.com/huggingface/transformers/pull/3989 | 607,063,705 | MDExOlB1bGxSZXF1ZXN0NDA5MTIxMjQy | 3,989 | Model cards for KoELECTRA | {
"login": "monologg",
"id": 28896432,
"node_id": "MDQ6VXNlcjI4ODk2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28896432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monologg",
"html_url": "https://github.com/monologg",
"followers_url": "https://api.github.com/users/monologg/followers",
"following_url": "https://api.github.com/users/monologg/following{/other_user}",
"gists_url": "https://api.github.com/users/monologg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monologg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monologg/subscriptions",
"organizations_url": "https://api.github.com/users/monologg/orgs",
"repos_url": "https://api.github.com/users/monologg/repos",
"events_url": "https://api.github.com/users/monologg/events{/privacy}",
"received_events_url": "https://api.github.com/users/monologg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"That's great! Model pages: \r\nhttps://huggingface.co/monologg/koelectra-base-generator\r\nhttps://huggingface.co/monologg/koelectra-base-discriminator\r\n\r\ncc'ing @LysandreJik and @clarkkev for information"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Hi:)
I've recently uploaded `KoELECTRA`, a pretrained ELECTRA model for Korean :)
Thanks for supporting this great library and S3 storage:)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3989",
"html_url": "https://github.com/huggingface/transformers/pull/3989",
"diff_url": "https://github.com/huggingface/transformers/pull/3989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3989.patch",
"merged_at": 1588026062000
} |
https://api.github.com/repos/huggingface/transformers/issues/3988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3988/comments | https://api.github.com/repos/huggingface/transformers/issues/3988/events | https://github.com/huggingface/transformers/pull/3988 | 607,054,720 | MDExOlB1bGxSZXF1ZXN0NDA5MTE0NDA3 | 3,988 | [Trainer] Add more schedules to trainer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=h1) Report\n> Merging [#3988](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3988 +/- ##\n==========================================\n- Coverage 78.45% 78.40% -0.06% \n==========================================\n Files 111 111 \n Lines 18521 18533 +12 \n==========================================\n- Hits 14531 14530 -1 \n- Misses 3990 4003 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `42.42% <9.09%> (-1.18%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `88.40% <100.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `91.89% <0.00%> (-8.11%)` | :arrow_down: |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `93.87% <0.00%> (-2.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=footer). Last update [cb3c221...851b8f3](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'd rather let the user pass an optimizer and a scheduler in the Trainer's `__init__` and only keep the default one (which was the one used in all the scripts) in the implementation. Otherwise there's really a combinatorial explosion of possibilities.\r\n\r\nWhat do you think?",
"Yeah it's true that this could lead to a \"combinatorial explosion\" but I don't really mind that because:\r\n1. If we set good default values (like the beta ones for Adam), and have a good description I don't feel like the user is affected by more and more parameters\r\n2. I think if a user does want to change very special params (like the beta ones), then it's very convenient to be able to do it directly\r\n\r\nWhat I already started in this PR which can become messy is that there are now parameters that are only relevant if other parameters are set in a certain way `num_cycles` is only relevant if there is a cosine scheduler. And it's true that the trainer class could become messier this way as well.\r\n\r\nIn the end, for me the use case of the trainer class is more important I guess. If the idea is to have a very easy and fast way to train a model, then I would not mind introducing a bunch of parameters with good default values and description. If instead it should be a bit more \"low-level\" and the `run_language_modeling` script should wrap this class into a faster interface then it might be better to keep clean. ",
"Closing this due to cafa6a9e29f3e99c67a1028f8ca779d439bc0689"
] | 1,587 | 1,588 | 1,588 | MEMBER | null | Adds all available schedules to the trainer, and also adds beta1 and beta2 for Adam to the args.
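For context, the schedule helpers this PR wires into the Trainer already exist as standalone functions; a minimal sketch of using one directly (the toy parameter stands in for model.parameters(), and `num_cycles` is the cosine-specific knob discussed in the comments above):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model.parameters()
optimizer = torch.optim.AdamW(params, lr=5e-5, betas=(0.9, 0.999))
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000, num_cycles=0.5
)
```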
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3988/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3988",
"html_url": "https://github.com/huggingface/transformers/pull/3988",
"diff_url": "https://github.com/huggingface/transformers/pull/3988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3988.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3987/comments | https://api.github.com/repos/huggingface/transformers/issues/3987/events | https://github.com/huggingface/transformers/pull/3987 | 607,012,610 | MDExOlB1bGxSZXF1ZXN0NDA5MDgzNzgx | 3,987 | fix output_dir / tokenizer_name confusion | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, this isn't correct. This was consistent with other scripts (before they were ported to the new Trainer in #3800) where we always saved the model and its tokenizer before re-loading it for eval \r\n\r\nSo I think that code is currently right. (but again, should be re-written to Trainer pretty soon)"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Fixing #3950
It seems that `output_dir` has been used in place of `tokenizer_name` in `run_xnli.py`. I have corrected this typo.
I am not that familiar with the code style in this repo; in a lot of instances I used:
`args.tokenizer_name if args.tokenizer_name else args.model_name_or_path`
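For illustration, a sketch of both patterns (a hypothetical rearrangement using `AutoTokenizer`; `run_xnli.py` itself resolves tokenizer classes differently):

```python
from argparse import Namespace
from transformers import AutoTokenizer

args = Namespace(tokenizer_name="", model_name_or_path="bert-base-cased")

# Option 1: repeat the fallback at every call site, as the script does now.
tokenizer = AutoTokenizer.from_pretrained(
    args.tokenizer_name if args.tokenizer_name else args.model_name_or_path
)

# Option 2: normalize the argument once, then use the field directly below.
args.tokenizer_name = args.tokenizer_name or args.model_name_or_path
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name)
```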
Instead, as in the second option sketched above, I believe I could have set `args.tokenizer_name = args.tokenizer_name if args.tokenizer_name else args.model_name_or_path` somewhere at the start of the script, so that the code below could be cleaner. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3987",
"html_url": "https://github.com/huggingface/transformers/pull/3987",
"diff_url": "https://github.com/huggingface/transformers/pull/3987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3987.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3986/comments | https://api.github.com/repos/huggingface/transformers/issues/3986/events | https://github.com/huggingface/transformers/issues/3986 | 607,011,665 | MDU6SXNzdWU2MDcwMTE2NjU= | 3,986 | Run Multiple Choice failure with ImportError: cannot import name 'AutoModelForMultipleChoice' | {
"login": "leixiaofeng-astar",
"id": 61817895,
"node_id": "MDQ6VXNlcjYxODE3ODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/61817895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leixiaofeng-astar",
"html_url": "https://github.com/leixiaofeng-astar",
"followers_url": "https://api.github.com/users/leixiaofeng-astar/followers",
"following_url": "https://api.github.com/users/leixiaofeng-astar/following{/other_user}",
"gists_url": "https://api.github.com/users/leixiaofeng-astar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leixiaofeng-astar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leixiaofeng-astar/subscriptions",
"organizations_url": "https://api.github.com/users/leixiaofeng-astar/orgs",
"repos_url": "https://api.github.com/users/leixiaofeng-astar/repos",
"events_url": "https://api.github.com/users/leixiaofeng-astar/events{/privacy}",
"received_events_url": "https://api.github.com/users/leixiaofeng-astar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to install transformers from source as specified in the README"
] | 1,587 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Run the example; it returns the error below. The version from April 17 doesn't have this problem.
python ./examples/run_multiple_choice.py \
> --task_name swag \
> --model_name_or_path roberta-base \
> --do_train \
> --do_eval \
> --data_dir $SWAG_DIR \
> --learning_rate 5e-5 \
> --num_train_epochs 3 \
> --max_seq_length 80 \
> --output_dir models_bert/swag_base \
> --per_gpu_eval_batch_size=16 \
> --per_gpu_train_batch_size=16 \
> --gradient_accumulation_steps 2 \
> --overwrite_output
Traceback (most recent call last):
File "./examples/run_multiple_choice.py", line 26, in <module>
from transformers import (
ImportError: cannot import name 'AutoModelForMultipleChoice'
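(As the comment above notes, `AutoModelForMultipleChoice` only existed on master at the time, so it is missing from the installed release. A quick sanity check, sketched here rather than taken from the issue:)

```python
# Check whether the installed release already ships the missing class.
import transformers

print(transformers.__version__)
print(hasattr(transformers, "AutoModelForMultipleChoice"))  # False on releases that predate it
```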
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Follow the exact steps in README.md
1. Download the transformer with git clone
2. git clone https://github.com/rowanz/swagaf.git
3. export SWAG_DIR=/path/to/swag_data_dir
python ./examples/run_multiple_choice.py \
--task_name swag \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_gpu_eval_batch_size=16 \
--per_gpu_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
## Expected behavior
***** Eval results *****
eval_acc = 0.8338998300509847
eval_loss = 0.44457291918821606
## Environment info
- `transformers` version:
- Platform: ubuntu 18.04
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3985/comments | https://api.github.com/repos/huggingface/transformers/issues/3985/events | https://github.com/huggingface/transformers/issues/3985 | 606,998,741 | MDU6SXNzdWU2MDY5OTg3NDE= | 3,985 | Using the T5 model with huggingface's mask-fill pipeline | {
"login": "p-christ",
"id": 26346243,
"node_id": "MDQ6VXNlcjI2MzQ2MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-christ",
"html_url": "https://github.com/p-christ",
"followers_url": "https://api.github.com/users/p-christ/followers",
"following_url": "https://api.github.com/users/p-christ/following{/other_user}",
"gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-christ/subscriptions",
"organizations_url": "https://api.github.com/users/p-christ/orgs",
"repos_url": "https://api.github.com/users/p-christ/repos",
"events_url": "https://api.github.com/users/p-christ/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-christ/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Correct me if I'm wrong @patrickvonplaten, but I don't think T5 is trained on masked language modeling (and does not have a mask token) so will not work with this pipeline.",
"Yeah, `T5` is not trained on the conventional \"Bert-like\" masked language modeling objective. It does a special encoder-decoder masked language modeling (see docs [here](https://huggingface.co/transformers/model_doc/t5.html#training)), but this is not really supported in combination with the `mask-fill` pipeline at the moment.",
"Hi @patrickvonplaten, is there any plan to support `T5` with the `mask-fill` pipeline in the near future?",
"`T5` is an encoder-decoder model so I don't really see it as a fitting model for the `mask-fill` task. ",
"Could we use the following workaround?\r\n\r\n* `<extra_id_0>` could be considered as a mask token\r\n* Candidate sequences for the mask-token could be generated using a code, like: \r\n```python\r\nfrom transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration\r\n\r\nT5_PATH = 't5-base' # \"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"\r\n\r\nDEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU\r\n\r\nt5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)\r\nt5_config = T5Config.from_pretrained(T5_PATH)\r\nt5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)\r\n\r\n# Input text\r\ntext = 'India is a <extra_id_0> of the world. </s>'\r\n\r\nencoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')\r\ninput_ids = encoded['input_ids'].to(DEVICE)\r\n\r\n# Generaing 20 sequences with maximum length set to 5\r\noutputs = t5_mlm.generate(input_ids=input_ids, \r\n num_beams=200, num_return_sequences=20,\r\n max_length=5)\r\n\r\n_0_index = text.index('<extra_id_0>')\r\n_result_prefix = text[:_0_index]\r\n_result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>\r\n\r\ndef _filter(output, end_token='<extra_id_1>'):\r\n # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)\r\n _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)\r\n if end_token in _txt:\r\n _end_token_index = _txt.index(end_token)\r\n return _result_prefix + _txt[:_end_token_index] + _result_suffix\r\n else:\r\n return _result_prefix + _txt + _result_suffix\r\n\r\nresults = list(map(_filter, outputs))\r\nresults\r\n```\r\nOutput:\r\n```\r\n['India is a cornerstone of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a huge part of the world. </s>',\r\n 'India is a big part of the world. </s>',\r\n 'India is a beautiful part of the world. </s>',\r\n 'India is a very important part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a unique part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a beautiful country in of the world. </s>',\r\n 'India is a part of the of the world. </s>',\r\n 'India is a small part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a country in the of the world. </s>',\r\n 'India is a large part of the world. </s>',\r\n 'India is a part of the world. </s>',\r\n 'India is a significant part of the world. </s>',\r\n 'India is a part of the world. </s>']\r\n```",
"@girishponkiya Thanks for your example! Unfortunately, I can't reproduce your results. I get\r\n\r\n```\r\n['India is a _0> of the world. </s>',\r\n 'India is a ⁇ extra of the world. </s>',\r\n 'India is a India is of the world. </s>',\r\n 'India is a ⁇ extra_ of the world. </s>',\r\n 'India is a a of the world. </s>',\r\n 'India is a [extra_ of the world. </s>',\r\n 'India is a India is an of the world. </s>',\r\n 'India is a of the world of the world. </s>',\r\n 'India is a India. of the world. </s>',\r\n 'India is a is a of the world. </s>',\r\n 'India is a India ⁇ of the world. </s>',\r\n 'India is a Inde is of the world. </s>',\r\n 'India is a ] of the of the world. </s>',\r\n 'India is a . of the world. </s>',\r\n 'India is a _0 of the world. </s>',\r\n 'India is a is ⁇ of the world. </s>',\r\n 'India is a india is of the world. </s>',\r\n 'India is a India is the of the world. </s>',\r\n 'India is a -0> of the world. </s>',\r\n 'India is a ⁇ _ of the world. </s>']\r\n```\r\n\r\nTried on CPU, GPU, 't5-base' and 't5-3b' — same thing.",
"Could you please mention the version of torch, transformers and tokenizers? \n\nI used the followings:\n* torch: 1.5.0+cu101\r\n* transformers: 2.8.0\r\n* tokenizers: 0.7.0\n\n`tokenizers` in the latest version of `transformers` has a bug. Looking at your output, I believe you are using a buggy version of tokenizers. ",
"@girishponkiya I'm using \r\n\r\n```\r\ntransformers 2.9.0\r\ntokenizers 0.7.0\r\ntorch 1.4.0\r\n```\r\n\r\nTried tokenizers-0.5.2 transformers-2.8.0 — now it works, thank you!",
"Thanks to @takahiro971. He pointed out this bug in #4021. ",
"@girishponkiya thanks a lot for your above code. Your example works but if I run your above code instead with the text :\r\n\r\n`text = \"<extra_id_0> came to power after defeating Stalin\"\r\n`\r\n\r\nI get the following error:\r\n\r\n```\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in _generate_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask)\r\n 1354 # test that beam scores match previously calculated scores if not eos and batch_idx not done\r\n 1355 if eos_token_id is not None and all(\r\n-> 1356 (token_id % vocab_size).item() is not eos_token_id for token_id in next_tokens[batch_idx]\r\n 1357 ):\r\n 1358 assert torch.all(\r\n\r\nUnboundLocalError: local variable 'next_tokens' referenced before assignment\r\n```\r\n\r\nAny ideas of the cause?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Can I use multiple masking in a same sentence?",
"> Can I use multiple masking in a same sentence?\r\n\r\nIn this case, you need to set max_length higher or not set it at all.\r\nThis might produce more outputs than you need though.\r\n",
"outputs = t5_mlm.generate(input_ids=input_ids, \r\n num_beams=200, num_return_sequences=20,\r\n max_length=5)\r\n\r\nis very slow. How can I improve performance?",
"@jban-x3 I think you should try smaller values for `num_beams`",
"> Could we use the following workaround?\r\n> \r\n> * `<extra_id_0>` could be considered as a mask token\r\n> * Candidate sequences for the mask-token could be generated using a code, like:\r\n> \r\n> ```python\r\n> from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration\r\n> \r\n> T5_PATH = 't5-base' # \"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"\r\n> \r\n> DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU\r\n> \r\n> t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)\r\n> t5_config = T5Config.from_pretrained(T5_PATH)\r\n> t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)\r\n> \r\n> # Input text\r\n> text = 'India is a <extra_id_0> of the world. </s>'\r\n> \r\n> encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')\r\n> input_ids = encoded['input_ids'].to(DEVICE)\r\n> \r\n> # Generaing 20 sequences with maximum length set to 5\r\n> outputs = t5_mlm.generate(input_ids=input_ids, \r\n> num_beams=200, num_return_sequences=20,\r\n> max_length=5)\r\n> \r\n> _0_index = text.index('<extra_id_0>')\r\n> _result_prefix = text[:_0_index]\r\n> _result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>\r\n> \r\n> def _filter(output, end_token='<extra_id_1>'):\r\n> # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)\r\n> _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)\r\n> if end_token in _txt:\r\n> _end_token_index = _txt.index(end_token)\r\n> return _result_prefix + _txt[:_end_token_index] + _result_suffix\r\n> else:\r\n> return _result_prefix + _txt + _result_suffix\r\n> \r\n> results = list(map(_filter, outputs))\r\n> results\r\n> ```\r\n> \r\n> Output:\r\n> \r\n> ```\r\n> ['India is a cornerstone of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a huge part of the world. </s>',\r\n> 'India is a big part of the world. </s>',\r\n> 'India is a beautiful part of the world. </s>',\r\n> 'India is a very important part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a unique part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a beautiful country in of the world. </s>',\r\n> 'India is a part of the of the world. </s>',\r\n> 'India is a small part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a country in the of the world. </s>',\r\n> 'India is a large part of the world. </s>',\r\n> 'India is a part of the world. </s>',\r\n> 'India is a significant part of the world. </s>',\r\n> 'India is a part of the world. </s>']\r\n> ```\r\n\r\nNice tool! \r\nmay I ask a question?\r\nwhat if I have several masked places to predict?\r\n\r\n```python\r\nt5('India is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)\r\n```\r\nI got:\r\n[{'generated_text': 'part world'}]\r\n\r\nIt seems only the first masked place is predicted.",
"> > Could we use the following workaround?\r\n> > \r\n> > * `<extra_id_0>` could be considered as a mask token\r\n> > * Candidate sequences for the mask-token could be generated using a code, like:\r\n> > \r\n> > ```python\r\n> > from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration\r\n> > \r\n> > T5_PATH = 't5-base' # \"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"\r\n> > \r\n> > DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU\r\n> > \r\n> > t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)\r\n> > t5_config = T5Config.from_pretrained(T5_PATH)\r\n> > t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)\r\n> > \r\n> > # Input text\r\n> > text = 'India is a <extra_id_0> of the world. </s>'\r\n> > \r\n> > encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')\r\n> > input_ids = encoded['input_ids'].to(DEVICE)\r\n> > \r\n> > # Generaing 20 sequences with maximum length set to 5\r\n> > outputs = t5_mlm.generate(input_ids=input_ids, \r\n> > num_beams=200, num_return_sequences=20,\r\n> > max_length=5)\r\n> > \r\n> > _0_index = text.index('<extra_id_0>')\r\n> > _result_prefix = text[:_0_index]\r\n> > _result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>\r\n> > \r\n> > def _filter(output, end_token='<extra_id_1>'):\r\n> > # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)\r\n> > _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)\r\n> > if end_token in _txt:\r\n> > _end_token_index = _txt.index(end_token)\r\n> > return _result_prefix + _txt[:_end_token_index] + _result_suffix\r\n> > else:\r\n> > return _result_prefix + _txt + _result_suffix\r\n> > \r\n> > results = list(map(_filter, outputs))\r\n> > results\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Output:\r\n> > ```\r\n> > ['India is a cornerstone of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a huge part of the world. </s>',\r\n> > 'India is a big part of the world. </s>',\r\n> > 'India is a beautiful part of the world. </s>',\r\n> > 'India is a very important part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a unique part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a beautiful country in of the world. </s>',\r\n> > 'India is a part of the of the world. </s>',\r\n> > 'India is a small part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a country in the of the world. </s>',\r\n> > 'India is a large part of the world. </s>',\r\n> > 'India is a part of the world. </s>',\r\n> > 'India is a significant part of the world. </s>',\r\n> > 'India is a part of the world. </s>']\r\n> > ```\r\n> \r\n> Nice tool! may I ask a question? what if I have several masked places to predict?\r\n> \r\n> ```python\r\n> t5('India is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)\r\n> ```\r\n> \r\n> I got: [{'generated_text': 'part world'}]\r\n> \r\n> It seems only the first masked place is predicted.\r\n\r\nSeems like <extra_id_0> is filled with \"part\", and <extra_id_1> is filled with \"world\"?",
"Hi @beyondguo,\r\n\r\n> may I ask a question?\r\n> what if I have several masked places to predict?\r\n> ```python\r\n> t5('India` is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)\r\n> ```\r\n\r\nYou need to modify the `_filter` function. \r\n\r\nThe (original) function in [my code](https://github.com/huggingface/transformers/issues/3985#issuecomment-622981083) extracts the word sequence between `<extra_id_0>` and `<extra_id_1>` from the output, and the word sequence is being used to replace the `<extra_id_0>` token in the input. In addition to it, you have to extract the word sequence between `<extra_id_1>` and `<extra_id_2>` to replace the `<extra_id_1>` token from the input. Now, as the system needs to generate more tokens, one needs to set a bigger value for the `max_length` argument of `t5_mlm.generate`.",
"How is it possible to use the workaround with target words like in the pipeline? \r\nE.g. targets=[\"target1\", \"target2\"] and get the probabilities definitely for those targets?"
] | 1,587 | 1,674 | 1,595 | NONE | null | Does anyone know if it is possible to use the T5 model with Hugging Face's fill-mask pipeline? Below is how you can do it using the default model, but I can't seem to figure out how to do it using the T5 model specifically.
```
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
```
Trying the following, for example, raises "TypeError: must be str, not NoneType", because nlp_fill.tokenizer.mask_token is None:
```
nlp_fill = pipeline('fill-mask',model="t5-base", tokenizer="t5-base")
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
```
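For reference, a quick check that confirms the cause (checkpoint names are only illustrative; any masked-LM tokenizer works for the comparison):

```python
from transformers import AutoTokenizer

for name in ("bert-base-uncased", "t5-base"):
    tok = AutoTokenizer.from_pretrained(name)
    print(name, "->", tok.mask_token)  # '[MASK]' for BERT, None for T5
```

T5 defines no mask token because it was not trained with a BERT-style masked language modeling objective, which is also what the comments above explain.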
Stack overflow [question](https://stackoverflow.com/questions/61408753/using-the-t5-model-with-huggingfaces-mask-fill-pipeline) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3985/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3984/comments | https://api.github.com/repos/huggingface/transformers/issues/3984/events | https://github.com/huggingface/transformers/issues/3984 | 606,976,098 | MDU6SXNzdWU2MDY5NzYwOTg= | 3,984 | Why is the accuracy very low when we test sentences by feeding the words in one at a time? | {
"login": "sqkika",
"id": 52687399,
"node_id": "MDQ6VXNlcjUyNjg3Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/52687399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sqkika",
"html_url": "https://github.com/sqkika",
"followers_url": "https://api.github.com/users/sqkika/followers",
"following_url": "https://api.github.com/users/sqkika/following{/other_user}",
"gists_url": "https://api.github.com/users/sqkika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sqkika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sqkika/subscriptions",
"organizations_url": "https://api.github.com/users/sqkika/orgs",
"repos_url": "https://api.github.com/users/sqkika/repos",
"events_url": "https://api.github.com/users/sqkika/events{/privacy}",
"received_events_url": "https://api.github.com/users/sqkika/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"when we were training, the accuracy count method is like this: \r\ninput: [0,1,2,3,4,5,...,29]\r\nlabel: [1,2,3,4,5,6,...,30]\r\nso we count the right hits in 30 seqs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # ❓ Questions & Help
Hi, friends:
There is a problem: we trained a good GPT-2 language model with sequence length 30. When we batch-test during training with sequence length 30, the accuracy can reach 90%. But when we test like this:
[1]->[2], [1,2]->[3], [1,2,3]->[4] (the left side is the input, and we hope the model predicts the right side), the accuracy is very low, only 6%.
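For what it's worth, here is a hedged sketch contrasting the two evaluation styles on the public `gpt2` checkpoint (the issue's own model and data are not available, so this only illustrates the mechanics):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
ids = tokenizer.encode("the quick brown fox jumps over the lazy dog", return_tensors="pt")

with torch.no_grad():
    # Batched "training-style" accuracy: one pass with teacher forcing.
    logits = model(ids)[0]               # (1, seq_len, vocab_size)
    batched = logits[:, :-1].argmax(-1)  # position t predicts token t + 1

    # Incremental style from the question: feed [1..t], keep the last prediction.
    stepwise = [
        model(ids[:, : t + 1])[0][:, -1].argmax(-1).item()
        for t in range(ids.shape[1] - 1)
    ]

print(batched[0].tolist())
print(stepwise)
```

For a causal LM both lists should be identical, so a 90% vs 6% gap usually means the batched metric is comparing predictions against misaligned (unshifted or padded) labels rather than reflecting true model quality.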
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3984/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3983/comments | https://api.github.com/repos/huggingface/transformers/issues/3983/events | https://github.com/huggingface/transformers/issues/3983 | 606,973,650 | MDU6SXNzdWU2MDY5NzM2NTA= | 3,983 | MLM Loss not decreasing when pretraining Bert from scratch | {
"login": "JF-D",
"id": 30710061,
"node_id": "MDQ6VXNlcjMwNzEwMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/30710061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JF-D",
"html_url": "https://github.com/JF-D",
"followers_url": "https://api.github.com/users/JF-D/followers",
"following_url": "https://api.github.com/users/JF-D/following{/other_user}",
"gists_url": "https://api.github.com/users/JF-D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JF-D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JF-D/subscriptions",
"organizations_url": "https://api.github.com/users/JF-D/orgs",
"repos_url": "https://api.github.com/users/JF-D/repos",
"events_url": "https://api.github.com/users/JF-D/events{/privacy}",
"received_events_url": "https://api.github.com/users/JF-D/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | NONE | null | I want to pretrain a BERT-base model from scratch on the English Wikipedia dataset, since I haven't found a BookCorpus copy. The code I used was adapted from pytorch-pretrained-bert and Nvidia Megatron-LM. I can fine-tune BERT on SQuAD and get SOTA results, but for pretraining the MLM loss stays around 7.3. I have no idea why this happens. Can someone suggest possible solutions? Thanks a lot!
"url": "https://api.github.com/repos/huggingface/transformers/issues/3983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3983/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3982/comments | https://api.github.com/repos/huggingface/transformers/issues/3982/events | https://github.com/huggingface/transformers/issues/3982 | 606,972,234 | MDU6SXNzdWU2MDY5NzIyMzQ= | 3,982 | Can't install transformers from source using poetry | {
"login": "simonepri",
"id": 3505087,
"node_id": "MDQ6VXNlcjM1MDUwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3505087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepri",
"html_url": "https://github.com/simonepri",
"followers_url": "https://api.github.com/users/simonepri/followers",
"following_url": "https://api.github.com/users/simonepri/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepri/subscriptions",
"organizations_url": "https://api.github.com/users/simonepri/orgs",
"repos_url": "https://api.github.com/users/simonepri/repos",
"events_url": "https://api.github.com/users/simonepri/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"In the meantime as a workaround, you can do the following:\r\n\r\n1) Fork huggingface/transformers\r\n2) On your fork remove this line https://github.com/huggingface/transformers/blob/97a375484c618496691982f62518130f294bb9a8/setup.py#L79\r\n3) Install your fork with poetry",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
When installing transformers from source using poetry:
```bash
poetry add git+https://github.com/huggingface/transformers.git
```
The following exception is thrown:
```bash
[InvalidRequirement]
Invalid requirement, parse error at "'extra =='"
```
This is a problem on poetry's side. This issue is just here to cross-link the problem.
Ref: https://github.com/python-poetry/poetry/issues/2326
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3982/reactions",
"total_count": 7,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/3982/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3981/comments | https://api.github.com/repos/huggingface/transformers/issues/3981/events | https://github.com/huggingface/transformers/pull/3981 | 606,969,240 | MDExOlB1bGxSZXF1ZXN0NDA5MDUyNjIw | 3,981 | Improve split on token | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=h1) Report\n> Merging [#3981](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3981 +/- ##\n==========================================\n- Coverage 78.44% 78.42% -0.03% \n==========================================\n Files 111 111 \n Lines 18518 18513 -5 \n==========================================\n- Hits 14527 14518 -9 \n- Misses 3991 3995 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.78% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.94% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=footer). Last update [4e817ff...410c538](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,594 | 1,594 | CONTRIBUTOR | null | This patch makes the code shorter and also fixes a bug that is currently suppressed [1].
old code:
split_on_token(tok = '[MASK]', text='')
> ['[MASK]']
This is a bug: split_on_token shouldn't add the token when the input doesn't contain the token.
[1] 21451ec (handle string with only whitespaces as empty, 2019-12-06)
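For reference, a minimal runnable sketch of the behavior described above (the function name comes from the issue; the real implementation in tokenization_utils.py handles more cases, so this is an illustration, not the actual patch):

```python
def split_on_token(tok, text):
    # Split `text` on `tok`, keeping `tok` itself as a separate piece.
    result = []
    for i, sub_text in enumerate(text.split(tok)):
        if i > 0:
            result.append(tok)  # only emitted when `tok` actually occurred
        if sub_text:
            result.append(sub_text)
    return result

assert split_on_token("[MASK]", "") == []  # fixed behavior: no spurious token
assert split_on_token("[MASK]", "a [MASK] b") == ["a ", "[MASK]", " b"]
```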
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3981",
"html_url": "https://github.com/huggingface/transformers/pull/3981",
"diff_url": "https://github.com/huggingface/transformers/pull/3981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3981.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3980/comments | https://api.github.com/repos/huggingface/transformers/issues/3980/events | https://github.com/huggingface/transformers/issues/3980 | 606,963,060 | MDU6SXNzdWU2MDY5NjMwNjA= | 3,980 | When I run `import transformers`, it reports an error. But I had no problem with pytorch-transformers before. Is it a TensorRT problem? | {
"login": "songyingxin",
"id": 13884292,
"node_id": "MDQ6VXNlcjEzODg0Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/13884292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songyingxin",
"html_url": "https://github.com/songyingxin",
"followers_url": "https://api.github.com/users/songyingxin/followers",
"following_url": "https://api.github.com/users/songyingxin/following{/other_user}",
"gists_url": "https://api.github.com/users/songyingxin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songyingxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songyingxin/subscriptions",
"organizations_url": "https://api.github.com/users/songyingxin/orgs",
"repos_url": "https://api.github.com/users/songyingxin/repos",
"events_url": "https://api.github.com/users/songyingxin/events{/privacy}",
"received_events_url": "https://api.github.com/users/songyingxin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
When I run `import transformers`, I get this:
2020-04-26 17:02:59.210145: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
*** Error in `/conda-torch/bin/python': double free or corruption (!prev): 0x00007faa0eac23c0 ***
Is there anything that can fix this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3980/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3979/comments | https://api.github.com/repos/huggingface/transformers/issues/3979/events | https://github.com/huggingface/transformers/pull/3979 | 606,905,802 | MDExOlB1bGxSZXF1ZXN0NDA5MDA3MTc2 | 3,979 | Add modelcard for Hate-speech-CNERG/dehatebert-mono-arabic model | {
"login": "SaiSakethAluru",
"id": 21140068,
"node_id": "MDQ6VXNlcjIxMTQwMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/21140068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaiSakethAluru",
"html_url": "https://github.com/SaiSakethAluru",
"followers_url": "https://api.github.com/users/SaiSakethAluru/followers",
"following_url": "https://api.github.com/users/SaiSakethAluru/following{/other_user}",
"gists_url": "https://api.github.com/users/SaiSakethAluru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaiSakethAluru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaiSakethAluru/subscriptions",
"organizations_url": "https://api.github.com/users/SaiSakethAluru/orgs",
"repos_url": "https://api.github.com/users/SaiSakethAluru/repos",
"events_url": "https://api.github.com/users/SaiSakethAluru/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaiSakethAluru/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3979/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3979",
"html_url": "https://github.com/huggingface/transformers/pull/3979",
"diff_url": "https://github.com/huggingface/transformers/pull/3979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3979.patch",
"merged_at": 1588025935000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3978/comments | https://api.github.com/repos/huggingface/transformers/issues/3978/events | https://github.com/huggingface/transformers/pull/3978 | 606,904,727 | MDExOlB1bGxSZXF1ZXN0NDA5MDA2Mzg2 | 3,978 | Fix t5 doc typos | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great thanks you @enzoampil "
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Read through the docs for T5 at `docs/source/model_doc/t5.rst` and found some typos. Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3978/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3978",
"html_url": "https://github.com/huggingface/transformers/pull/3978",
"diff_url": "https://github.com/huggingface/transformers/pull/3978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3978.patch",
"merged_at": 1588004836000
} |
https://api.github.com/repos/huggingface/transformers/issues/3977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3977/comments | https://api.github.com/repos/huggingface/transformers/issues/3977/events | https://github.com/huggingface/transformers/issues/3977 | 606,903,449 | MDU6SXNzdWU2MDY5MDM0NDk= | 3,977 | xlnet large | {
"login": "gogokre",
"id": 44871498,
"node_id": "MDQ6VXNlcjQ0ODcxNDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44871498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gogokre",
"html_url": "https://github.com/gogokre",
"followers_url": "https://api.github.com/users/gogokre/followers",
"following_url": "https://api.github.com/users/gogokre/following{/other_user}",
"gists_url": "https://api.github.com/users/gogokre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gogokre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gogokre/subscriptions",
"organizations_url": "https://api.github.com/users/gogokre/orgs",
"repos_url": "https://api.github.com/users/gogokre/repos",
"events_url": "https://api.github.com/users/gogokre/events{/privacy}",
"received_events_url": "https://api.github.com/users/gogokre/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe the task is learning rate and batch size sensitive. You can try to decrease the learning rate when changing the batch size from 8 to 2. ",
"> Maybe the task is learning rate and batch size sensitive. You can try to decrease the learning rate when changing the batch size from 8 to 2.\r\n\r\nUsing xlnet-large-cased with batch size 2 and learning rate = 1e-5 is the same symptom. Is there another way?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | If you train with batch size 8 at max length 512 with xlnet-base-cased, training goes well: values such as loss, accuracy, precision, recall, and F1-score all change over time.
However, if you train with batch size 2 at max length 512 with xlnet-large-cased, there is almost no change even across several runs: precision is always 1.00000 and the other values stay fixed.
Is it because the batch size is too small? Help.
[epoch1]
Tr Loss | Vld Acc | Vld Loss | Vld Prec | Vld Reca | Vld F1
0.39480 | 0.87500 | 0.39765 | 1.00000 | 0.87500 | 0.93333
[epoch2]
Tr Loss | Vld Acc | Vld Loss | Vld Prec | Vld Reca | Vld F1
0.39772 | 0.87500 | 0.38215 | 1.00000 | 0.87500 | 0.93333 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3977/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3976/comments | https://api.github.com/repos/huggingface/transformers/issues/3976/events | https://github.com/huggingface/transformers/pull/3976 | 606,897,497 | MDExOlB1bGxSZXF1ZXN0NDA5MDAxMTA1 | 3,976 | Fixed Style Inconsistency | {
"login": "jtaylor351",
"id": 28931962,
"node_id": "MDQ6VXNlcjI4OTMxOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/28931962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtaylor351",
"html_url": "https://github.com/jtaylor351",
"followers_url": "https://api.github.com/users/jtaylor351/followers",
"following_url": "https://api.github.com/users/jtaylor351/following{/other_user}",
"gists_url": "https://api.github.com/users/jtaylor351/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jtaylor351/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtaylor351/subscriptions",
"organizations_url": "https://api.github.com/users/jtaylor351/orgs",
"repos_url": "https://api.github.com/users/jtaylor351/repos",
"events_url": "https://api.github.com/users/jtaylor351/events{/privacy}",
"received_events_url": "https://api.github.com/users/jtaylor351/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=h1) Report\n> Merging [#3976](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3976 +/- ##\n==========================================\n- Coverage 78.44% 78.43% -0.02% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14525 -2 \n- Misses 3991 3993 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=footer). Last update [4e817ff...6e5ef43](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @jtaylor351 !"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | This fixes a style inconsistency in BertForSequenceClassification's constructor where 'config' is referenced using both the parameter 'config' and 'self.config' on the same line.
This line is the only instance in modeling_bert.py where the config parameter is referenced using self.config in a constructor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3976",
"html_url": "https://github.com/huggingface/transformers/pull/3976",
"diff_url": "https://github.com/huggingface/transformers/pull/3976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3976.patch",
"merged_at": 1588249990000
} |
https://api.github.com/repos/huggingface/transformers/issues/3975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3975/comments | https://api.github.com/repos/huggingface/transformers/issues/3975/events | https://github.com/huggingface/transformers/pull/3975 | 606,870,428 | MDExOlB1bGxSZXF1ZXN0NDA4OTgxMTg1 | 3,975 | Added support for pathlib.Path objects instead of string paths in from_pretrained() (resolves #3962) | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=h1) Report\n> Merging [#3975](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `85.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3975 +/- ##\n=======================================\n Coverage 78.44% 78.45% \n=======================================\n Files 111 111 \n Lines 18518 18534 +16 \n=======================================\n+ Hits 14527 14541 +14 \n- Misses 3991 3993 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.57% <50.00%> (-0.10%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.68% <75.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `87.80% <100.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.10% <100.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.95% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.50% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.73% <100.00%> (+0.01%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=footer). Last update [4e817ff...033d5f8](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"On second thought, I probably would't merge this. I think it's better to just let the user deal with this on their end.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,598 | 1,598 | CONTRIBUTOR | null | Resolves issue/feature request #3962
This only covers string paths in `from_pretrained` (so vocab_file paths passed to `__init__` for tokenizers aren't covered). This may or may not be a desired feature (since it's fairly easy for the client to convert a path-like to a string). With these changes, you would be able to use `from_pretrained` with path-likes:
```python
from pathlib import Path
from transformers import AutoModel, AutoTokenizer
my_path = Path("path/to/model_dir/")
AutoTokenizer.from_pretrained(my_path)
AutoModel.from_pretrained(my_path)
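
# A sketch of one possible coercion inside from_pretrained (illustrative only;
# this is an assumption, not the PR's verbatim diff):
#
#     import os
#     if isinstance(pretrained_model_name_or_path, os.PathLike):
#         pretrained_model_name_or_path = os.fspath(pretrained_model_name_or_path)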
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3975",
"html_url": "https://github.com/huggingface/transformers/pull/3975",
"diff_url": "https://github.com/huggingface/transformers/pull/3975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3975.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3974/comments | https://api.github.com/repos/huggingface/transformers/issues/3974/events | https://github.com/huggingface/transformers/issues/3974 | 606,831,221 | MDU6SXNzdWU2MDY4MzEyMjE= | 3,974 | Weights from pretrained model not used in GPT2LMHeadModel | {
"login": "ankit-chadha",
"id": 52430440,
"node_id": "MDQ6VXNlcjUyNDMwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/52430440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankit-chadha",
"html_url": "https://github.com/ankit-chadha",
"followers_url": "https://api.github.com/users/ankit-chadha/followers",
"following_url": "https://api.github.com/users/ankit-chadha/following{/other_user}",
"gists_url": "https://api.github.com/users/ankit-chadha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankit-chadha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankit-chadha/subscriptions",
"organizations_url": "https://api.github.com/users/ankit-chadha/orgs",
"repos_url": "https://api.github.com/users/ankit-chadha/repos",
"events_url": "https://api.github.com/users/ankit-chadha/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankit-chadha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten - could you help elaborate on this?",
"Yeah, we need to clean this logging. The weights are correctly loaded into the model as far as I know (we have integration tests for pretrained GPT2 models so the weights have to be correctly loaded). Just need to clean up the logging logic there.",
"Actually this logging info is fine. Those masked bias are saved values using the `register_buffer()` method and don't need to be loaded."
] | 1,587 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): gpt2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. trained gpt2 using run_language_modeling
2. using run_generation.py
`backend_1 | 04/25/2020 18:37:27 - INFO - transformers.modeling_utils - Weights from pretrained model not used in GPT2LMHeadModel: ['transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.9.attn.masked_bias', 'transformer.h.10.attn.masked_bias', 'transformer.h.11.attn.masked_bias']`
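As the comments above clarify, these `masked_bias` entries are buffers rather than learnable weights. A minimal sketch of why such keys end up in a checkpoint (illustrative, not the exact GPT-2 source):
```python
import torch

class Attention(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # register_buffer() stores the tensor in state_dict() without making it a
        # Parameter, so the key is saved alongside the weights even though nothing
        # needs to be loaded into it
        self.register_buffer("masked_bias", torch.tensor(-1e4))

print(Attention().state_dict().keys())  # odict_keys(['masked_bias'])
```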
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3974/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3973/comments | https://api.github.com/repos/huggingface/transformers/issues/3973/events | https://github.com/huggingface/transformers/pull/3973 | 606,818,116 | MDExOlB1bGxSZXF1ZXN0NDA4OTQzMDUy | 3,973 | Pytorch 1.5.0 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=h1) Report\n> Merging [#3973](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3973 +/- ##\n==========================================\n- Coverage 78.44% 78.43% -0.02% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14525 -2 \n- Misses 3991 3993 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=footer). Last update [4e817ff...4bee514](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"~~I've seen one error with 1.5 and the new trainer interface (using ner) - I'll raise an issue for that :)~~"
] | 1,587 | 1,588 | 1,588 | MEMBER | null | PyTorch 1.5.0 doesn't allow specifying a standard deviation of 0 in normal distributions. We were testing that our models initialized with a normal distribution with a mean and a standard deviation of 0 had all their parameters initialized to 0.
This allows us to verify that all weights in the model are indeed initialized according to the specified weights initializations.
We now specify a tiny value (1e-10) and round to the 9th decimal so that all these values are set to 0.
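A rough sketch of the test idea (illustrative values, not the exact test code):
```python
import torch

emb = torch.nn.Embedding(10, 4)
emb.weight.data.normal_(mean=0.0, std=1e-10)  # std=0.0 raises on torch 1.5.0

# rounding the mean to the 9th decimal collapses the ~1e-10 spread to exactly 0
mean = (emb.weight.data.mean() * 1e9).round() / 1e9
assert mean.item() == 0.0
```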
closes #3947
closes #3872 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3973",
"html_url": "https://github.com/huggingface/transformers/pull/3973",
"diff_url": "https://github.com/huggingface/transformers/pull/3973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3973.patch",
"merged_at": 1588688582000
} |
https://api.github.com/repos/huggingface/transformers/issues/3972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3972/comments | https://api.github.com/repos/huggingface/transformers/issues/3972/events | https://github.com/huggingface/transformers/issues/3972 | 606,817,688 | MDU6SXNzdWU2MDY4MTc2ODg= | 3,972 | Sized Fill-in-the-blank or Multi Mask filling with T5 | {
"login": "ramsrigouthamg",
"id": 1754080,
"node_id": "MDQ6VXNlcjE3NTQwODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1754080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ramsrigouthamg",
"html_url": "https://github.com/ramsrigouthamg",
"followers_url": "https://api.github.com/users/ramsrigouthamg/followers",
"following_url": "https://api.github.com/users/ramsrigouthamg/following{/other_user}",
"gists_url": "https://api.github.com/users/ramsrigouthamg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ramsrigouthamg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ramsrigouthamg/subscriptions",
"organizations_url": "https://api.github.com/users/ramsrigouthamg/orgs",
"repos_url": "https://api.github.com/users/ramsrigouthamg/repos",
"events_url": "https://api.github.com/users/ramsrigouthamg/events{/privacy}",
"received_events_url": "https://api.github.com/users/ramsrigouthamg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you discover any way of doing this?",
"@edmar Nothing solid yet! I Will update if I do.",
"Also looking for a solution\r\nhttps://github.com/google-research/text-to-text-transfer-transformer/issues/133",
"@franz101 @edmar \r\nThe closest thing I could come up with : \r\n\r\n```\r\nfrom transformers import RobertaTokenizer, RobertaForMaskedLM\r\nimport torch\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\nmodel = RobertaForMaskedLM.from_pretrained('roberta-base')\r\n\r\nsentence = \"Tom has fully <mask> <mask> <mask> illness.\"\r\n\r\n\r\ntoken_ids = tokenizer.encode(sentence, return_tensors='pt')\r\n# print(token_ids)\r\ntoken_ids_tk = tokenizer.tokenize(sentence, return_tensors='pt')\r\nprint(token_ids_tk)\r\n\r\n\r\nmasked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()\r\nmasked_pos = [mask.item() for mask in masked_position ]\r\nprint (masked_pos)\r\n\r\n\r\nwith torch.no_grad():\r\n output = model(token_ids)\r\n\r\nlast_hidden_state = output[0].squeeze()\r\n\r\nprint (\"\\n\\n\")\r\nprint (\"sentence : \",sentence)\r\nprint (\"\\n\")\r\nlist_of_list =[]\r\nfor mask_index in masked_pos:\r\n mask_hidden_state = last_hidden_state[mask_index]\r\n idx = torch.topk(mask_hidden_state, k=5, dim=0)[1]\r\n words = [tokenizer.decode(i.item()).strip() for i in idx]\r\n list_of_list.append(words)\r\n print (words)\r\n \r\nbest_guess = \"\"\r\nfor j in list_of_list:\r\n best_guess = best_guess+\" \"+j[0]\r\n\r\nprint (\"\\nBest guess for fill in the blank :::\",best_guess)\r\n```\r\n\r\nThe output is : \r\n['Tom', 'Ġhas', 'Ġfully', '<mask>', '<mask>', '<mask>', 'Ġillness', '.']\r\n[4, 5, 6]\r\n\r\nsentence : Tom has fully <mask> <mask> <mask> illness.\r\n\r\n['recovered', 'returned', 'recover', 'healed', 'cleared']\r\n['from', 'his', 'with', 'to', 'the']\r\n['his', 'the', 'her', 'mental', 'this']\r\n\r\nBest guess for fill in the blank ::: recovered from his",
"@ramsrigouthamg @edmar @franz101 Any update on how to do that ?",
"@Diego999 The above-provided answer is the best I have. \r\nGoogle hasn't released the pretrained multi-mask fill model.",
"@ramsrigouthamg Thanks! This is also similar to BART architecture where they mask a span of text. A similar thread is available here https://github.com/huggingface/transformers/issues/4984 ",
"@Diego999 There are few things that you can possibly explore. There is Spanbert https://huggingface.co/SpanBERT/spanbert-base-cased but I didn't explore on how to use it.\r\nThen there is Google pegasus that is trained with sentences as masks. https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html",
"This example might be useful:\r\nhttps://github.com/huggingface/transformers/issues/3985",
"@ramsrigouthamg Any luck with it yet on T5?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think the `__4__` is a special token, and they use the same text to text framework (seq2seq, instead of masked-lm) to train this task. \r\nAnd that's why the screenshot above says:\r\n> train it to fill in the blank with **approximately** 4 words",
"is there any updates or simple example code for doing this? thanks!",
"```\r\nfrom transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration\r\n\r\nT5_PATH = 't5-base' # \"t5-small\", \"t5-base\", \"t5-large\", \"t5-3b\", \"t5-11b\"\r\n\r\nDEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU\r\n\r\nt5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)\r\nt5_config = T5Config.from_pretrained(T5_PATH)\r\nt5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)\r\n\r\n# Input text\r\ntext = 'India is a <extra_id_0> of the world. </s>'\r\n\r\nencoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')\r\ninput_ids = encoded['input_ids'].to(DEVICE)\r\n\r\n# Generaing 20 sequences with maximum length set to 5\r\noutputs = t5_mlm.generate(input_ids=input_ids, \r\n num_beams=200, num_return_sequences=20,\r\n max_length=5)\r\n\r\n_0_index = text.index('<extra_id_0>')\r\n_result_prefix = text[:_0_index]\r\n_result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>\r\n\r\ndef _filter(output, end_token='<extra_id_1>'):\r\n # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)\r\n _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)\r\n if end_token in _txt:\r\n _end_token_index = _txt.index(end_token)\r\n return _result_prefix + _txt[:_end_token_index] + _result_suffix\r\n else:\r\n return _result_prefix + _txt + _result_suffix\r\n\r\nresults = list(map(_filter, outputs))\r\nresults\r\n```",
"does the above work? "
] | 1,587 | 1,637 | 1,604 | CONTRIBUTOR | null | In the Google T5 paper, they mentioned:
For example, with the input, “I love peanut butter and _4_ sandwiches,” the outputs looked like:
I love peanut butter and jelly, which is what makes good sandwiches.
How do I achieve multi-mask filling with T5 in Hugging Face Transformers?
Code samples please :)

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3972/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3971/comments | https://api.github.com/repos/huggingface/transformers/issues/3971/events | https://github.com/huggingface/transformers/issues/3971 | 606,813,515 | MDU6SXNzdWU2MDY4MTM1MTU= | 3,971 | Problem with downloading the XLNetSequenceClassification pretrained xlnet-large-cased | {
"login": "tienpham-dtp",
"id": 47011738,
"node_id": "MDQ6VXNlcjQ3MDExNzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/47011738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tienpham-dtp",
"html_url": "https://github.com/tienpham-dtp",
"followers_url": "https://api.github.com/users/tienpham-dtp/followers",
"following_url": "https://api.github.com/users/tienpham-dtp/following{/other_user}",
"gists_url": "https://api.github.com/users/tienpham-dtp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tienpham-dtp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tienpham-dtp/subscriptions",
"organizations_url": "https://api.github.com/users/tienpham-dtp/orgs",
"repos_url": "https://api.github.com/users/tienpham-dtp/repos",
"events_url": "https://api.github.com/users/tienpham-dtp/events{/privacy}",
"received_events_url": "https://api.github.com/users/tienpham-dtp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey mate, I've encountered the same issue. How did you solve it?",
"I encounter this issue. How can I solve this?",
"the problem is when l import xlnet from pytorch_transformers. instead, you should be importing it from the module called 'transformers'"
] | 1,587 | 1,588 | 1,587 | NONE | null | I've run this before and it was fine. However, this time, I keep encountering this runtime error. In fact, when I tried to initialize model = XLNetModel.from_pretrained('xlnet-large-cased'), it also gave me the 'negative dimension' error.
`model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2)
model.to(device)`
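(The traceback below came from the original setup.) Per the resolution in the comments above, a sketch of the working import path; the model class should come from `transformers`, not the older `pytorch_transformers` package:
```python
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels=2)
model.to(device)
```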
`---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-37-d6f698a3714b> in <module>()
----> 1 model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2)
2 model.to(device)
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight)
95 self.scale_grad_by_freq = scale_grad_by_freq
96 if _weight is None:
---> 97 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
98 self.reset_parameters()
99 else:
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 1024]` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3971/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3970/comments | https://api.github.com/repos/huggingface/transformers/issues/3970/events | https://github.com/huggingface/transformers/pull/3970 | 606,812,090 | MDExOlB1bGxSZXF1ZXN0NDA4OTM4NzI1 | 3,970 | Fix GLUE TPU script | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @jysohn23 to see if it fixed the issue.",
"Closing in favour of integrating TPU support for the trainer."
] | 1,587 | 1,651 | 1,588 | MEMBER | null | Temporary fix until we refactor this script to work with the trainer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3970",
"html_url": "https://github.com/huggingface/transformers/pull/3970",
"diff_url": "https://github.com/huggingface/transformers/pull/3970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3970.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3969/comments | https://api.github.com/repos/huggingface/transformers/issues/3969/events | https://github.com/huggingface/transformers/pull/3969 | 606,807,127 | MDExOlB1bGxSZXF1ZXN0NDA4OTM1MjUz | 3,969 | override weights name | {
"login": "ofrik",
"id": 6185779,
"node_id": "MDQ6VXNlcjYxODU3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6185779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofrik",
"html_url": "https://github.com/ofrik",
"followers_url": "https://api.github.com/users/ofrik/followers",
"following_url": "https://api.github.com/users/ofrik/following{/other_user}",
"gists_url": "https://api.github.com/users/ofrik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ofrik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ofrik/subscriptions",
"organizations_url": "https://api.github.com/users/ofrik/orgs",
"repos_url": "https://api.github.com/users/ofrik/repos",
"events_url": "https://api.github.com/users/ofrik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ofrik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=h1) Report\n> Merging [#3969](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `82.60%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3969 +/- ##\n==========================================\n+ Coverage 78.44% 78.45% +0.01% \n==========================================\n Files 111 111 \n Lines 18518 18527 +9 \n==========================================\n+ Hits 14527 14536 +9 \n Misses 3991 3991 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.81% <80.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.11% <84.61%> (+0.17%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=footer). Last update [4e817ff...56024ae](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,594 | 1,594 | NONE | null | Optional override of the weights name, relevant for cases where torch.save() is patched like in https://github.com/allegroai/trains | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3969",
"html_url": "https://github.com/huggingface/transformers/pull/3969",
"diff_url": "https://github.com/huggingface/transformers/pull/3969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3969.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3968/comments | https://api.github.com/repos/huggingface/transformers/issues/3968/events | https://github.com/huggingface/transformers/pull/3968 | 606,785,268 | MDExOlB1bGxSZXF1ZXN0NDA4OTE5Mzkw | 3,968 | Remove boto3 dependency | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=h1) Report\n> Merging [#3968](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3968 +/- ##\n==========================================\n+ Coverage 78.44% 78.51% +0.07% \n==========================================\n Files 111 111 \n Lines 18518 18486 -32 \n==========================================\n- Hits 14527 14515 -12 \n+ Misses 3991 3971 -20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `72.61% <100.00%> (+3.74%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=footer). Last update [4e817ff...b440a19](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,588 | 1,588 | MEMBER | null | Downloading a model from `s3://` urls was not documented anywhere and I suspect it doesn't work with our `from_pretrained` methods anyways.
Removing boto3 also kills its transitive dependencies (some of which are slow to support new versions of Python), contributing to a leaner library:
```
boto3-1.12.46
botocore-1.15.46
docutils-0.15.2
jmespath-0.9.5
python-dateutil-2.8.1
s3transfer-0.3.3
six-1.14.0
urllib3-1.25.9
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3968",
"html_url": "https://github.com/huggingface/transformers/pull/3968",
"diff_url": "https://github.com/huggingface/transformers/pull/3968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3968.patch",
"merged_at": 1588000635000
} |
https://api.github.com/repos/huggingface/transformers/issues/3967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3967/comments | https://api.github.com/repos/huggingface/transformers/issues/3967/events | https://github.com/huggingface/transformers/pull/3967 | 606,774,696 | MDExOlB1bGxSZXF1ZXN0NDA4OTExNzAx | 3,967 | Proposal: saner num_labels in configs. | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I like this better as well. Probably won't be backwards compatible with users that have their config with `_num_labels` though.",
"Like it better as well! Agree with @LysandreJik that some configs might need to be fixed manually",
"Well, `_num_labels` (if present) should always be consistent with the length of `id2label`, no?",
"You're right, it should!",
"All right, merging this. @patrickvonplaten if you want to re-run your cleaning script at some point feel free to do it :) (let me know before)"
] | 1,587 | 1,588 | 1,588 | MEMBER | null | See https://github.com/guillaume-be/rust-bert/pull/21 for more context | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3967",
"html_url": "https://github.com/huggingface/transformers/pull/3967",
"diff_url": "https://github.com/huggingface/transformers/pull/3967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3967.patch",
"merged_at": 1588346936000
} |
https://api.github.com/repos/huggingface/transformers/issues/3966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3966/comments | https://api.github.com/repos/huggingface/transformers/issues/3966/events | https://github.com/huggingface/transformers/pull/3966 | 606,728,187 | MDExOlB1bGxSZXF1ZXN0NDA4ODc3Njk2 | 3,966 | Add CALBERT (Catalan ALBERT) base-uncased model card | {
"login": "txus",
"id": 83234,
"node_id": "MDQ6VXNlcjgzMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/83234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/txus",
"html_url": "https://github.com/txus",
"followers_url": "https://api.github.com/users/txus/followers",
"following_url": "https://api.github.com/users/txus/following{/other_user}",
"gists_url": "https://api.github.com/users/txus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/txus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/txus/subscriptions",
"organizations_url": "https://api.github.com/users/txus/orgs",
"repos_url": "https://api.github.com/users/txus/repos",
"events_url": "https://api.github.com/users/txus/events{/privacy}",
"received_events_url": "https://api.github.com/users/txus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Looks good! Model page for [CALBERT](https://huggingface.co/codegram/calbert-base-uncased)",
"One additional tweak: you should add a \r\n\r\n```\r\n---\r\nlanguage: catalan\r\n---\r\n```\r\n\r\nmetadata block at the top of the model page. Thanks!"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | Hi there!
I just uploaded a new ALBERT model pretrained on 4.3 GB of Catalan text. Thank you for these fantastic libraries and this platform for making this a joy!
"url": "https://api.github.com/repos/huggingface/transformers/issues/3966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3966",
"html_url": "https://github.com/huggingface/transformers/pull/3966",
"diff_url": "https://github.com/huggingface/transformers/pull/3966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3966.patch",
"merged_at": 1587820601000
} |
https://api.github.com/repos/huggingface/transformers/issues/3965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3965/comments | https://api.github.com/repos/huggingface/transformers/issues/3965/events | https://github.com/huggingface/transformers/pull/3965 | 606,707,384 | MDExOlB1bGxSZXF1ZXN0NDA4ODYyNzky | 3,965 | Remove hard-coded pad token id in distilbert and albert | {
"login": "monologg",
"id": 28896432,
"node_id": "MDQ6VXNlcjI4ODk2NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/28896432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monologg",
"html_url": "https://github.com/monologg",
"followers_url": "https://api.github.com/users/monologg/followers",
"following_url": "https://api.github.com/users/monologg/following{/other_user}",
"gists_url": "https://api.github.com/users/monologg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monologg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monologg/subscriptions",
"organizations_url": "https://api.github.com/users/monologg/orgs",
"repos_url": "https://api.github.com/users/monologg/repos",
"events_url": "https://api.github.com/users/monologg/events{/privacy}",
"received_events_url": "https://api.github.com/users/monologg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=h1) Report\n> Merging [#3965](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73d6a2f9019960c327f19689c1d9a6c0fba31d86&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3965 +/- ##\n==========================================\n- Coverage 78.45% 78.44% -0.02% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14528 14526 -2 \n- Misses 3990 3992 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.31% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.15% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.94% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=footer). Last update [73d6a2f...9d92a30](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM",
"@LysandreJik @VictorSanh \r\n\r\nHi:) Can you please check this PR? This one makes issue on Korean BERT. (which use `pad_token_id=1` and `unk_token_id=0`)\r\n\r\nI hope this PR will be applied on the next version of transformers library:)",
"lgtm!",
"@julien-c \r\n\r\nCan you merge this PR? Thank you so much:)"
] | 1,587 | 1,589 | 1,589 | CONTRIBUTOR | null | Since the config now has a `pad_token_id` attribute, `padding_idx` is set to `config.pad_token_id` in BertEmbedding (PR #3793).
It seems that not only `Bert`'s config but also those of `DistilBert` and `Albert` have `pad_token_id`. ([Distilbert config](https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-config.json), [Albert config](https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-v1-config.json))
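An illustrative before/after of the kind of change involved (dimension numbers below are placeholders, not copied from the modeling files):
```python
import torch.nn as nn

# before: the pad index was hard-coded, which breaks vocabularies where pad != 0
# (e.g. a tokenizer with pad_token_id=1 and unk_token_id=0)
word_embeddings = nn.Embedding(30522, 768, padding_idx=0)

# after: it follows the config, as BertEmbeddings already does
pad_token_id = 1  # i.e. config.pad_token_id
word_embeddings = nn.Embedding(30522, 768, padding_idx=pad_token_id)
```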
However, in the Embedding classes of DistilBert and Albert, `padding_idx` is still hard-coded, so I've fixed those parts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3965",
"html_url": "https://github.com/huggingface/transformers/pull/3965",
"diff_url": "https://github.com/huggingface/transformers/pull/3965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3965.patch",
"merged_at": 1589286765000
} |
https://api.github.com/repos/huggingface/transformers/issues/3964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3964/comments | https://api.github.com/repos/huggingface/transformers/issues/3964/events | https://github.com/huggingface/transformers/issues/3964 | 606,703,246 | MDU6SXNzdWU2MDY3MDMyNDY= | 3,964 | Error on dtype in modeling_bertabs.py file | {
"login": "pn12",
"id": 64300791,
"node_id": "MDQ6VXNlcjY0MzAwNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/64300791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pn12",
"html_url": "https://github.com/pn12",
"followers_url": "https://api.github.com/users/pn12/followers",
"following_url": "https://api.github.com/users/pn12/following{/other_user}",
"gists_url": "https://api.github.com/users/pn12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pn12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pn12/subscriptions",
"organizations_url": "https://api.github.com/users/pn12/orgs",
"repos_url": "https://api.github.com/users/pn12/repos",
"events_url": "https://api.github.com/users/pn12/events{/privacy}",
"received_events_url": "https://api.github.com/users/pn12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
Traceback (most recent call last):
File "run_summarizationn.py", line 324, in <module>
main()
File "run_summarizationn.py", line 309, in main
evaluate(args)
File "run_summarizationn.py", line 84, in evaluate
batch_data = predictor.translate_batch(batch)
File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 797, in translate_batch
return self._fast_translate_batch(batch, self.max_length, min_length=self.min_length)
File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 844, in _fast_translate_batch
dec_out, dec_states = self.model.decoder(decoder_input, src_features, dec_states, step=step)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 231, in forward
step=step,
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 328, in forward
dec_mask = torch.gt(tgt_pad_mask + self.mask[:, : tgt_pad_mask.size(1), : tgt_pad_mask.size(1)], 0)
RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool
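One possible workaround (an untested sketch): cast both masks to the same dtype before the addition in `modeling_bertabs.py`, mirroring the line in the traceback:
```python
dec_mask = torch.gt(
    tgt_pad_mask.to(torch.uint8)
    + self.mask[:, : tgt_pad_mask.size(1), : tgt_pad_mask.size(1)].to(torch.uint8),
    0,
)
```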
## Information
Model I am using (Bert, XLNet ...): bertabs
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3964/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3963/comments | https://api.github.com/repos/huggingface/transformers/issues/3963/events | https://github.com/huggingface/transformers/issues/3963 | 606,699,847 | MDU6SXNzdWU2MDY2OTk4NDc= | 3,963 | run_language_modeling, RuntimeError: expected scalar type Half but found Float | {
"login": "ankit-chadha",
"id": 52430440,
"node_id": "MDQ6VXNlcjUyNDMwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/52430440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankit-chadha",
"html_url": "https://github.com/ankit-chadha",
"followers_url": "https://api.github.com/users/ankit-chadha/followers",
"following_url": "https://api.github.com/users/ankit-chadha/following{/other_user}",
"gists_url": "https://api.github.com/users/ankit-chadha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankit-chadha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankit-chadha/subscriptions",
"organizations_url": "https://api.github.com/users/ankit-chadha/orgs",
"repos_url": "https://api.github.com/users/ankit-chadha/repos",
"events_url": "https://api.github.com/users/ankit-chadha/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankit-chadha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"What's your PyTorch version? Can you post the output of `transformers-cli env`\r\n\r\nThis does not seem specific to `examples/run_language_modeling.py` so I'm gonna ping @patrickvonplaten on this",
"@julien-c thanks for the prompt response.\r\n`- `transformers` version: 2.8.0\r\n- Platform: Linux-4.14.138+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.8\r\n- PyTorch version (GPU?): 1.3.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: no`",
"Hi @ankit-chadha, \r\n\r\nI will take a look next week :-). Maybe @sshleifer - do you something from the top of your head? This issue is also related: https://github.com/huggingface/transformers/issues/3676.",
"I have tried to replicate in torch 1.4 and mask, masked_bias are float16, and there is no error.\r\n\r\nWould it be possible to upgrade torch to 1.4 and see if the problem persists @ankit-chadha ?\r\n",
"@sshleifer @patrickvonplaten \r\n\r\n`- transformers version: 2.8.0\r\n- Platform: Linux-4.14.138+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.8\r\n- PyTorch version (GPU?): 1.4.0 (True)\r\n- Tensorflow version (GPU?): 2.0.0 (False)`\r\n\r\n````\r\nraceback (most recent call last): | 0/372 [00:00<?, ?it/s]\r\n File \"examples/run_language_modeling.py\", line 284, in <module>\r\n main()\r\n File \"examples/run_language_modeling.py\", line 254, in main\r\n trainer.train(model_path=model_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 314, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 388, in _training_step\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py\", line 152, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py\", line 162, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n output.reraise()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/_utils.py\", line 394, in reraise\r\n raise self.exc_type(msg)\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py\", line 616, in forward\r\n use_cache=use_cache,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py\", line 500, in forward\r\n use_cache=use_cache,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py\", line 237, in forward\r\n use_cache=use_cache,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py\", line 196, in forward\r\n attn_outputs = self._attn(query, key, value, attention_mask, head_mask)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py\", line 149, in _attn\r\n w = torch.where(mask, w, self.masked_bias)\r\nRuntimeError: expected scalar type Half but found Float\r\n```\r\n",
"Could you provide a small subset of your data so that I can reproduce?",
"@sshleifer - I am using the wiki.train.raw default dataset. "
] | 1,587 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
CUDA: 10.1
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
Installed transformers from source.
Trying to train **gpt2-medium**
Command Line:
```
export TRAIN_FILE=dataset/wiki.train.raw
export TEST_FILE=dataset/wiki.test.raw
python3 examples/run_language_modeling.py --fp16 \
--per_gpu_eval_batch_size 1 \
--output_dir=output \
    --model_type=gpt2-medium \
--model_name_or_path=gpt2-medium \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--overwrite_output_dir
```
```
Traceback (most recent call last):
File "examples/run_language_modeling.py", line 284, in <module>
main()
File "examples/run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 314, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 388, in _training_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 616, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 500, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 237, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 196, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 149, in _attn
w = torch.where(mask, w, self.masked_bias)
RuntimeError: expected scalar type Half but found Float
```
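
For illustration, a minimal standalone sketch of the failing `torch.where` call from `modeling_gpt2.py` (the tensor values are made up; this is a workaround idea, not the library's official patch): under fp16 the attention weights `w` are half precision while the `masked_bias` buffer stays float32, and casting the fill value to `w.dtype` sidesteps the mismatch.

```python
import torch

w = torch.randn(1, 1, 4, 4).half()   # attention scores under --fp16
mask = torch.rand(1, 1, 4, 4) > 0.5  # causal mask, bool
masked_bias = torch.tensor(-1e4)     # float32 buffer, as registered in the model

w = torch.where(mask, w, masked_bias.to(w.dtype))  # cast avoids "Half but found Float"
```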
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3962/comments | https://api.github.com/repos/huggingface/transformers/issues/3962/events | https://github.com/huggingface/transformers/issues/3962 | 606,699,601 | MDU6SXNzdWU2MDY2OTk2MDE= | 3,962 | `BertTokenizer.from_pretrained()` not working with native Python `pathlib` module | {
"login": "macwanj",
"id": 32582237,
"node_id": "MDQ6VXNlcjMyNTgyMjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/32582237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macwanj",
"html_url": "https://github.com/macwanj",
"followers_url": "https://api.github.com/users/macwanj/followers",
"following_url": "https://api.github.com/users/macwanj/following{/other_user}",
"gists_url": "https://api.github.com/users/macwanj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/macwanj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/macwanj/subscriptions",
"organizations_url": "https://api.github.com/users/macwanj/orgs",
"repos_url": "https://api.github.com/users/macwanj/repos",
"events_url": "https://api.github.com/users/macwanj/events{/privacy}",
"received_events_url": "https://api.github.com/users/macwanj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Looking into the `BertTokenizer` code, it inherits the `from_pretrained` function so the real issue is coming from `PreTrainedTokenizer`. To confirm this, I tried loading the pretrained `roberta-base` tokenizer and reproduced the same error:\r\n\r\n```\r\nmodel_name = 'roberta-base'\r\ntokenizer = AutoTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)\r\nmodel = AutoModel.from_pretrained(PROJ_DIR/'models'/model_name)\r\n\r\nOutput:\r\nAttributeError: 'PosixPath' object has no attribute 'decode'\r\n```\r\nI actually also get the same error from the `AutoModel.from_pretrained()` line but oddly not when it's `BertModel.from_pretrained()`. Looking into the documentation for `from_pretrained` in `PreTrainedTokenizer` this [line](https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/tokenization_utils.py#L825) doesn't suggest that a Path object is a supported input anyways. You could just stringify the Path before passing it in:\r\n\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(str(PROJ_DIR/'models'/model_name))\r\nmodel = AutoModel.from_pretrained(str(PROJ_DIR/'models'/model_name))\r\n```\r\n\r\n",
"Yes, stringify works. I upgraded from transformers 2.1 (from conda-forge) to the latest yesterday and the older version of transformers seems to work with `pathlib`. This isn't a dealbreaker for sure, but many other mature Python libraries, such as pandas, scikit-learn etc. have consistent compatibility with pathlib so it would be a *nice-to-have* to see this consistency with transformers too across all functions and classes.",
"From `pandas.io.common` they use a function [`stringify_path`](https://github.com/pandas-dev/pandas/blob/77a0f19c53279f7b2bf86c9e33daae2030b16e51/pandas/io/common.py#L96-L126) to convert any possible path-like objects to a string. This really comes down to whether the conversion from`pathlib.Path` to `str` should be handled by transformers or on the client side @julien-c .",
" I think issue is resolved now. It should be closed now",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,596 | 1,596 | NONE | null | Consider this code that downloads models and tokenizers to disk and then uses `BertTokenizer.from_pretrained` to load the tokenizer from disk.
**ISSUE:** `BertTokenizer.from_pretrained()` does not seem to be compatible with Python's native [pathlib](https://docs.python.org/3/library/pathlib.html) module.
```python
# -*- coding: utf-8 -*-
"""
Created on: 25-04-2020
Author: MacwanJ
ISSUE:
BertTokenizer.from_pretrained() is not compatible with pathlib constructs
"""
from pathlib import Path
import os
from transformers import BertModel, BertTokenizer
# enables proper path resolves when this script is run in terminal
# or in an interactive Python shell/notebook
try:
file_location = Path(__file__).parent.resolve()
except NameError:
file_location = Path.cwd().resolve()
PROJ_DIR = file_location.parent
#####################################################################
# DOWNLOAD MODELS & TOKENIZERS
#####################################################################
model_name = 'bert-base-uncased'
if not os.path.exists(PROJ_DIR/'models'/model_name):
print(model_name,
'folder does not exist. Creating folder now',
'and proceeding to download and save model')
os.makedirs(PROJ_DIR/'models'/model_name)
else:
print(model_name,
'folder already exists. Proceeding to download and save model')
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)
print('Download complete. Proceeding to save to disk')
model.save_pretrained(PROJ_DIR/'models'/model_name)
tokenizer.save_pretrained(PROJ_DIR/'models'/model_name)
print('Model saving complete')
#####################################################################
# LOAD MODEL & TOKENIZER FROM DISK
#####################################################################
model_name = 'bert-base-uncased'
# Load pre-trained model tokenizer (vocabulary)
# !! DOES NOT WORK UNLESS I CONVERT THE PATHLIB OBJECT TO STRING!!
tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)
# Load pre-trained model weights
model = BertModel.from_pretrained(PROJ_DIR/'models'/model_name,
output_hidden_states=True)
```
Running:
`tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)`
Yields this error:
```
Traceback (most recent call last):
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-94-eb0783e1626e>", line 1, in <module>
tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\tokenization_utils.py", line 393, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\tokenization_utils.py", line 424, in _from_pretrained
if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\file_utils.py", line 146, in is_remote_url
parsed = urlparse(url_or_filename)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 367, in urlparse
url, scheme, _coerce_result = _coerce_args(url, scheme)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 123, in _coerce_args
return _decode_args(args) + (_encode_result,)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 107, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 107, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'WindowsPath' object has no attribute 'decode'
```
All the other functions and methods appear to be compatible with pathlib except for `BertTokenizer.from_pretrained()`
It would be good to ensure consistency with pathlib constructs across all functions because pathlib makes it easier to specify paths that work out of the box across operating systems.
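
In the meantime, a minimal workaround sketch using only the standard library (the `stringify_path` helper below is hypothetical, modeled on pandas' internal one, and reuses the names from the script above):

```
import os

def stringify_path(path):
    # os.fspath handles pathlib.Path and any other os.PathLike; plain strings pass through
    return os.fspath(path) if isinstance(path, os.PathLike) else path

tokenizer = BertTokenizer.from_pretrained(stringify_path(PROJ_DIR/'models'/model_name))
```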
- `transformers` version: 2.8.0
- Platform: Windows 10
- Python version: 3.7.0
- PyTorch version (GPU?): 1.3.1 (no GPU)
- Tensorflow version (GPU?): 2.1.0 (no GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3962/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3961/comments | https://api.github.com/repos/huggingface/transformers/issues/3961/events | https://github.com/huggingface/transformers/issues/3961 | 606,692,999 | MDU6SXNzdWU2MDY2OTI5OTk= | 3,961 | There are some warnings when I used AdamW and pytorch1.5. | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"same here",
"This is not a bug but because pytorch 1.5 displays warnings that it didn't displayed before.",
"It may be this file https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L96\r\n\r\nYou can modify the code on those `add`, `addcdiv`, `addcmul` and use `alpha` or `value` keyword argument to eliminate these warnings. (or maybe submit a PR if that works!)\r\n\r\nSee pytorch doc: https://pytorch.org/docs/stable/torch.html#torch.add\r\n\r\npytorch/pytorch#32861",
"Hi is anyone submitting any PR for this issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Running `AdamW` under PyTorch 1.5 emits deprecation warnings like the following:

```
pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of add_ is deprecated:
    add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
    add_(Tensor other, *, Number alpha)
```
"url": "https://api.github.com/repos/huggingface/transformers/issues/3961/reactions",
"total_count": 18,
"+1": 18,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3961/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3960/comments | https://api.github.com/repos/huggingface/transformers/issues/3960/events | https://github.com/huggingface/transformers/issues/3960 | 606,674,081 | MDU6SXNzdWU2MDY2NzQwODE= | 3,960 | Question regarding glue examples | {
"login": "Mahmedturk",
"id": 48975334,
"node_id": "MDQ6VXNlcjQ4OTc1MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48975334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mahmedturk",
"html_url": "https://github.com/Mahmedturk",
"followers_url": "https://api.github.com/users/Mahmedturk/followers",
"following_url": "https://api.github.com/users/Mahmedturk/following{/other_user}",
"gists_url": "https://api.github.com/users/Mahmedturk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mahmedturk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mahmedturk/subscriptions",
"organizations_url": "https://api.github.com/users/Mahmedturk/orgs",
"repos_url": "https://api.github.com/users/Mahmedturk/repos",
"events_url": "https://api.github.com/users/Mahmedturk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mahmedturk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't understand the question. The labels for the test sets for GLUE are privately-held, if that's what you're asking.\r\n\r\n(Closing this as it's not really specific to this repo)"
] | 1,587 | 1,587 | 1,587 | NONE | null | hi,
I have used run_glue.py on the QQP dataset and then tested the saved model on a small test set. I get a really high F1-score on the dev set but a lower F1-score on the test set. My question is: is the dev set used in the GLUE examples used to tune parameters, or is it a held-out set?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3959/comments | https://api.github.com/repos/huggingface/transformers/issues/3959/events | https://github.com/huggingface/transformers/issues/3959 | 606,668,571 | MDU6SXNzdWU2MDY2Njg1NzE= | 3,959 | AttributeError: 'LambdaLR' object has no attribute 'get_last_lr' | {
"login": "wasiahmad",
"id": 17520413,
"node_id": "MDQ6VXNlcjE3NTIwNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasiahmad",
"html_url": "https://github.com/wasiahmad",
"followers_url": "https://api.github.com/users/wasiahmad/followers",
"following_url": "https://api.github.com/users/wasiahmad/following{/other_user}",
"gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions",
"organizations_url": "https://api.github.com/users/wasiahmad/orgs",
"repos_url": "https://api.github.com/users/wasiahmad/repos",
"events_url": "https://api.github.com/users/wasiahmad/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasiahmad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I encountered the same problem, have you solved it?",
"Upgrading to PyTorch 1.4 would solve that issue :)\r\n\r\nCurrent `master` branch recommends that:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/setup.py#L70",
"Thank you so much, I have solved it.",
"`get_last_lr` is introduced in pytorch 1.4.0 . Maybe you need to upgrade your pytorch.",
"If you really are stuck with PyTorch <= 1.3 please feel free to open a PR to fix this with backward compatibility",
"I upgraded pytorch to v1.5.0, but run_glue.py with Bert model failed. So, you may have to set it to 1.4.0 "
] | 1,587 | 1,589 | 1,589 | NONE | null | I am trying to run the summarization example using BART and getting the following error.
```
tqdm_dict = {"loss": "{:.3f}".format(avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
AttributeError: 'LambdaLR' object has no attribute 'get_last_lr'
```
The error occurred at this [line](https://github.com/huggingface/transformers/blob/master/examples/transformer_base.py#L101). I am using PyTorch 1.3.
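
A hedged backward-compatibility sketch for PyTorch < 1.4, where `LambdaLR` has no `get_last_lr()`; falling back to `get_lr()` keeps the progress-bar line working (`self.lr_scheduler` and `avg_loss` are the names from the example script):

```
last_lr = (
    self.lr_scheduler.get_last_lr()[-1]
    if hasattr(self.lr_scheduler, "get_last_lr")
    else self.lr_scheduler.get_lr()[-1]
)
tqdm_dict = {"loss": "{:.3f}".format(avg_loss), "lr": last_lr}
```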
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3958/comments | https://api.github.com/repos/huggingface/transformers/issues/3958/events | https://github.com/huggingface/transformers/issues/3958 | 606,666,022 | MDU6SXNzdWU2MDY2NjYwMjI= | 3,958 | model.multiple_choice_head( ) function for Hugging Face GPT2 models | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Method's description says this\r\n\r\n\r\n _mc_token_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, num_choices)`, `optional`, default to index of the last token of the input)\r\n Index of the classification token in each input sequence.\r\n Selected in the range ``[0, input_ids.size(-1) - 1[``._\r\n \r\nSo, the default is the last token of the input."
] | 1,587 | 1,592 | 1,592 | NONE | null | Hello,
I just have a quick question about the ```multiple_choice_head( )``` function for the Hugging Face GPT2 models.
when I execute the codes below, everything runs smoothly:
```python
# get pre-trained GPT2Model and GPT2DoubleHeadsModel
model_gpt2 = GPT2Model.from_pretrained('gpt2', output_hidden_states = True)
model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained('gpt2')
# some parts of the codes are missing....
# trying to make a use of the gpt2DoubleHeadsModel.multiple_choice_head() function
hidden_states = model_gpt2(input_ids=input_ids, token_type_ids = token_type_ids)[2][1][:,:,:]
mc_logits = model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states).detach()
loss_fct = CrossEntropyLoss()
mc_loss = loss_fct(mc_logits.view(1, 4), torch.tensor([1]))  # target must be a LongTensor of class indices, not a plain int
```
Here, I noticed that I do not need to specify the ```mc_token_ids``` when executing the line ```model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states)```.
My question is, does the ```multiple_choice_head()``` function from the line ```model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states)``` automatically take the
**last token** of my input text sequence as the ```cls_token``` (classification token) that is used by the ```mc_head``` to predict answers for the multiple-choice questions?
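
For reference, an illustrative way to make that default explicit (this assumes `input_ids` has shape `(batch_size, num_choices, seq_len)` with no padding, and that `torch` is imported; it is a sketch, not the model's internal code):

```python
import torch

# mc_token_ids defaults to the index of the last token of each sequence;
# building it by hand makes the classification position visible.
mc_token_ids = torch.full(input_ids.shape[:-1], input_ids.size(-1) - 1, dtype=torch.long)
mc_logits = model_gpt2DoubleHeadsModel(input_ids=input_ids, mc_token_ids=mc_token_ids)[1]
```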
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3958/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3957/comments | https://api.github.com/repos/huggingface/transformers/issues/3957/events | https://github.com/huggingface/transformers/pull/3957 | 606,658,181 | MDExOlB1bGxSZXF1ZXN0NDA4ODI1MDMx | 3,957 | Allow the creation of "entity groups" for NerPipeline #3548 | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Was hoping this could get added to the next release! Would be very useful for me.",
"_Note for @LysandreJik and myself_: We need to rebase the PR before merging, I've made some changes to the way batching is handled on pipelines, I just want to be sure we're not missing anything through the unittests.\r\n\r\nOtherwise LGTM 👍 ",
"@mfuntowicz I've changed the `group` parameter to `grouped_entities` and have added corresponding unit tests for the \"grouped\" `NerPipeline` for both pytorch and tensorflow: `test_ner_grouped` and `test_tf_ner_grouped`.\r\n\r\nPlease let me know if that parameter name is acceptable or if you want me to change it!\r\n\r\nAlso, I noticed the tests failing but for tests outside the scope of this PR.",
"If you can rebase the PR against master, we can check the unittests (some failures are related to some recent changes we made) and then merge into master 👍 ",
"@mfuntowicz Rebased and got the tests to pass :smile:",
"Perfect, thanks for your contribution @enzoampil :)",
"Very welcome @mfuntowicz ! Looking forward to contributing more 🙂 ",
"Reminder that we also want to return char offsets here too ",
"@julien-c good point, can work on this in a separate PR over the weekend :)",
"@enzoampil That was more of an internal reminder for e.g. @mfuntowicz as this will be mostly built-in when we turn the flip on fast tokenizers",
"thank you @enzoampil ! this PR helped a lot for a downstream NER task. \r\n\r\nWould someone from hugging face (maybe @mfuntowicz) be able to update a README or document to reflect this change so that other folks can avoid some unnecessary work?",
"Yes, more doc would be awesome",
"For the cases of NER using BIO tagging. This algorithm of grouping entities will separate `B-Label` from `I-Label` which is not correct.",
"Could you provide an example where it fails so that we may adapt the algorithm? Thank you.",
"@LysandreJik \r\n\r\n`token_classifier = pipeline(\"ner\", model=model,aggregation_strategy='simple', tokenizer=tokenizer,grouped_entities=True)`\r\n\r\n```\r\n{'entity_group': 'B_name', 'score': 0.96226656, 'word': 'Pratik', 'start': 1141, 'end': 1149}\r\n{'entity_group': 'I_name', 'score': 0.9272271, 'word': 'kr', 'start': 1150, 'end': 1157}\r\n{'entity_group': 'L_name', 'score': 0.7290683, 'word': 'kumar', 'start': 1158, 'end': 1163}\r\n```\r\n\r\nIdeally, it should be grouped to just `name` ? How to achieve this?"
] | 1,587 | 1,665 | 1,589 | CONTRIBUTOR | null | ### This pull request applies the entity group transformation by setting the parameter: group=True.
This was done by reflecting the transformation inside NerPipeline. This is similar to a previously closed [PR](https://github.com/huggingface/transformers/pull/3607), which I closed because I accidentally deleted my fork (apologies for my clumsiness).
Details of what I want to be able to do can be found in issue #3548.
cc @julien-c @mfuntowicz @petulla
Sample code:
```
# Install branch
# Make sure to restart runtime after installing if using Google Colab
!pip install -e git+git://github.com/enzoampil/transformers.git@add_index_to_ner_pipeline#egg=transformers
# Grouped NER
from transformers import pipeline
nlp = pipeline('ner', grouped_entities=True)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity_group': 'I-PER', 'score': 0.9968132972717285, 'word': 'Enzo'},
# {'entity_group': 'I-ORG', 'score': 0.9970400333404541, 'word': 'Australian National University'},
# {'entity_group': 'I-ORG', 'score': 0.9831967651844025, 'word': 'AUN'}]
# Ungrouped NER
nlp = pipeline('ner', grouped_entities=False)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity': 'I-PER', 'index': 1, 'score': 0.9983270168304443, 'word': 'En'},
# {'entity': 'I-PER', 'index': 2, 'score': 0.9952995777130127, 'word': '##zo'},
# {'entity': 'I-ORG', 'index': 6, 'score': 0.9984350204467773, 'word': 'Australian'},
# {'entity': 'I-ORG', 'index': 7, 'score': 0.9967807531356812, 'word': 'National'},
# {'entity': 'I-ORG', 'index': 8, 'score': 0.9959043264389038, 'word': 'University'},
# {'entity': 'I-ORG', 'index': 10, 'score': 0.9900023937225342, 'word': 'AU'},
# {'entity': 'I-ORG', 'index': 11, 'score': 0.9763911366462708, 'word': '##N'}]
```
Tutorial on how to do Entity Grouping w/ NerPipeline [here](https://colab.research.google.com/drive/1CVLP0n3Q5t5qiWpode7jyhUNZpmLg0mS)
I'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3957/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3957/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3957",
"html_url": "https://github.com/huggingface/transformers/pull/3957",
"diff_url": "https://github.com/huggingface/transformers/pull/3957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3957.patch",
"merged_at": 1589700318000
} |
https://api.github.com/repos/huggingface/transformers/issues/3956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3956/comments | https://api.github.com/repos/huggingface/transformers/issues/3956/events | https://github.com/huggingface/transformers/issues/3956 | 606,611,082 | MDU6SXNzdWU2MDY2MTEwODI= | 3,956 | RuntimeError: Error(s) in loading state_dict for BertForTokenClassification | {
"login": "pz325",
"id": 538880,
"node_id": "MDQ6VXNlcjUzODg4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/538880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pz325",
"html_url": "https://github.com/pz325",
"followers_url": "https://api.github.com/users/pz325/followers",
"following_url": "https://api.github.com/users/pz325/following{/other_user}",
"gists_url": "https://api.github.com/users/pz325/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pz325/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pz325/subscriptions",
"organizations_url": "https://api.github.com/users/pz325/orgs",
"repos_url": "https://api.github.com/users/pz325/repos",
"events_url": "https://api.github.com/users/pz325/events{/privacy}",
"received_events_url": "https://api.github.com/users/pz325/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"An update: after upgrading to transformers 2.8.0, the reported issue is gone. \r\n\r\nNevertheless, could you share an explanation please? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Since about 1600 BST, 24 Apr 2020, loading ner pipeline gives RuntimeError:
```
from transformers import pipeline
nlp = pipeline("ner", ignore_labels=[])
```
```
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    nlp = pipeline("ner", ignore_labels=[])
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/pipelines.py", line 1091, in pipeline
    model = model_class.from_pretrained(model, config=config, **model_kwargs)
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1086, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/modeling_utils.py", line 558, in from_pretrained
    model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
    size mismatch for classifier.weight: copying a param with shape torch.Size([9, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).
    size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([2]).
```
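
A hedged workaround sketch while staying on 2.5.1: load the config, model and tokenizer explicitly from the same checkpoint so the 9-label head matches the weights (the checkpoint name below is assumed to be the `ner` pipeline's default):

```
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer, pipeline

name = "dbmdz/bert-large-cased-finetuned-conll03-english"
model = AutoModelForTokenClassification.from_pretrained(name, config=AutoConfig.from_pretrained(name))
tokenizer = AutoTokenizer.from_pretrained(name)
nlp = pipeline("ner", model=model, tokenizer=tokenizer, ignore_labels=[])
```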
- `transformers` version: 2.5.1
- Platform: macos 10.14.6
- Python version: 3.6.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3956/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3955/comments | https://api.github.com/repos/huggingface/transformers/issues/3955/events | https://github.com/huggingface/transformers/pull/3955 | 606,587,662 | MDExOlB1bGxSZXF1ZXN0NDA4NzY4MTQ2 | 3,955 | Fix #3954 - GPT2 is not traceable | {
"login": "jazzcook15",
"id": 37391310,
"node_id": "MDQ6VXNlcjM3MzkxMzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/37391310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jazzcook15",
"html_url": "https://github.com/jazzcook15",
"followers_url": "https://api.github.com/users/jazzcook15/followers",
"following_url": "https://api.github.com/users/jazzcook15/following{/other_user}",
"gists_url": "https://api.github.com/users/jazzcook15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jazzcook15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jazzcook15/subscriptions",
"organizations_url": "https://api.github.com/users/jazzcook15/orgs",
"repos_url": "https://api.github.com/users/jazzcook15/repos",
"events_url": "https://api.github.com/users/jazzcook15/events{/privacy}",
"received_events_url": "https://api.github.com/users/jazzcook15/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Need to remove the `math` import to pass the Flake test.",
"Just deleted the math import to make the code quality check happy - Thanks a lot for the PR @jazzcook15 and for linking the issue @minimaxir "
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | This PR replaces the `math.sqrt(...)` computation with `(...)**0.5` which allows the model to be correctly traced by `torch.jit.trace`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3955",
"html_url": "https://github.com/huggingface/transformers/pull/3955",
"diff_url": "https://github.com/huggingface/transformers/pull/3955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3955.patch",
"merged_at": 1588101537000
} |
https://api.github.com/repos/huggingface/transformers/issues/3954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3954/comments | https://api.github.com/repos/huggingface/transformers/issues/3954/events | https://github.com/huggingface/transformers/issues/3954 | 606,582,061 | MDU6SXNzdWU2MDY1ODIwNjE= | 3,954 | GPT2 is not fully torch.jit.trace-able | {
"login": "sberardi-apple",
"id": 49169196,
"node_id": "MDQ6VXNlcjQ5MTY5MTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/49169196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sberardi-apple",
"html_url": "https://github.com/sberardi-apple",
"followers_url": "https://api.github.com/users/sberardi-apple/followers",
"following_url": "https://api.github.com/users/sberardi-apple/following{/other_user}",
"gists_url": "https://api.github.com/users/sberardi-apple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sberardi-apple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sberardi-apple/subscriptions",
"organizations_url": "https://api.github.com/users/sberardi-apple/orgs",
"repos_url": "https://api.github.com/users/sberardi-apple/repos",
"events_url": "https://api.github.com/users/sberardi-apple/events{/privacy}",
"received_events_url": "https://api.github.com/users/sberardi-apple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"see #3955 for a fix",
"Hi,\r\nI have the same issue even after updating to transformers 4.1.0. The results from the traced model are poor when compared to the original model. Following are the warnings returned when I try to create a traced version of a fine-tuned GPT2 model(using torch.jit.trace()).\r\n\r\n`/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:168: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n w = w / (float(v.size(-1)) ** 0.5)\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:173: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n mask = self.bias[:, :, ns - nd : ns, :ns]\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:\r\nWith rtol=1e-05 and atol=1e-05, found 254695 element(s) (out of 27897075) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 5.1975250244140625e-05 (2.250950336456299 vs. 2.251002311706543), which occurred at index (0, 69, 37541).\r\n _module_class,`\r\n\r\n**Environment Info**\r\nPython: 3.6.9\r\nPyTorch: 1.7\r\ntransformers: 4.1.0\r\n\r\nAny suggestions would be of great help.\r\n\r\nThanks.",
"It looks like subsequent changes to `modeling_gpt2.py` have added an explicit `float` cast in that calculation, and I think that's now what is causing the trace to be incorrect.",
"Should we have \"slow\" tests on this to avoid a regression @LysandreJik @sgugger ?",
"> It looks like subsequent changes to `modeling_gpt2.py` have added an explicit `float` cast in that calculation, and I think that's now what is causing the trace to be incorrect.\r\n\r\nThanks for your response. Is there any workaround for this? @jazzcook15 "
] | 1,587 | 1,610 | 1,588 | NONE | null | # 🐛 Bug
## Information
I'm trying to use PyTorch's tracing on a pre-trained GPT2 model and running into the following warning emitted from torch.jit.trace:
```
/opt/miniconda3/envs/py3/lib/python3.6/site-packages/transformers/modeling_gpt2.py:144: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / math.sqrt(v.size(-1))
```
Additionally, if I inspect the graph generated from the trace, I can see that the denominator in that division expression is a constant value, not one determined by the size of the tensor.
## To reproduce
The following snippet can be used to repro the problem (based on the example in the Quickstart guide)
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').eval()
example_input = torch.tensor([tokenizer.encode("The Manhattan bridge")])
traced_model = torch.jit.trace(model, example_input)
```
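
For reference, the follow-up fix (PR #3955) replaces the `math.sqrt(...)` call in `modeling_gpt2.py`'s attention with exponentiation, which removes the warning above; a sketch of the change (`w` and `v` are the attention tensors):

```python
w = w / (v.size(-1) ** 0.5)       # after the fix
# w = w / math.sqrt(v.size(-1))   # before: traced as a baked-in Python constant
```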
## Environment info
- `transformers` version: 2.8.0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3953/comments | https://api.github.com/repos/huggingface/transformers/issues/3953/events | https://github.com/huggingface/transformers/pull/3953 | 606,524,096 | MDExOlB1bGxSZXF1ZXN0NDA4NzE4Njk0 | 3,953 | Fix BERT example code for NSP and Multiple Choice | {
"login": "siboehm",
"id": 14908678,
"node_id": "MDQ6VXNlcjE0OTA4Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/14908678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siboehm",
"html_url": "https://github.com/siboehm",
"followers_url": "https://api.github.com/users/siboehm/followers",
"following_url": "https://api.github.com/users/siboehm/following{/other_user}",
"gists_url": "https://api.github.com/users/siboehm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siboehm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siboehm/subscriptions",
"organizations_url": "https://api.github.com/users/siboehm/orgs",
"repos_url": "https://api.github.com/users/siboehm/repos",
"events_url": "https://api.github.com/users/siboehm/events{/privacy}",
"received_events_url": "https://api.github.com/users/siboehm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=h1) Report\n> Merging [#3953](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b290c32e1617fa74f44ccd8b83365fe764437be9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3953 +/- ##\n=======================================\n Coverage 78.39% 78.40% \n=======================================\n Files 120 120 \n Lines 19932 19932 \n=======================================\n+ Hits 15626 15627 +1 \n+ Misses 4306 4305 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=footer). Last update [b290c32...47a8b4b](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=h1) Report\n> Merging [#3953](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0f06210646a440509efa718b30d18322d6a830&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3953 +/- ##\n==========================================\n+ Coverage 78.16% 78.40% +0.23% \n==========================================\n Files 120 120 \n Lines 20058 19932 -126 \n==========================================\n- Hits 15679 15627 -52 \n+ Misses 4379 4305 -74 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <ø> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `62.50% <0.00%> (-26.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `80.95% <0.00%> (-6.38%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <0.00%> (-2.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.60% <0.00%> (-0.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `36.25% <0.00%> (-0.79%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.22% <0.00%> (-0.57%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <0.00%> (-0.39%)` | :arrow_down: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=footer). Last update [3e0f062...fa93925](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,587 | 1,590 | 1,590 | CONTRIBUTOR | null | After #3790 was merged, I noticed that the examples for BERT NSP and Multiple Choice were wrong, too.
`token_type_id` wasn't being correctly set but that's necessary for the multi sequence tasks (NSP, Multiple Choice, Question Answering). So I changed it to use `encode_plus` / `batch_encode_plus` which takes care of that.
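For illustration, this is roughly the pattern the fixed examples rely on (a sketch, not the exact diff in this PR):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# encode_plus builds token_type_ids for sequence pairs automatically:
# 0 for the first sequence, 1 for the second, which the NSP and
# Multiple Choice heads need to tell the segments apart.
encoded = tokenizer.encode_plus("How old are you?", "I'm 6 years old.", return_tensors="pt")
print(encoded["token_type_ids"])
```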
Am I correct in assuming that for Next Sentence Prediction the linear classifier isn't initialized with pretrained weights and has to be trained first? I didn't find any mention of it in the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3953/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3953/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3953",
"html_url": "https://github.com/huggingface/transformers/pull/3953",
"diff_url": "https://github.com/huggingface/transformers/pull/3953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3953.patch",
"merged_at": 1590767756000
} |
https://api.github.com/repos/huggingface/transformers/issues/3952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3952/comments | https://api.github.com/repos/huggingface/transformers/issues/3952/events | https://github.com/huggingface/transformers/issues/3952 | 606,511,581 | MDU6SXNzdWU2MDY1MTE1ODE= | 3,952 | BertForSequenceClassification is not optimum | {
"login": "RodSernaPerez",
"id": 37450380,
"node_id": "MDQ6VXNlcjM3NDUwMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37450380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RodSernaPerez",
"html_url": "https://github.com/RodSernaPerez",
"followers_url": "https://api.github.com/users/RodSernaPerez/followers",
"following_url": "https://api.github.com/users/RodSernaPerez/following{/other_user}",
"gists_url": "https://api.github.com/users/RodSernaPerez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RodSernaPerez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RodSernaPerez/subscriptions",
"organizations_url": "https://api.github.com/users/RodSernaPerez/orgs",
"repos_url": "https://api.github.com/users/RodSernaPerez/repos",
"events_url": "https://api.github.com/users/RodSernaPerez/events{/privacy}",
"received_events_url": "https://api.github.com/users/RodSernaPerez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | In the documentation about TFBertModel it says:
```
hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):
tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
```
But then in the code of TFBertForSequenceClassification:
```
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False))
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
return outputs # logits, (hidden_states), (attentions)
```
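For comparison, a minimal sketch of the alternative I have in mind (variable names are my own; assuming a TF 2.x setup):
```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus("some example text", return_tensors="tf")
outputs = model(inputs)

pooled_output = outputs[1]           # tanh(dense(...)) applied to the [CLS] hidden state
cls_hidden_state = outputs[0][:, 0]  # raw final hidden state of the [CLS] token
```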
Wouldn't it give better results to use `pooled_output = outputs[0][:,0]` (the raw `[CLS]` hidden state) instead? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3951/comments | https://api.github.com/repos/huggingface/transformers/issues/3951/events | https://github.com/huggingface/transformers/issues/3951 | 606,484,661 | MDU6SXNzdWU2MDY0ODQ2NjE= | 3,951 | run_xnli doesn't execute | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | # 🐛 Bug
## Information
(related issue: #3950)
I am trying to run the `run_xnli.py` script from the [Examples documentation](https://huggingface.co/transformers/examples.html#xnli), but I am getting an error. I am trying to run the script without training.
## To reproduce
Steps to reproduce the behavior:
I follow the [documentation](https://huggingface.co/transformers/examples.html#xnli).
My args are the following:
```
export XNLI_DIR=/path/to/xnli
python run_xnli.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-uncased \
--language de \
--train_language en \
--do_eval \
--data_dir $XNLI_DIR \
--per_gpu_train_batch_size 32 \
--learning_rate 5e-5 \
--num_train_epochs 1.0 \
--max_seq_length 128 \
--output_dir bert-base-multilingual-uncased \
--save_steps -1
```
I get the following error:
```
Traceback (most recent call last):
File "run_xnli.py", line 646, in <module>
main()
File "run_xnli.py", line 638, in main
result = evaluate(args, model, tokenizer, prefix=prefix)
File "run_xnli.py", line 290, in evaluate
outputs = model(**inputs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1139, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2115, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 2 is out of bounds.
```
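For what it's worth, `Target 2 is out of bounds` suggests the classification head was built with the default two labels, while XNLI has three classes (contradiction, entailment, neutral). A sketch of what I would expect to be needed (not verified):
```python
from transformers import BertConfig, BertForSequenceClassification

# XNLI is a 3-way classification task, so the head needs 3 output units.
config = BertConfig.from_pretrained("bert-base-multilingual-uncased", num_labels=3)
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased", config=config
)
```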
## Expected behavior
I am not sure, I'm afraid; I expect some sort of accuracy measure, as shown in the example.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3951/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3950/comments | https://api.github.com/repos/huggingface/transformers/issues/3950/events | https://github.com/huggingface/transformers/issues/3950 | 606,470,697 | MDU6SXNzdWU2MDY0NzA2OTc= | 3,950 | In run_xnli.py, output_dir seems to be used in place of tokenizer_name | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Could you show us with which arguments to do you run the script?",
"Hi!\r\n\r\nI am using the arguments as found in the Doc Example, except it is the uncased MultiBert, and the epochs are set to 1.0.\r\n\r\n```\r\nexport XNLI_DIR=/path/to/XNLI\r\n\r\npython run_xnli.py \\\r\n --model_type bert \\\r\n --model_name_or_path bert-base-multilingual-uncased \\\r\n --language de \\\r\n --train_language en \\\r\n --do_train \\\r\n --do_eval \\\r\n --data_dir $XNLI_DIR \\\r\n --per_gpu_train_batch_size 32 \\\r\n --learning_rate 5e-5 \\\r\n --num_train_epochs 1.0 \\\r\n --max_seq_length 128 \\\r\n --output_dir /tmp/debug_xnli/ \\\r\n --save_steps -1\r\n```",
"To be honest, someone should rewrite this script according to #3800 \r\n\r\nIn case you want to do it @antmarakis (we can help) 😊",
"Hi!\r\n\r\nI am not sure if I could rewrite it right now, but I can make a start and others can then take it from there? I have already fixed the issue proposed here, I will get on refactoring the script next week if nobody else picks this up :).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
## Information
I am trying to run the `run_xnli` example as found in the documentation. Unfortunately, I get a strange error where the script thinks the `output_dir` argument contains a model name.
It seems that `output_dir` has been used in place of `tokenizer_name` in some instances, such as this: `tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)`
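Presumably the evaluation path should reload the tokenizer from the model checkpoint instead; a guess at the intended fix (untested, as a drop-in replacement for the quoted line):
```python
# Hypothetical fix: fall back to the model path when no fine-tuned
# tokenizer was saved to output_dir (e.g. when running with --do_eval only).
tokenizer = tokenizer_class.from_pretrained(
    args.model_name_or_path, do_lower_case=args.do_lower_case
)
```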
## To reproduce
Steps to reproduce the behavior:
1. Follow the example as found here: https://huggingface.co/transformers/examples.html#xnli
I get the following error:
```
Traceback (most recent call last):
File "run_xnli.py", line 646, in <module>
main()
File "run_xnli.py", line 624, in main
tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 868, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 971, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name '/tmp/debug_xnli/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '/tmp/debug_xnli/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3949/comments | https://api.github.com/repos/huggingface/transformers/issues/3949/events | https://github.com/huggingface/transformers/issues/3949 | 606,469,414 | MDU6SXNzdWU2MDY0Njk0MTQ= | 3,949 | After reading the tutorial I can use the BertModel to extract word embedding but how to use it extract the sentence embedding? | {
"login": "leopardv10",
"id": 37171521,
"node_id": "MDQ6VXNlcjM3MTcxNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/37171521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leopardv10",
"html_url": "https://github.com/leopardv10",
"followers_url": "https://api.github.com/users/leopardv10/followers",
"following_url": "https://api.github.com/users/leopardv10/following{/other_user}",
"gists_url": "https://api.github.com/users/leopardv10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leopardv10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leopardv10/subscriptions",
"organizations_url": "https://api.github.com/users/leopardv10/orgs",
"repos_url": "https://api.github.com/users/leopardv10/repos",
"events_url": "https://api.github.com/users/leopardv10/events{/privacy}",
"received_events_url": "https://api.github.com/users/leopardv10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [] | 1,587 | 1,593 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3949/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3948/comments | https://api.github.com/repos/huggingface/transformers/issues/3948/events | https://github.com/huggingface/transformers/pull/3948 | 606,458,671 | MDExOlB1bGxSZXF1ZXN0NDA4NjY1OTA2 | 3,948 | Add Type Hints to modeling_utils.py Closes #3911 | {
"login": "bglearning",
"id": 4636315,
"node_id": "MDQ6VXNlcjQ2MzYzMTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4636315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bglearning",
"html_url": "https://github.com/bglearning",
"followers_url": "https://api.github.com/users/bglearning/followers",
"following_url": "https://api.github.com/users/bglearning/following{/other_user}",
"gists_url": "https://api.github.com/users/bglearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bglearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bglearning/subscriptions",
"organizations_url": "https://api.github.com/users/bglearning/orgs",
"repos_url": "https://api.github.com/users/bglearning/repos",
"events_url": "https://api.github.com/users/bglearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/bglearning/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is good for merge, no? @julien-c ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=h1) Report\n> Merging [#3948](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b75aa9fa55bee577e2c7403301ed31103125a35&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3948 +/- ##\n==========================================\n- Coverage 78.39% 78.38% -0.02% \n==========================================\n Files 120 120 \n Lines 19925 19925 \n==========================================\n- Hits 15620 15618 -2 \n- Misses 4305 4307 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <100.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=footer). Last update [7b75aa9...99848e8](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @bglearning!"
] | 1,587 | 1,590 | 1,590 | CONTRIBUTOR | null | Add Type Hints to methods in `modeling_utils.py`
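For a sense of the style of hints added (an illustrative signature, not necessarily the exact diff):
```python
from typing import Optional

import torch


class PreTrainedModel:
    # Illustrative only; the real method lives in modeling_utils.py.
    def resize_token_embeddings(self, new_num_tokens: Optional[int] = None) -> torch.nn.Embedding:
        ...
```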
Note: the coverage isn't 100%; I mostly skipped internal methods (and some I wasn't sure of). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3948",
"html_url": "https://github.com/huggingface/transformers/pull/3948",
"diff_url": "https://github.com/huggingface/transformers/pull/3948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3948.patch",
"merged_at": 1590189023000
} |
https://api.github.com/repos/huggingface/transformers/issues/3947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3947/comments | https://api.github.com/repos/huggingface/transformers/issues/3947/events | https://github.com/huggingface/transformers/issues/3947 | 606,424,118 | MDU6SXNzdWU2MDY0MjQxMTg= | 3,947 | Many tests fails with PyTorch 1.5.0 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,587 | 1,588 | 1,588 | MEMBER | null | Several (56) tests fail on the newly released PyTorch 1.5.0. This is because the normal distribution can no longer accept a standard deviation of 0.
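A minimal reproduction of the failure mode (my guess at the offending pattern; the exact error message may differ):
```python
import torch

weight = torch.empty(2, 2)
# On PyTorch 1.5.0 this raises (std must be strictly positive), whereas
# earlier releases silently filled the tensor with the mean.
torch.nn.init.normal_(weight, mean=0.0, std=0.0)
```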
These errors should not reflect real-life usage, and, therefore, should not impact user experience too much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3946/comments | https://api.github.com/repos/huggingface/transformers/issues/3946/events | https://github.com/huggingface/transformers/issues/3946 | 606,360,701 | MDU6SXNzdWU2MDYzNjA3MDE= | 3,946 | ImportError: cannot import name 'DefaultDataCollator' | {
"login": "shngt",
"id": 20009551,
"node_id": "MDQ6VXNlcjIwMDA5NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shngt",
"html_url": "https://github.com/shngt",
"followers_url": "https://api.github.com/users/shngt/followers",
"following_url": "https://api.github.com/users/shngt/following{/other_user}",
"gists_url": "https://api.github.com/users/shngt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shngt/subscriptions",
"organizations_url": "https://api.github.com/users/shngt/orgs",
"repos_url": "https://api.github.com/users/shngt/repos",
"events_url": "https://api.github.com/users/shngt/events{/privacy}",
"received_events_url": "https://api.github.com/users/shngt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Try reinstalling using \r\n```pip install git+https://github.com/huggingface/transformers```",
"That didn't work, sorry",
"Try uninstalling before or starting again from a clean venv",
"Have the same issue, did force install; uninstall & install, same problem. If I even search the https://github.com/huggingface/transformers repo I dont see any DefaultDataCollator definitions.",
"It looks like this PR https://github.com/huggingface/transformers/pull/5015 removed DefaultDataCollator .... and this is backward compat issue\r\n",
"[This HuggingFace Tutorial](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section3_tf.ipynb) has this error.",
"It looks like the way to fix this is to now use `from transformers.data.data_collator import tf_default_data_collator`. Is there any way to update this in the tutorial?",
"I got an ImportError using `from transformers.data.data_collator import tf_default_data_collator`.\r\nTo me, the problem seems to happen only when TPUs are turned on. \r\nWhen I try to run on CPU or GPU, I can access and use `DefaultDataCollator` without any problem, but TPUs always run to this error.",
"you can open transformers/data/data_collator.py,you can find 'tf_default_data_collator' is not exsit,then you can find function 'default_data_collator'\r\n\r\nso you can replace by this code:\r\n\r\n```python\r\nfrom transformers.data.data_collator import default_data_collator\r\n```\r\n\r\n"
] | 1,587 | 1,660 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Using "from transformers import DefaultDataCollator" raises an ImportError on my system. Unsure if this is really a bug or if I'm doing something stupid. Any help would be appreciated.
Model I am using (Bert, XLNet ...): N/A
Language I am using the model on (English, Chinese ...): N/A
## To reproduce
Steps to reproduce the behavior:
1. from transformers import DefaultDataCollator
## Expected behavior
For the import to go through cleanly.
## Environment info
- `transformers` version: 2.8.0
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0, No
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: -
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3946/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3945/comments | https://api.github.com/repos/huggingface/transformers/issues/3945/events | https://github.com/huggingface/transformers/issues/3945 | 606,351,639 | MDU6SXNzdWU2MDYzNTE2Mzk= | 3,945 | BertConfig.to_json_file('config.json') saves "num_labels" as "_num_labels" on Google Colab | {
"login": "gontcharovd",
"id": 49554133,
"node_id": "MDQ6VXNlcjQ5NTU0MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49554133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gontcharovd",
"html_url": "https://github.com/gontcharovd",
"followers_url": "https://api.github.com/users/gontcharovd/followers",
"following_url": "https://api.github.com/users/gontcharovd/following{/other_user}",
"gists_url": "https://api.github.com/users/gontcharovd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gontcharovd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gontcharovd/subscriptions",
"organizations_url": "https://api.github.com/users/gontcharovd/orgs",
"repos_url": "https://api.github.com/users/gontcharovd/repos",
"events_url": "https://api.github.com/users/gontcharovd/events{/privacy}",
"received_events_url": "https://api.github.com/users/gontcharovd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello, I believe there's a mismatch between the versions on your computer and on Google colab. When running on master, saving the file as you did:\r\n\r\n```py\r\nfrom transformers import BertConfig\r\n\r\nNUM_LABELS = 3\r\nconfig = BertConfig.from_pretrained('bert-base-german-cased')\r\nconfig.num_labels = NUM_LABELS\r\nconfig.to_json_file('config.json')\r\n```\r\n\r\nwhen reloading it in the same environment and printing the number of labels:\r\n```py\r\nfrom transformers import BertConfig\r\n\r\nconfig = BertConfig.from_pretrained('config.json')\r\nprint(config.num_labels) # 3\r\n\r\nconfig = BertConfig.from_json_file(\"config.json\")\r\nprint(config.num_labels) # 3\r\n```\r\n\r\nThis happens since https://github.com/huggingface/transformers/pull/3147, which is necessary in order for the correct `id2label` to be instantiated. Please let me know if updating the version on your local machine does not solve the problem.",
"Hello, the transformers version on my computer was 2.1.1 and on Google Colab 2.8.0. Upgrading to the newest version solved the issue for me. Thanks! "
] | 1,587 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
There is a problem with config.to_json_file() on Google Colab.
## Information
The "num_labels" key pair is saved as "_num_labels" key in the config.json file produced by BertConfig.to_json_file('config.json').
When this file is subsequently read with BertConfig.from_json_file('config.json'), the "_num_labels" key is loaded along with a new "num_labels" key set to the default value of 2.
This results in an improper configuration of a model TFBertForSequenceClassification.from_pretrained().
Model I am using (Bert, XLNet ...): TFBertForSequenceClassification
Language I am using the model on (English, Chinese ...): bert-base-german-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
1. Modify the number of labels of the default bert-base-german-cased config and save it to a file:
``` python
from transformers import BertConfig, TFBertForSequenceClassification
NUM_LABELS = 3
config = BertConfig.from_pretrained('bert-base-german-cased')
config.num_labels = NUM_LABELS
config.to_json_file('config.json')
```
This is what config.json looks like; notice the "_num_labels" key:
```json
{
"_num_labels": 3,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000
}
```
2. read the saved config
``` python
config = BertConfig.from_json_file('config.json')
```
this is what the loaded config.json looks like:
```json
{
"_num_labels": 3,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000
}
```
3. The loaded config improperly configures the number of labels of a model:
```python
model = TFBertForSequenceClassification.from_pretrained(
'./model/bert_de/weights.h5', config=config
)
```
```
>>> model = TFBertForSequenceClassification.from_pretrained(
... './model/bert_de/weights.h5', config=config
... )
2020-04-24 16:20:44.590400: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 274, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 234, in load_weights
return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1220, in load_weights
f, self.layers, skip_mismatch=skip_mismatch)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 777, in load_weights_from_hdf5_group_by_name
str(weight_values[i].shape) + '.')
ValueError: Layer #2 (named "classifier"), weight <tf.Variable 'tf_bert_for_sequence_classification/classifier/kernel:0' shape=(768, 2) dtype=float32, numpy=
array([[ 0.00399719, 0.01253725],
[ 0.00453608, -0.00098394],
[ 0.01605183, -0.02316079],
...,
[-0.0174976 , 0.00032987],
[-0.01292989, 0.00867058],
[-0.02766422, -0.00422863]], dtype=float32)> has shape (768, 2), but the saved weight has shape (768, 3).
```
I load the model with saved weights after transfer learning starting from bert-base-german-cased.
The weights were saved by ModelCheckpoint:
```python
mc = ModelCheckpoint('./model/bert_de/weights.h5', monitor='val_loss', mode='min',
verbose=1, save_best_only=True, save_weights_only=True)
```
## Expected behavior
The correct number of labels = 3 should be read from the config.json file and not the default 2.
```python
model = TFBertForSequenceClassification.from_pretrained(
    './model/bert_de/weights.h5', config=config
)
```
A hack for this problem is to specify the num_labels again after reading config.json:
```python
config = BertConfig.from_json_file('config.json')
config.num_labels = NUM_LABELS
model = TFBertForSequenceClassification.from_pretrained(
'./model/bert_de/weights.h5', config=config
)
```
## Environment info
This happens on Google Colab. On my local machine I don't get the key "_num_labels". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3945/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3944/comments | https://api.github.com/repos/huggingface/transformers/issues/3944/events | https://github.com/huggingface/transformers/issues/3944 | 606,307,128 | MDU6SXNzdWU2MDYzMDcxMjg= | 3,944 | Trainer: distributed eval | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi Julian, would you please provide more information on this issue?",
"In our Trainer, in `torch.distributed` mode, eval/predict are currently running on a single node, even though there's no fundamental reason for this to be the case.\r\n\r\nSupporting would probably require writing a new instance of torch's [DistributedSampler](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py) maybe named `SequentialDistributedSampler` and then re-aggregating the inference results on a single node to perform the metrics computation.",
"Hey @julien-c! I'm new here and would like to work on this. Doesn't seem like anyone else is currently working on this.",
"Go for it @abhavk!",
"Hey @julien-c ! I've given some thought to what needs to be done after looking at the documentation and I see two ways to implement this - \r\n\r\n1. Use an argument is_distributed in eval and predict functions and set that to true while calling from training loop in distributed setting. This does not mandate changes to the training loop to preserve current functionality (but the loop will need to be changed for actual implementation of distributed evaluation in the training loop).\r\n\r\n2. Do the is_distributed check within the eval and predict functions. This will mandate changes to the training loop in order to preserve current functionality. \r\n\r\nI'm personally leaning towards option 2 because that allows no explicit checks for distributed setting in the training loop - which seems cleaner, but there are still questions around how the training loop should change. \r\n\r\nIn either case, for the rest of the implementation my plan is to:\r\n\r\n1. (as you have suggested) implement a slightly different version of the DistributedSampler (which does not repeat any indices while sampling over the dataset) and use that in the eval/predict functions. \r\n\r\n2. After that call the prediction loop to calculate the outputs and use a function like dist.gather on the rank 0 node to collect outputs. I feel like this aggregating may need to be called within the prediction loop in order to allow for computation of custom metrics (which probably needs to be done on a single node), but correct me if I'm wrong here. \r\n\r\nThis is based on my current understanding so would be happy to reconsider the approach or implementation details. Thanks!",
"@abhavk I've written distributed eval in my own code and AFAIK it is not required to modify the sampler. The point of a distributed sampler in DDP is that each process (sampler + its dataloader) only receives a portion of the data. It is not the case that all processes process the same data!\r\n\r\nAlso see [the documentation](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler):\r\n\r\n> Sampler that restricts data loading to a subset of the dataset.\r\n> \r\n> It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.\r\n\r\nThat should be fine. What should be done, though is that to average well, all results for a given epoch (or step) should be saved (in memory) by each individual process. then, at the end of evaluation, all results should be gathered to a single process which can calculate the metrics over all results. So basically: you do the forward pass distributed, keep all predictions, and to average them you gather them on one GPU to do averaging/other computations. \r\n\r\nSo I think that the second point is indeed correct, but the first one is not necessary AFAIK.\r\n\r\nIf you need help or want to discuss it, let me know and mention me in this topic!",
"@BramVanroy Thanks for that input. I have gone through the documentation and implementation for DistributedSampler, and from [the code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py) I can see that when the number of datapoints in the dataset is not divisible by the number of processes, it is adding extra samples by repeating the first few indices in order to make it evenly divisible. So for example a dataset with 5 datapoints and being distributed across 2 processes, will actually have 6 samples and that will be done by repeating the first sample (if the data is not shuffled, otherwise a different random sample) in both processes.\r\n\r\nWhile this may be fine for training, I don't think this is desirable/okay during the evaluation/prediction phase, but do let me know if that understanding is not correct (Tagging @julien-c here too). That is primarily why I was suggesting writing a different implementation for the Distributed Sampler, where even though some nodes might have one fewer datum, there is no repetition. ",
"Actually now that I think about it, even if it is not ideal to repeat eval samples, the difference may not be worth writing a different sampler implementation, since if someone need to use distributed processing, it may be safe to assume that they have a fairly large eval dataset where the number of samples is >> number of processes. ",
"> Actually now that I think about it, even if it is not ideal to repeat eval samples, the difference may not be worth writing a different sampler implementation, since if someone need to use distributed processing, it may be safe to assume that they have a fairly large eval dataset where the number of samples is >> number of processes.\r\n\r\nExactly. In addition, the DataLoaders's \"drop_last\" might be of help here, though I am not entirely sure how it would integrate with a distributed sampler.",
"I don't really agree with this. For evaluation results to be comparable you pretty much need to have the inference results for exactly all the samples.\r\n\r\nAlso we use the same code for eval and for prediction (the loop is called `_prediction_loop()`), and in the case of prediction, e.g. when predicting on the GLUE hidden test dataset you need to output all the predictions in order.\r\n\r\nYou probably could do this using torch's DistributedSampler with some hacks (adding dummy samples to be divisible by number of nodes and dropping them afterwards) and knowledge of its internal implementation (how to reorder samples at the end) but it's probably way more explicit to just implement a dedicated one. (torch's DistributedSampler is very few lines of code anyways)\r\n\r\nI might take a stab at this today and will ping you here.\r\n\r\n**Update**: edited slightly to separate eval and prediction cases",
"Yes, to be able to compare studies, that is true. For production-oriented systems and large eval datasets that does not matter as much.\r\n\r\nIdea, subclass the DistributedSampler and add a property `self.dummy_idxs: List[List[int]]`. The __iter__ method of DistributedSampler can stay the same with the only addition that we keep track of the indices that are larger than the length of self.dataset for each batch. self.dummy_idxs then contains a list of a list for each batch that contains the position in that batch where an item was a dummy item. \r\n\r\nI don't have time to work on this, though, but happy to think along.",
"Sure @julien-c. I've taken a first stab and this is really quite an easy fix - `total_size` in the `__init__` function just changes to the length of the dataset and the below piece of code within the `iter` function should be removed (with minor changes to the `num_samples` variable) - \r\n` # add extra samples to make it evenly divisible`\r\n`indices += indices[:(self.total_size - len(indices))]`\r\n`assert len(indices) == self.total_size` \r\n\r\nIf this solution looks fine, can you tell me the proper way to add this to the repository when making the update - as in which folder would this piece go in (is it /utils)? ",
"I took a stab at it in the PR above but would love if you could review and/or test it.",
"Added my comments @julien-c ",
"Closed by #4243"
] | 1,587 | 1,590 | 1,590 | MEMBER | null | Tagging this as a Good First issue if anyone's interested. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3944/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3943/comments | https://api.github.com/repos/huggingface/transformers/issues/3943/events | https://github.com/huggingface/transformers/issues/3943 | 606,270,838 | MDU6SXNzdWU2MDYyNzA4Mzg= | 3,943 | How to input hidden state vectors from GPT2Model directly into mc_head of the GPT2DoubleHeads Model? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | NONE | null | Hello,
I want to perform the following task:
**1.** _Feed in a text sequence to a GPT2Model (the GPT-2 model without output head);_
**2.** _Extract the hidden state vectors of the text sequence that are generated at each layer of the GPT2Model;_
**3.** _Input the hidden state vectors from each layer of the GPT2Model directly into the mc_head of the GPT2DoubleHeadsModel, and calculate the **mc_loss** that results from inputting the hidden state vectors._
I know that the code below can be used to feed hidden state vectors into each individual layer of a HuggingFace GPT-2 model, but I am not sure whether the same can be done with the mc_head of the GPT2DoubleHeadsModel:
```python
model.transformer.h[i](embed)
```
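For concreteness, this is roughly what I would like to do (a sketch — I am assuming the head is exposed as `multiple_choice_head` and accepts `(hidden_states, cls_index)` like a `SequenceSummary`, which I have not verified):

```python
import torch
from transformers import GPT2DoubleHeadsModel

model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

batch, num_choices, seq_len = 1, 2, 5
# Stand-in for hidden states extracted from a layer of a separate GPT2Model:
hidden_states = torch.randn(batch, num_choices, seq_len, model.config.n_embd)
# Position of the classification token within each choice:
mc_token_ids = torch.full((batch, num_choices), seq_len - 1, dtype=torch.long)
mc_labels = torch.tensor([0])  # index of the correct choice

mc_logits = model.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1)
mc_loss = torch.nn.CrossEntropyLoss()(mc_logits, mc_labels)
```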
My question is:
Is there any way that I can input the hidden state vectors from each layer of the GPT2Model directly into the multiple choice head of the GPT2DoubleHeadsModel, and extract the **mc_loss** that results from it (i.e. is there any way that I can perform task **3**)?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3942/comments | https://api.github.com/repos/huggingface/transformers/issues/3942/events | https://github.com/huggingface/transformers/issues/3942 | 606,192,832 | MDU6SXNzdWU2MDYxOTI4MzI= | 3,942 | XLNetLMHeadModel: target mapping with num_predict > 1 and labels not working | {
"login": "hannodje",
"id": 26486603,
"node_id": "MDQ6VXNlcjI2NDg2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/26486603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannodje",
"html_url": "https://github.com/hannodje",
"followers_url": "https://api.github.com/users/hannodje/followers",
"following_url": "https://api.github.com/users/hannodje/following{/other_user}",
"gists_url": "https://api.github.com/users/hannodje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannodje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannodje/subscriptions",
"organizations_url": "https://api.github.com/users/hannodje/orgs",
"repos_url": "https://api.github.com/users/hannodje/repos",
"events_url": "https://api.github.com/users/hannodje/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannodje/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @hannodje,\r\n\r\nsorry to answer so late. Could you provide a complete code sample that can reproduce the error? One in which you set define all the variables `input`, `attention_mask`, `perm_mask`, `target_mapping`, `sequence_ids` and `target` and which defines which XLNetLMHeadModel you use. Ideally, I can copy / paste the code sample in a script without having to add code of my own to reproduce the error :-) It's kinda hard to reproduce the error otherwise. Thanks!"
] | 1,587 | 1,591 | 1,591 | NONE | null | Hi,
While trying to fine tune the XLNetLMHeadModel, I ran into the following error message:
```
  File "[...]/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "[...]/lib/python3.8/site-packages/transformers/modeling_xlnet.py", line 1062, in forward
    loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
while calling the forward function like this:
```python
outputs = xlnet(input,                          # shape = (8, 50)
                attention_mask=attention_mask,  # shape = (8, 50)
                perm_mask=perm_mask,            # shape = (8, 50, 50)
                target_mapping=target_mapping,  # shape = (8, 2, 50)
                token_type_ids=sequence_ids,    # shape = (8, 50)
                labels=target)                  # shape = (8, 2)
```
According to the [docs](https://huggingface.co/transformers/model_doc/xlnet.html#transformers.XLNetLMHeadModel) the labels shape is expected to be (batch_size, num_predict). However, using this shape causes the above-mentioned error.
The error does not appear if we set num_predict to 1 and set labels.shape = torch.Size([8]).
The error also does not appear if we omit the labels and compute the loss manually — for example, a sketch using `.reshape(...)` as the error message suggests:
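```python
import torch

# with `labels=target` omitted from the forward call above, outputs[0] are the logits
logits = outputs[0]  # shape = (8, 2, vocab_size)
loss_fct = torch.nn.CrossEntropyLoss()
loss = loss_fct(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
``` | {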
"url": "https://api.github.com/repos/huggingface/transformers/issues/3942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3941/comments | https://api.github.com/repos/huggingface/transformers/issues/3941/events | https://github.com/huggingface/transformers/issues/3941 | 606,093,836 | MDU6SXNzdWU2MDYwOTM4MzY= | 3,941 | Decoding output sequences from TFGPT2Model | {
"login": "xeb",
"id": 7634,
"node_id": "MDQ6VXNlcjc2MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xeb",
"html_url": "https://github.com/xeb",
"followers_url": "https://api.github.com/users/xeb/followers",
"following_url": "https://api.github.com/users/xeb/following{/other_user}",
"gists_url": "https://api.github.com/users/xeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xeb/subscriptions",
"organizations_url": "https://api.github.com/users/xeb/orgs",
"repos_url": "https://api.github.com/users/xeb/repos",
"events_url": "https://api.github.com/users/xeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/xeb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @xeb, sorry to reply so late. I think you are using the wrong model. Instead of using `TFGPT2Model` you should use GPT2 with a LM Head on top `GPT2LMHeadModel`. The code should work then :-) ",
"@patrickvonplaten could you please specify why he should GPT2LMHeadModel model instead of TFGPT2 ? I think a little explanation will help others who encounter the same error as me. Thank you. ",
"Head probably meant `TFGPT2LMHeadModel` 😅 ",
"Hahaha, you are right but I think he pointed in right direction as far as I know, I can be wrong as well because I am a noob in this field. But he also did not give justification. "
] | 1,587 | 1,692 | 1,589 | CONTRIBUTOR | null | # ❓ Questions & Help
I asked this on SO without any luck.
https://stackoverflow.com/questions/61222878/how-can-you-decode-output-sequences-from-tfgpt2model
## Details
I'm trying to get generated text from the TFGPT2Model in the Transformers library. I can see the output tensor, but I'm not able to decode it. Is the tokenizer not compatible with the TF model for decoding?
```python
import tensorflow as tf
from transformers import (
    TFGPT2Model,
    GPT2Tokenizer,
    GPT2Config,
)

model_name = "gpt2-medium"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = TFGPT2Model.from_pretrained(model_name, config=config)

input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute",
                                         add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
print(outputs[0])

result = tokenizer.decode(outputs[0])
print(result)
```
The output is:
```
$ python run_tf_gpt2.py
2020-04-16 23:43:11.753181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-04-16 23:43:11.777487: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-04-16 23:43:27.617982: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-04-16 23:43:27.693316: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-16 23:43:27.824075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA n
ode, so returning NUMA node zero
...
...
2020-04-16 23:43:38.149860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10565 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:25:00.0, compute capability: 3.7)
2020-04-16 23:43:38.150217: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-16 23:43:38.150913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10565 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:26:00.0, compute capability: 3.7)
2020-04-16 23:43:44.438587: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
tf.Tensor(
[[[ 0.671073 0.60760975 -0.10744217 ... -0.51132596 -0.3369941
0.23458953]
[ 0.6403012 0.00396247 0.7443729 ... 0.2058892 -0.43869907
0.2180479 ]
[ 0.5131284 -0.35192695 0.12285632 ... -0.30060387 -1.0279727
0.13515341]
[ 0.3083361 -0.05588413 1.0543617 ... -0.11589152 -1.0487361
0.05204075]
[ 0.70787597 -0.40516227 0.4160383 ... 0.44217822 -0.34975922
0.02535546]
[-0.03940453 -0.1243843 0.40204537 ... 0.04586177 -0.48230025
0.5768887 ]]], shape=(1, 6, 1024), dtype=float32)
Traceback (most recent call last):
File "run_tf_gpt2.py", line 19, in <module>
result = tokenizer.decode(outputs[0])
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1605, in decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1575, in convert_ids_to_tokens
index = int(index)
File "/home/.../venv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 853, in __int__
return int(self._numpy())
TypeError: only size-1 arrays can be converted to Python scalars
```
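For reference, a sketch of the approach suggested in the replies — use the LM-head variant so the model produces token logits, and decode the ids returned by `generate` (assuming `generate` is available for TF models in this version):

```python
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = TFGPT2LMHeadModel.from_pretrained("gpt2-medium")

input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="tf")
output_ids = model.generate(input_ids, max_length=30)
print(tokenizer.decode(output_ids[0].numpy().tolist(), skip_special_tokens=True))
``` | {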
"url": "https://api.github.com/repos/huggingface/transformers/issues/3941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3940/comments | https://api.github.com/repos/huggingface/transformers/issues/3940/events | https://github.com/huggingface/transformers/pull/3940 | 606,065,407 | MDExOlB1bGxSZXF1ZXN0NDA4MzQ5NTU5 | 3,940 | fix resize_token_embeddings to accept padding_idx for xlm-roberta models | {
"login": "Soonhwan-Kwon",
"id": 7395166,
"node_id": "MDQ6VXNlcjczOTUxNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7395166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Soonhwan-Kwon",
"html_url": "https://github.com/Soonhwan-Kwon",
"followers_url": "https://api.github.com/users/Soonhwan-Kwon/followers",
"following_url": "https://api.github.com/users/Soonhwan-Kwon/following{/other_user}",
"gists_url": "https://api.github.com/users/Soonhwan-Kwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Soonhwan-Kwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Soonhwan-Kwon/subscriptions",
"organizations_url": "https://api.github.com/users/Soonhwan-Kwon/orgs",
"repos_url": "https://api.github.com/users/Soonhwan-Kwon/repos",
"events_url": "https://api.github.com/users/Soonhwan-Kwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/Soonhwan-Kwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | When calling resize_token_embeddings there is no option to specify a padding_idx,
which leads to a wrong embedding for XLM-RoBERTa models,
whose embeddings need padding_idx to be 1, not 0.
I fixed the issue by adding an option for padding_idx, treating padding_idx as None (the previous default behavior) for compatibility with other types of transformers models. Roughly, the idea is the following (a sketch of the approach, not the exact diff of this PR):
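```python
import torch.nn as nn

def get_resized_embeddings(old: nn.Embedding, new_num_tokens: int,
                           padding_idx=None) -> nn.Embedding:
    # Build the new embedding with the caller-supplied padding index
    # (XLM-RoBERTa needs padding_idx=1; None keeps the old behavior).
    new = nn.Embedding(new_num_tokens, old.embedding_dim, padding_idx=padding_idx)
    new.to(old.weight.device)
    # Copy over the weights shared by the old and new vocabulary sizes.
    n = min(old.num_embeddings, new_num_tokens)
    new.weight.data[:n, :] = old.weight.data[:n, :]
    return new
``` | {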
"url": "https://api.github.com/repos/huggingface/transformers/issues/3940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3940",
"html_url": "https://github.com/huggingface/transformers/pull/3940",
"diff_url": "https://github.com/huggingface/transformers/pull/3940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3940.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3939/comments | https://api.github.com/repos/huggingface/transformers/issues/3939/events | https://github.com/huggingface/transformers/pull/3939 | 606,038,176 | MDExOlB1bGxSZXF1ZXN0NDA4MzI3OTk1 | 3,939 | Continue training args and tqdm in notebooks | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Updates committed",
"Thanks!"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Added a little more description to the metadata of training args for new scripts.
`--overwrite_output_dir` can be used to continue training from checkpoint if `--output_dir` points to a directory already having checkpoints.
This can help reduce the confusion when migrating from the older scripts. For example (a hypothetical invocation; other flags elided):
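```bash
# Resume from the checkpoints already present in --output_dir
python run_language_modeling.py \
    --model_name_or_path=gpt2 \
    --do_train \
    --train_data_file=data/train.txt \
    --output_dir=checkpoints/run1 \
    --overwrite_output_dir
```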
tqdm in Colab prints every step on a new line,
so the import was updated to
`from tqdm.auto import tqdm` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3939",
"html_url": "https://github.com/huggingface/transformers/pull/3939",
"diff_url": "https://github.com/huggingface/transformers/pull/3939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3939.patch",
"merged_at": 1588299249000
} |
https://api.github.com/repos/huggingface/transformers/issues/3938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3938/comments | https://api.github.com/repos/huggingface/transformers/issues/3938/events | https://github.com/huggingface/transformers/issues/3938 | 606,013,606 | MDU6SXNzdWU2MDYwMTM2MDY= | 3,938 | [Benchmark] Parameter settings of XLM-R on NER tasks | {
"login": "lixin4ever",
"id": 18526640,
"node_id": "MDQ6VXNlcjE4NTI2NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/18526640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lixin4ever",
"html_url": "https://github.com/lixin4ever",
"followers_url": "https://api.github.com/users/lixin4ever/followers",
"following_url": "https://api.github.com/users/lixin4ever/following{/other_user}",
"gists_url": "https://api.github.com/users/lixin4ever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lixin4ever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lixin4ever/subscriptions",
"organizations_url": "https://api.github.com/users/lixin4ever/orgs",
"repos_url": "https://api.github.com/users/lixin4ever/repos",
"events_url": "https://api.github.com/users/lixin4ever/events{/privacy}",
"received_events_url": "https://api.github.com/users/lixin4ever/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
`XLM-RoBERTa` on the CoNLL-03 English NER dataset
## Set-up
`model_name_or_path`: xlm-roberta-base
`max_steps`: 3000 (roughly 6 epochs)
`warmup_steps`: 300
`save_steps`: 200
`learning_rate`: 5e-5
`per_gpu_train_batch_size`: 8
`n_gpu`: 3 (Titan X)
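For reference, the settings above correspond roughly to this invocation of the example NER script (data and output paths are placeholders):

```bash
python run_ner.py \
    --model_name_or_path xlm-roberta-base \
    --data_dir ./conll03-en \
    --labels ./conll03-en/labels.txt \
    --max_steps 3000 \
    --warmup_steps 300 \
    --save_steps 200 \
    --learning_rate 5e-5 \
    --per_gpu_train_batch_size 8 \
    --output_dir ./xlmr-conll03 \
    --do_train \
    --do_eval
```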
## Results
91.017 (averaged results of five runs)
## Issues
As reported in the original [paper](https://arxiv.org/pdf/1911.02116.pdf), XLM-R can achieve 92.25 on this English NER dataset. Since the authors do not share their fine-tuning hyperparameters, I had to adopt the ones above, and the resulting F1 score is about 91.017. Has anybody obtained results similar to those in the paper, and if so, with what settings?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3937/comments | https://api.github.com/repos/huggingface/transformers/issues/3937/events | https://github.com/huggingface/transformers/issues/3937 | 605,952,840 | MDU6SXNzdWU2MDU5NTI4NDA= | 3,937 | What should I do if I want to change the padding idx in pytorch bert(Huggingface)? | {
"login": "R-craft",
"id": 43983874,
"node_id": "MDQ6VXNlcjQzOTgzODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/43983874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R-craft",
"html_url": "https://github.com/R-craft",
"followers_url": "https://api.github.com/users/R-craft/followers",
"following_url": "https://api.github.com/users/R-craft/following{/other_user}",
"gists_url": "https://api.github.com/users/R-craft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R-craft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R-craft/subscriptions",
"organizations_url": "https://api.github.com/users/R-craft/orgs",
"repos_url": "https://api.github.com/users/R-craft/repos",
"events_url": "https://api.github.com/users/R-craft/events{/privacy}",
"received_events_url": "https://api.github.com/users/R-craft/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @R-craft, \r\n\r\nThat's actually a bug. Thanks for spotting it :-) Will open a PR."
] | 1,587 | 1,588 | 1,588 | NONE | null |
I find that, in the RobertaModel class, the padding idx is hard-coded to 1 (is that true?), which is different from my tokenizer and data. So what should I do to change it, or can I still use the model structure as-is?
```python
class RobertaEmbeddings(BertEmbeddings):
    """
    Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
    """

    def __init__(self, config):
        super().__init__(config)
        self.padding_idx = 1
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
        )

    def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
        if position_ids is None:
            if input_ids is not None:
                # Create the position ids from the input token ids. Any padded tokens remain padded.
                position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)
            else:
                position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)

        return super().forward(
            input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
        )
```
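One workaround I am considering (a sketch, not verified — in particular, reassigning `padding_idx` after construction does not re-zero the old padding row):

```python
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")
my_pad_id = 0  # hypothetical pad id used by my own tokenizer/data

# Point the embedding module and its position-id logic at the new pad id.
model.embeddings.padding_idx = my_pad_id
model.embeddings.word_embeddings.padding_idx = my_pad_id
``` | {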
"url": "https://api.github.com/repos/huggingface/transformers/issues/3937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3937/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3936/comments | https://api.github.com/repos/huggingface/transformers/issues/3936/events | https://github.com/huggingface/transformers/issues/3936 | 605,950,237 | MDU6SXNzdWU2MDU5NTAyMzc= | 3,936 | Pytorch 1.5 DataParallel | {
"login": "Rizhiy",
"id": 5617397,
"node_id": "MDQ6VXNlcjU2MTczOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5617397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rizhiy",
"html_url": "https://github.com/Rizhiy",
"followers_url": "https://api.github.com/users/Rizhiy/followers",
"following_url": "https://api.github.com/users/Rizhiy/following{/other_user}",
"gists_url": "https://api.github.com/users/Rizhiy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rizhiy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rizhiy/subscriptions",
"organizations_url": "https://api.github.com/users/Rizhiy/orgs",
"repos_url": "https://api.github.com/users/Rizhiy/repos",
"events_url": "https://api.github.com/users/Rizhiy/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rizhiy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm experiencing the same problem running `transformers/examples/run_language_modeling.py` with RoBERTa. Works well with PyTorch 1.4.0 tho.",
"Also happens with RoBERTa, but only in distributed mode (only tested with DataParallel for now)",
"@Rizhiy, do you mind putting a code example? I can't reproduce on `master` by doing an inference through the model. Thanks.",
"@LysandreJik I will try to put one together but it's a bit weird. I only observed it when I use a transformer model via lightning in DataParallel mode so far",
"Ah, got it\r\n\r\n```python\r\nimport transformers\r\nimport torch\r\n\r\nm = transformers.AutoModel.from_pretrained(\"roberta-base\")\r\nm.to(\"cuda:0\")\r\nk = torch.nn.DataParallel(m, device_ids=[0,1])\r\nk.forward(m.dummy_inputs['input_ids'])\r\n```\r\n\r\ngives\r\n\r\n```text\r\nStopIteration: Caught StopIteration in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"<snip>/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"<snip>/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/<snip>/lib/python3.8/site-packages/transformers/modeling_bert.py\", line 707, in forward\r\n attention_mask, input_shape, self.device\r\n File \"<snip>/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 113, in device\r\n return next(self.parameters()).device\r\nStopIteration\r\n```\r\n\r\nUsing torch 1.5.0 and something like yesterday's transformers master",
"The same for me and Bert\r\n```\r\ntransformers - 2.3.0\r\npytorch - 1.5.0\r\n```",
"See also https://github.com/PyTorchLightning/pytorch-lightning/issues/1649",
"Hello Everybody, do we have an update on this?. Today i managed to gather some more data to train a RoBERTa Model from scratch, i have been running experiementes in Pytorch 1.4, and i found this bug today that updated to Pytorch 1.5.",
"Same problem here, running BERT. \r\n\r\n```\r\ntorch==1.5.0\r\ntransformers==2.8.0\r\n```\r\nI'm running on GPUs, using `export CUDA_VISIBLE_DEVICES=5,6,7` before running (I have 8 1080TIs on this server).\r\n\r\n```run_language_modeling.py --output_dir=models --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=Vol45.sample --mlm --save_steps-2000 --line_by_line --per_gpu_train_batch_size=8```\r\n\r\nVol45.sample is a .txt with one doc per line\r\n\r\nEDIT: It seems to work if I downgrade pytorch to 1.4",
"Same here.\r\nThis might have to do with the first issue listed under Known Issues in the [pytorch changelog](https://github.com/pytorch/pytorch/releases) of version 1.5, i.e. the recent change in `torch.nn.parallel.replicate`",
"> Same problem here, running BERT.\r\n> \r\n> ```\r\n> torch==1.5.0\r\n> transformers==2.8.0\r\n> ```\r\n> \r\n> I'm running on GPUs, using `export CUDA_VISIBLE_DEVICES=5,6,7` before running (I have 8 1080TIs on this server).\r\n> \r\n> `run_language_modeling.py --output_dir=models --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=Vol45.sample --mlm --save_steps-2000 --line_by_line --per_gpu_train_batch_size=8`\r\n> \r\n> Vol45.sample is a .txt with one doc per line\r\n> \r\n> EDIT: It seems to work if I downgrade pytorch to 1.4\r\n\r\nThanks. It also works for me!\r\n\r\n```\r\ntorch==1.4.0\r\ntransformers==2.8.0\r\n```\r\n",
"The same issue: #4189 ",
"Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?",
"> Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?\r\n\r\nSure, please use the following code to reproduce the error: \r\n\r\n> import torch, transformers\r\n> model = transformers.AutoModel.from_pretrained(\"bert-base-multilingual-cased\")\r\n> model = torch.nn.DataParallel(model)\r\n> model = model.cuda()\r\n> input = torch.ones([16, 10], dtype=torch.long)\r\n> input = input.cuda()\r\n> model(input)",
"> Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?\r\n\r\nI was using the run_language_modeling.py script, which AFAIK uses torch.nn.DataParallel.",
"This seems to be due to https://github.com/pytorch/pytorch/pull/33907\r\n\r\nStill looking for the most correct fix on our side.",
"Worked for me when downgraded to\r\ntorch==1.4.0",
"Can you guys take a look at https://github.com/huggingface/transformers/issues/4657 and suggest what environment I should use. I've tried several with no luck.",
"Can you install the repo from source and try again? There have been some issues with PyTorch upstream that Julien addressed here: #4300. So you can try with the latest master branch.",
"Can confirm that installing from source (2.10) solves the issue.",
"Hello!\r\n\r\nJust for the record, this seems to be solved with the latest release of transformers (3.0.1 and pytorch 1.5.1, cuda 10.1). At least the provided MWE does not fail.\r\n\r\nBest,\r\n\r\n",
"> Hello!\r\n> \r\n> Just for the record, this seems to be solved with the latest release of transformers (3.0.1 and pytorch 1.5.1, cuda 10.1). At least the provided MWE does not fail.\r\n> \r\n> Best,\r\n\r\nAs per the previous comments: this was probably already fixed in 2.10.",
"Seems like there's an issue with DataParallel since 1.5, still no fix tho",
"> Seems like there's an issue with DataParallel since 1.5, still no fix tho\r\n\r\nYes,I use torch==1.80,encounter this just now.",
"> > Seems like there's an issue with DataParallel since 1.5, still no fix tho\r\n> \r\n> Yes,I use torch==1.80,encounter this just now.\r\n\r\nFor now you can only down-grade to 1.4.0 to escape the bug (It's been almost a year and the bug is still unfixed). Some people edit model codes (avoiding `next(self.parameters())`) for same purpose.",
"This issue should have been fixed in https://github.com/huggingface/transformers/pull/4300, at least available in v3.0.1.\r\n\r\nCould you open a new issue with the specific error and link it here so that we may see what's going wrong? Thank you."
] | 1,587 | 1,615 | 1,590 | NONE | null | # 🐛 Bug
## Information
Can't run forward in PyTorch 1.5.0, works fine in 1.4.0
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
Transformer + custom head + custom losses + differential learning rates, I don't think it matters.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Custom news classification
## To reproduce
Steps to reproduce the behavior:
1. Install PyTorch 1.5.0
2. Run forward on xlnet
3.
```
File "transformers/modeling_xlnet.py", line 761, in forward
dtype_float = next(self.parameters()).dtype
StopIteration
```
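A minimal snippet that reproduces it for me (a sketch adapted from my script; any multi-GPU machine should do):

```python
import torch
from transformers import XLNetModel

model = XLNetModel.from_pretrained("xlnet-base-cased")
model = torch.nn.DataParallel(model).cuda()

input_ids = torch.ones((8, 16), dtype=torch.long).cuda()
model(input_ids)  # raises StopIteration under PyTorch 1.5 with DataParallel
```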
## Expected behavior
Runs forward
## Environment info
- `transformers` version: 2.8.0
- Platform: Ubuntu 18.04
- Python version: Anaconda 3.7
- PyTorch version (GPU?): 1.5, Yes
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3936/reactions",
"total_count": 19,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 18
} | https://api.github.com/repos/huggingface/transformers/issues/3936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3935/comments | https://api.github.com/repos/huggingface/transformers/issues/3935/events | https://github.com/huggingface/transformers/issues/3935 | 605,938,690 | MDU6SXNzdWU2MDU5Mzg2OTA= | 3,935 | How to feeding hidden state vectors from one transformer directly into a layer of different transformer | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you ever find the answer? "
] | 1,587 | 1,591 | 1,587 | NONE | null | Hello,
I want to perform the following task:
**1.** _Feed in a text sequence to a GPT2Model (the GPT-2 model without output head);_
**2.** _Extract the hidden state vectors of the text sequence that are generated at each layer of the GPT2Model;_
**3.** _Feed the hidden state vectors from each layer of the GPT2Model as an input to the GPT2DoubleHeadsModel (assuming that the n_embd of the GPT2Model and the GPT2DoubleHeadsModel are equal), and calculate the **mc_loss** that results from inputting the hidden state vectors. My GPT2DoubleHeadsModel would only be consisted of (1 layer) + (the output heads)._
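For steps **1**–**2**, this is roughly what I am doing (a sketch; I am assuming the `output_hidden_states` flag is available in my version):

```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)

input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="pt")
outputs = model(input_ids)
hidden_states = outputs[-1]  # tuple: embedding output + one tensor per layer
```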
I know that the code below can be used to feed in hidden state vectors as inputs to each individual layer of a HuggingFace GPT-2 model:
```python
model.transformer.h[i](embed)
```
But the code above will only give me the output of an individual layer, so it wouldn't give me the mc_loss that I am looking for (since, to calculate the mc_loss from the hidden state vectors, I would first need to feed the hidden state vectors into the 1st layer of the GPT2DoubleHeadsModel, and then somehow feed the output of that layer into the mc_head (multiple choice output head) of the GPT2DoubleHeadsModel).
My questions are:
1. Is there any way to perform the task 3 with the Hugging Face GPT-2 models?
2. Is there any way that I can access the GPT2DoubleHeadsModel's mc_head (multiple choice head), similar to the code that I wrote above?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3935/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3934/comments | https://api.github.com/repos/huggingface/transformers/issues/3934/events | https://github.com/huggingface/transformers/pull/3934 | 605,930,385 | MDExOlB1bGxSZXF1ZXN0NDA4MjQ1MDgz | 3,934 | [qol] example scripts: parse args from .args file or JSON | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for adding this! Could you also integrate it into the `run_ner` example 🤔",
"Yes @stefan-it I'll add it to all (Trainer-updated) scripts.\r\n\r\nDo you have a preference between the JSON approach and the args one? Or should we do both?",
"Perfect!!! it works like a charm. I think both would be nice.",
"Both would be nice, thanks!",
"Very nice!",
"@julien-c Would it be helpful when I open a PR that adds this to the `run_ner` example, or do you already have this feature on a branch 🤔",
"I do not have a branch that does that currently so feel free to do it (+ also run_language_modeling and run_multiple_choice if you're so inclined :)",
"@julien-c I am a bit late to the party, but I went over it quickly nevertheless. As my comments show, if the only function of `trim_suffix` is to remove the extension or replace it, it might be more readable to simply use the built-in `Path.with_extension` functionality.\r\n\r\nIf you agree I can do a PR for this.\r\n\r\nApart from that: great stuff. especially with a growing number of parameters and options, a config file is really welcome!",
"@BramVanroy Yes, I didn't know this feature, PR is welcome."
] | 1,587 | 1,588 | 1,588 | MEMBER | null | You can either:
- pass the path to a json file as the unique argument
- automatically load args from a .args file that's a sibling of the entrypoint script.
```json
{
"model_name_or_path": "bert-base-cased",
"task_name": "mnli",
"data_dir": "./data/glue/MNLI",
"output_dir": "./models/mnli"
}
```
```
--model_name_or_path distilroberta-base
--task_name mnli
```
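Usage then looks like this (hypothetical script and file names):

```bash
python run_glue.py ./mnli.json   # JSON file passed as the unique argument
python run_glue.py               # or picks up run_glue.args next to the script
```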
I use the .args method myself, but I think both are ok.
Tagging @jplu and @stefan-it who have expressed interest in this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3934/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3934",
"html_url": "https://github.com/huggingface/transformers/pull/3934",
"diff_url": "https://github.com/huggingface/transformers/pull/3934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3934.patch",
"merged_at": 1588300814000
} |
https://api.github.com/repos/huggingface/transformers/issues/3933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3933/comments | https://api.github.com/repos/huggingface/transformers/issues/3933/events | https://github.com/huggingface/transformers/pull/3933 | 605,908,862 | MDExOlB1bGxSZXF1ZXN0NDA4MjI3NzIz | 3,933 | Add ALBERT to the Tensorflow to Pytorch model conversion cli | {
"login": "fgaim",
"id": 4906991,
"node_id": "MDQ6VXNlcjQ5MDY5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4906991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fgaim",
"html_url": "https://github.com/fgaim",
"followers_url": "https://api.github.com/users/fgaim/followers",
"following_url": "https://api.github.com/users/fgaim/following{/other_user}",
"gists_url": "https://api.github.com/users/fgaim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fgaim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fgaim/subscriptions",
"organizations_url": "https://api.github.com/users/fgaim/orgs",
"repos_url": "https://api.github.com/users/fgaim/repos",
"events_url": "https://api.github.com/users/fgaim/events{/privacy}",
"received_events_url": "https://api.github.com/users/fgaim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM!"
] | 1,587 | 1,589 | 1,589 | CONTRIBUTOR | null | This PR adds ALBERT to the `convert` command of `transformers-cli` to allow the conversion of pre-trained models from TensorFlow to PyTorch. The documentation is updated to show how to run the conversion; an invocation looks roughly like this (checkpoint paths are placeholders):
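```bash
transformers-cli convert --model_type albert \
  --tf_checkpoint ./albert_base/model.ckpt-best \
  --config ./albert_base/albert_config.json \
  --pytorch_dump_output ./albert_base/pytorch_model.bin
``` | {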
"url": "https://api.github.com/repos/huggingface/transformers/issues/3933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3933",
"html_url": "https://github.com/huggingface/transformers/pull/3933",
"diff_url": "https://github.com/huggingface/transformers/pull/3933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3933.patch",
"merged_at": 1589217001000
} |
https://api.github.com/repos/huggingface/transformers/issues/3932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3932/comments | https://api.github.com/repos/huggingface/transformers/issues/3932/events | https://github.com/huggingface/transformers/pull/3932 | 605,886,906 | MDExOlB1bGxSZXF1ZXN0NDA4MjA5NzY2 | 3,932 | [ci] Load pretrained models into the default (long-lived) cache | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry I missed this. This is cool!"
] | 1,587 | 1,588 | 1,588 | MEMBER | null | There's an inconsistency right now where:
- we load some models into CACHE_DIR
- and some models in the default cache
- and often, in both for the same models
When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
I'd rather always use the default cache
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3932",
"html_url": "https://github.com/huggingface/transformers/pull/3932",
"diff_url": "https://github.com/huggingface/transformers/pull/3932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3932.patch",
"merged_at": 1588300216000
} |
https://api.github.com/repos/huggingface/transformers/issues/3931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3931/comments | https://api.github.com/repos/huggingface/transformers/issues/3931/events | https://github.com/huggingface/transformers/issues/3931 | 605,795,572 | MDU6SXNzdWU2MDU3OTU1NzI= | 3,931 | Loading a TF pretrained model into BertForSequenceClassification module | {
"login": "dennymarcels",
"id": 12802916,
"node_id": "MDQ6VXNlcjEyODAyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/12802916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dennymarcels",
"html_url": "https://github.com/dennymarcels",
"followers_url": "https://api.github.com/users/dennymarcels/followers",
"following_url": "https://api.github.com/users/dennymarcels/following{/other_user}",
"gists_url": "https://api.github.com/users/dennymarcels/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dennymarcels/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dennymarcels/subscriptions",
"organizations_url": "https://api.github.com/users/dennymarcels/orgs",
"repos_url": "https://api.github.com/users/dennymarcels/repos",
"events_url": "https://api.github.com/users/dennymarcels/events{/privacy}",
"received_events_url": "https://api.github.com/users/dennymarcels/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You can't directly load an official TensorFlow checkpoint in the PyTorch model, you first need to convert it. You can use this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) to convert it to a PyTorch checkpoint.\r\n\r\nIf you want to use our TensorFlow interface in order to do this, you would still need to use this script to convert it to our interface, and then use the `TFBertForSequenceClassification` while specifying the `from_pt` option (as the result would be a PyTorch checkpoint):\r\n\r\n```py\r\nmodel = TFBertForSequenceClassification.from_pretrained(\"directory\", from_pt=True)\r\n```",
" I had understood that parameter `from_tf` would take care of this, did I\nget it wrong? Your suggestion worked though, thank you!\n\nEm qui., 23 de abr. de 2020 às 16:55, Lysandre Debut <\[email protected]> escreveu:\n\n> Hi! You can't directly load an official TensorFlow checkpoint in the\n> PyTorch model, you first need to convert it. You can use this script\n> <https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py>\n> to convert it to a PyTorch checkpoint.\n>\n> If you want to use our TensorFlow interface in order to do this, you would\n> still need to use this script to convert it to our interface, and then use\n> the TFBertForSequenceClassification while specifying the from_pt option\n> (as the result would be a PyTorch checkpoint):\n>\n> model = TFBertForSequenceClassification.from_pretrained(\"directory\", from_pt=True)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3931#issuecomment-618629373>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ADBVWZB5OSLJ2X6WEC63COTROCMKPANCNFSM4MPLTASQ>\n> .\n>\n",
"`from_tf` specified we're loading a TensorFlow checkpoint that is already in the HuggingFace format (TF2), which is usually different than the original implementation (TF1).\r\n\r\nThe original implementation's checkpoints should first be converted in order to be usable.",
"> `from_tf` specified we're loading a TensorFlow checkpoint that is already in the HuggingFace format (TF2), which is usually different than the original implementation (TF1).\r\n> \r\n> The original implementation's checkpoints should first be converted in order to be usable.\r\n\r\nhow can I convert ckpt(TF1) to h5(TF2)?\r\nI only found convert_bert_original_tf_checkpoint_to_pytorch.py",
"You could use the script you mention to convert the model to PyTorch; then this PyTorch checkpoint can be seamlessly loaded in a TensorFlow implementation, see comment above: https://github.com/huggingface/transformers/issues/3931#issuecomment-618629373\r\n\r\nAfter that you can just do \r\n\r\n```py\r\nmodel.save_pretrained(\"here\")\r\n```\r\n\r\nand you should have a `tf_model.h5` under `here`."
] | 1,587 | 1,619 | 1,589 | CONTRIBUTOR | null | Hi, there might be something I am doing wrong, but I cannot figure out what that is, so any help would be welcome.
After downloading a TF checkpoint (containing model.index, model.data, model.meta, config.json and vocab.txt files), I used it to perform pretraining on some more text that was more relevant to the task I had ahead. Pretraining was performed using the API from BERT's official GitHub. This generated new model.index, model.data and model.meta files. I am now trying to load them into the BertForSequenceClassification module, using the `from_pretrained` method. I figured I should include a config instance as well, so I used the config.json file that was attached to the very first TF checkpoint I mentioned. But when passing the index file to `from_pretrained`, I get the error:
`AttributeError: 'BertForSequenceClassification' object has no attribute 'bias'`
Any help would be much appreciated.
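For reference, the end-to-end flow described in the comments above, sketched with placeholder paths (the flag names follow the conversion script referenced in the comments; double-check them against your installed version):
```
python convert_bert_original_tf_checkpoint_to_pytorch.py \
    --tf_checkpoint_path /path/to/model.ckpt \
    --bert_config_file /path/to/config.json \
    --pytorch_dump_path /path/to/converted/pytorch_model.bin
```
```python
from transformers import BertForSequenceClassification

# the directory should contain the converted pytorch_model.bin plus config.json
model = BertForSequenceClassification.from_pretrained("/path/to/converted")
```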
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3931/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3930/comments | https://api.github.com/repos/huggingface/transformers/issues/3930/events | https://github.com/huggingface/transformers/issues/3930 | 605,739,018 | MDU6SXNzdWU2MDU3MzkwMTg= | 3,930 | How can I finetune an encoder-decoder combination? | {
"login": "Palipoor",
"id": 16380397,
"node_id": "MDQ6VXNlcjE2MzgwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Palipoor",
"html_url": "https://github.com/Palipoor",
"followers_url": "https://api.github.com/users/Palipoor/followers",
"following_url": "https://api.github.com/users/Palipoor/following{/other_user}",
"gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions",
"organizations_url": "https://api.github.com/users/Palipoor/orgs",
"repos_url": "https://api.github.com/users/Palipoor/repos",
"events_url": "https://api.github.com/users/Palipoor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Palipoor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Well I found the summarization example, I'm going to read it and I'll close this issue if it helps. Would still be happy if you can help.",
"@Palipoor Can you please share the summarization example. I am looking for one.",
"> @Palipoor Can you please share the summarization example. I am looking for one.\r\n\r\nHere it is:\r\nhttps://github.com/huggingface/transformers/tree/master/examples/summarization\r\n\r\nAlthough it didn't help me and I'm trying to finetune a T5.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,593 | 1,593 | NONE | null | Hi, I want to finetune an encoder-decoder model on a parallel dataset (something like translation) and I'm not sure what I should do. I read [this](https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8) blog post but it didn't help. It doesn't really matter to me which encoder or decoder I choose, I just need the power of a pre-trained transformer.
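A minimal sketch of one approach using recent versions' `EncoderDecoderModel` (treat the class availability and the exact kwarg names as assumptions to verify against your installed version):
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# tie a pretrained BERT encoder to a pretrained BERT decoder
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

input_ids = tokenizer.encode("source sentence", return_tensors="pt")
target_ids = tokenizer.encode("target sentence", return_tensors="pt")
outputs = model(input_ids=input_ids, decoder_input_ids=target_ids, labels=target_ids)
loss = outputs[0]  # optimize this in a standard training loop
```
 | {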
"url": "https://api.github.com/repos/huggingface/transformers/issues/3930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3929/comments | https://api.github.com/repos/huggingface/transformers/issues/3929/events | https://github.com/huggingface/transformers/pull/3929 | 605,733,236 | MDExOlB1bGxSZXF1ZXN0NDA4MDgxMTE4 | 3,929 | Add support for LayerNorm before residual connection in transformer | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Because I just stumbled upon it: a counterpoint in [Understanding the Difficulty of Training Transformers](https://arxiv.org/abs/2004.08249), where the author founds that while this tends to stabilize the convergence, it also results in worse performances for the runs that do converge.\r\n\r\n(Still, nice PR, I was toying with the idea of doing it too :-)",
"> Because I just stumbled upon it: a counterpoint in [Understanding the Difficulty of Training Transformers](https://arxiv.org/abs/2004.08249), where the author founds that while this tends to stabilize the convergence, it also results in worse performances for the runs that do converge.\r\n> \r\n> (Still, nice PR, I was toying with the idea of doing it too :-)\r\n\r\nHi, author here (On Layer Normalization in the Transformer Architecture),\r\n\r\nFor Pre-LN, a slightly larger dropout rate is usually needed for median scale tasks, such as translation. This is not required for BERT. Pre-LN has larger expressiveness power than Post-LN and when the data scale is small, it is more likely to overfit. \r\n\r\nWhen you use a larger dropout in translation tasks, Pre-LN is no worse than Post-LN. [There are three 'dropout' places in Transformer, For translation tasks, we apply 0.1 to all the three places which make Pre-LN faster to converge and achieve better/competitive performance]. \r\n\r\nYou can reproduce the experiments in our paper.\r\n\r\n",
"Oh, thanks for chiming in, I must read the paper again then!",
"Bumping this discussion. Any thoughts @LysandreJik @julien-c ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Not stale",
"> Not stale\r\n\r\nI think the pre-ln version should be added to the branch. As far as I know, more and more works are developed based on this architecture. BTW, although our paper is rejected by ICLR, it is accepted by ICML:)",
"Bumping this, any thoughts @thomwolf ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Not stale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,587 | 1,619 | 1,619 | CONTRIBUTOR | null | Recent work has shown that placing the layer norm before the residual connection in a transformer leads to better gradient propagation and more stable training. For example, [Learning Deep Transformer Models for Machine Translation](https://arxiv.org/abs/1906.01787) shows that when stacking more layers, the Pre-LN transformer performs better than the Post-LN transformer. A more detailed discussion is found in [On Layer Normalization in the Transformer Architecture](https://openreview.net/pdf?id=B1x8anVFPr), which aids understanding despite being rejected from ICLR 2020 for lack of novelty.
_In my own experiments pretraining from scratch with TFAlbert, I have found pre-layer normalization to have greater training stability and improved performance than the default post-layer normalization._
<img width="331" alt="Screen Shot 2020-04-23 at 11 37 36 AM" src="https://user-images.githubusercontent.com/4564897/80131074-e8f0d380-8556-11ea-9459-0d2c255c4428.png"> <img width="280" alt="Screen Shot 2020-04-23 at 11 41 05 AM" src="https://user-images.githubusercontent.com/4564897/80131507-95cb5080-8557-11ea-8fcf-7757c15ab604.png">
**This is not a final PR, just opening the discussion. Some caveats:**
- If approved, I can extend this to all transformer models, not just ALBERT.
- Loading an existing pretrained model with `config.pre_layer_norm = True` leads to poor performance. A separate pretrained model would need to be made available with this configuration option. I would be happy to do so for ALBERT, but this would be outside my pay grade for all the models. Since the current focus of the repo is on finetuning rather than pretraining, I understand if this isn't a complexity tradeoff you want to make.
What do you think about adding this configuration option?
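For skimmers, a minimal sketch of the two orderings (my illustration; `pre_layer_norm` is the hypothetical config flag proposed here, not an existing option):
```python
def transformer_sublayer(x, sublayer, layer_norm, pre_layer_norm=False):
    if pre_layer_norm:
        # Pre-LN: normalize, transform, then add the residual
        return x + sublayer(layer_norm(x))
    # Post-LN (current default): add the residual, then normalize
    return layer_norm(x + sublayer(x))
```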
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3929/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3929",
"html_url": "https://github.com/huggingface/transformers/pull/3929",
"diff_url": "https://github.com/huggingface/transformers/pull/3929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3929.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3928/comments | https://api.github.com/repos/huggingface/transformers/issues/3928/events | https://github.com/huggingface/transformers/pull/3928 | 605,705,066 | MDExOlB1bGxSZXF1ZXN0NDA4MDU3Nzkx | 3,928 | Fix TFAlbertForSequenceClassification classifier dropout probability.… | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | … It was set to config.hidden_dropout_prob, but should be config.classifier_dropout_prob. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3928",
"html_url": "https://github.com/huggingface/transformers/pull/3928",
"diff_url": "https://github.com/huggingface/transformers/pull/3928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3928.patch",
"merged_at": 1587662296000
} |
https://api.github.com/repos/huggingface/transformers/issues/3927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3927/comments | https://api.github.com/repos/huggingface/transformers/issues/3927/events | https://github.com/huggingface/transformers/issues/3927 | 605,678,028 | MDU6SXNzdWU2MDU2NzgwMjg= | 3,927 | ❓ DistilBert test perplexity based on WikiText-2: ppl is too low? | {
"login": "bing0037",
"id": 11786011,
"node_id": "MDQ6VXNlcjExNzg2MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11786011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bing0037",
"html_url": "https://github.com/bing0037",
"followers_url": "https://api.github.com/users/bing0037/followers",
"following_url": "https://api.github.com/users/bing0037/following{/other_user}",
"gists_url": "https://api.github.com/users/bing0037/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bing0037/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bing0037/subscriptions",
"organizations_url": "https://api.github.com/users/bing0037/orgs",
"repos_url": "https://api.github.com/users/bing0037/repos",
"events_url": "https://api.github.com/users/bing0037/events{/privacy}",
"received_events_url": "https://api.github.com/users/bing0037/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@VictorSanh can correct me if I'm wrong, but in general, the perplexity for masked language models like BERT is much lower than the perplexity for causal language models. \r\n\r\nThat's because the \"MLM perplexity\" isn't the actual perplexity (in the sense that it is the probability of a sentence, computed from the predicted next word), but rather the \"masked perplexity\" which is computed differently.\r\n\r\nIt isn't surprising to me that you obtain ~6 perplexity on WikitText-2 with DistilBERT.",
"Thank you!"
] | 1,587 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
## Details
I tried to fine-tune with the pre-trained model. I got a perplexity of 6.12 with the DistilBert model, which is much lower than the GPT-2 model (ppl: 18.34, ref: https://paperswithcode.com/sota/language-modelling-on-wikitext-2).
Does the DistilBert model work much better than GPT-2? Or is it just because the loss functions are different?
Here are the commands:
```
# 1) code:
git clone https://github.com/huggingface/transformers.git
# 2) Download dataset:
cd transformers/examples/
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip
# 3) Benchmark:
## distilbert:
export TRAIN_FILE=./wikitext-2-raw/wiki.train.raw
export TEST_FILE=./wikitext-2-raw/wiki.test.raw
CUDA_VISIBLE_DEVICES=6 python run_language_modeling.py \
--output_dir=output_distilbert \
--model_type=distilbert \
--model_name_or_path=distilbert-base-uncased \
--do_train \
--per_gpu_train_batch_size 15 \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
```
Result:
```
04/22/2020 13:25:33 - INFO - __main__ - ***** Running evaluation *****
04/22/2020 13:25:33 - INFO - __main__ - Num examples = 535
04/22/2020 13:25:33 - INFO - __main__ - Batch size = 4
Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 134/134 [00:05<00:00, 24.02it/s]
04/22/2020 13:25:38 - INFO - __main__ - ***** Eval results *****
04/22/2020 13:25:38 - INFO - __main__ - perplexity = tensor(6.1200)
```
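As the comment above explains, the `--mlm` number is exp(mean cross-entropy over *masked* positions only), so it is not on the same scale as a causal LM's perplexity. Roughly what the script computes at the end of evaluation (a sketch with assumed variable names, not the exact source):
```python
import torch

eval_loss = total_eval_loss / nb_eval_steps  # mean masked-token cross-entropy
perplexity = torch.exp(torch.tensor(eval_loss))  # e.g. exp(1.81) ≈ 6.12
```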
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3926/comments | https://api.github.com/repos/huggingface/transformers/issues/3926/events | https://github.com/huggingface/transformers/pull/3926 | 605,620,985 | MDExOlB1bGxSZXF1ZXN0NDA3OTg5MjQ2 | 3,926 | Remove 50k limits bug | {
"login": "peterandluc",
"id": 11997351,
"node_id": "MDQ6VXNlcjExOTk3MzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/11997351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peterandluc",
"html_url": "https://github.com/peterandluc",
"followers_url": "https://api.github.com/users/peterandluc/followers",
"following_url": "https://api.github.com/users/peterandluc/following{/other_user}",
"gists_url": "https://api.github.com/users/peterandluc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peterandluc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peterandluc/subscriptions",
"organizations_url": "https://api.github.com/users/peterandluc/orgs",
"repos_url": "https://api.github.com/users/peterandluc/repos",
"events_url": "https://api.github.com/users/peterandluc/events{/privacy}",
"received_events_url": "https://api.github.com/users/peterandluc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | This is a bug that needs to be removed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3926/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3926",
"html_url": "https://github.com/huggingface/transformers/pull/3926",
"diff_url": "https://github.com/huggingface/transformers/pull/3926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3926.patch",
"merged_at": 1587654910000
} |
https://api.github.com/repos/huggingface/transformers/issues/3925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3925/comments | https://api.github.com/repos/huggingface/transformers/issues/3925/events | https://github.com/huggingface/transformers/issues/3925 | 605,610,764 | MDU6SXNzdWU2MDU2MTA3NjQ= | 3,925 | Using the default trainer args | {
"login": "williamFalcon",
"id": 3640001,
"node_id": "MDQ6VXNlcjM2NDAwMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3640001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamFalcon",
"html_url": "https://github.com/williamFalcon",
"followers_url": "https://api.github.com/users/williamFalcon/followers",
"following_url": "https://api.github.com/users/williamFalcon/following{/other_user}",
"gists_url": "https://api.github.com/users/williamFalcon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamFalcon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamFalcon/subscriptions",
"organizations_url": "https://api.github.com/users/williamFalcon/orgs",
"repos_url": "https://api.github.com/users/williamFalcon/repos",
"events_url": "https://api.github.com/users/williamFalcon/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamFalcon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This sounds good. We originally were trying to reimplement exactly the other examples. But now we can have the lightning examples support a wider range of options.",
"@srush I can take care of this when I go to work on #4494 if that's alright with you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,587 | 1,595 | 1,595 | CONTRIBUTOR | null | Hey guys!
Just looked through the transformer_base... why not also allow any of the trainer args to be used?
I imagine the best setup for you guys is:
- add default trainer args (since people can look into the lightning docs for details on this)
- add hf specific args (nlp related or whatever you guys need)
- add model specific args
But instead it looks like you only use a subset of the args
https://github.com/huggingface/transformers/blob/dd9d483d03962fea127f59661f3ae6156e7a91d2/examples/transformer_base.py#L275
Something like this should work:
```python
from argparse import ArgumentParser
from pytorch_lightning import Trainer

parser = ArgumentParser()
# enable all trainer args
parser = Trainer.add_argparse_args(parser)
# add the HF args (illustrative flag name)
parser.add_argument("--some_hf_specific_thing", type=str, default=None)
# add the model args (GoodGAN stands in for any LightningModule, as in the Lightning docs)
parser = GoodGAN.add_model_specific_args(parser)
# cook them all up :)
args = parser.parse_args()
```
Check here for mode details:
https://pytorch-lightning.readthedocs.io/en/stable/hyperparameters.html#multiple-lightning-modules
@srush @nateraw | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3925/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3924/comments | https://api.github.com/repos/huggingface/transformers/issues/3924/events | https://github.com/huggingface/transformers/pull/3924 | 605,574,636 | MDExOlB1bGxSZXF1ZXN0NDA3OTUyMDA4 | 3,924 | Change uses of pow(x, 3) to pow(x, 3.0) to resolve #3873 | {
"login": "mneilly-et",
"id": 55827703,
"node_id": "MDQ6VXNlcjU1ODI3NzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/55827703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mneilly-et",
"html_url": "https://github.com/mneilly-et",
"followers_url": "https://api.github.com/users/mneilly-et/followers",
"following_url": "https://api.github.com/users/mneilly-et/following{/other_user}",
"gists_url": "https://api.github.com/users/mneilly-et/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mneilly-et/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mneilly-et/subscriptions",
"organizations_url": "https://api.github.com/users/mneilly-et/orgs",
"repos_url": "https://api.github.com/users/mneilly-et/repos",
"events_url": "https://api.github.com/users/mneilly-et/events{/privacy}",
"received_events_url": "https://api.github.com/users/mneilly-et/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | This minor pull request fixes #3873 by changing the type of the exponent parameter for the _torch.pow()_ call in _gelu_new()_ from integer to float.
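For context, here is the function in question with the one-character change applied (a sketch; the formula matches the library's `gelu_new` at the time, but verify against the source):
```python
import math

import torch

def gelu_new(x):
    # 3.0 rather than 3, so torch.pow receives a float exponent (the fix in this PR)
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
```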
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3924",
"html_url": "https://github.com/huggingface/transformers/pull/3924",
"diff_url": "https://github.com/huggingface/transformers/pull/3924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3924.patch",
"merged_at": 1587666331000
} |
https://api.github.com/repos/huggingface/transformers/issues/3923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3923/comments | https://api.github.com/repos/huggingface/transformers/issues/3923/events | https://github.com/huggingface/transformers/pull/3923 | 605,548,630 | MDExOlB1bGxSZXF1ZXN0NDA3OTMwODQz | 3,923 | Feat/add model card | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks! [Model page](https://huggingface.co/lvwerra/gpt2-imdb-ctrl)"
] | 1,587 | 1,587 | 1,587 | MEMBER | null | Add model card for sentiment control model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3923/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3923",
"html_url": "https://github.com/huggingface/transformers/pull/3923",
"diff_url": "https://github.com/huggingface/transformers/pull/3923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3923.patch",
"merged_at": 1587738269000
} |
https://api.github.com/repos/huggingface/transformers/issues/3922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3922/comments | https://api.github.com/repos/huggingface/transformers/issues/3922/events | https://github.com/huggingface/transformers/issues/3922 | 605,508,819 | MDU6SXNzdWU2MDU1MDg4MTk= | 3,922 | LineByLineTextDataset limits the total number of examples to 50000 documents | {
"login": "questpavan",
"id": 63842917,
"node_id": "MDQ6VXNlcjYzODQyOTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/63842917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questpavan",
"html_url": "https://github.com/questpavan",
"followers_url": "https://api.github.com/users/questpavan/followers",
"following_url": "https://api.github.com/users/questpavan/following{/other_user}",
"gists_url": "https://api.github.com/users/questpavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/questpavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/questpavan/subscriptions",
"organizations_url": "https://api.github.com/users/questpavan/orgs",
"repos_url": "https://api.github.com/users/questpavan/repos",
"events_url": "https://api.github.com/users/questpavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/questpavan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's a bug. Do you want to submit a PR to remove?",
"Yes that would be great",
"Closed by #3926"
] | 1,587 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
## Details
Hi,
While running run_language_modeling.py with the --line_by_line argument, I found that it only takes the first 50,000 documents.
https://github.com/huggingface/transformers/blob/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d/src/transformers/data/datasets/language_modeling.py#L93
Please confirm: is this a bug or intended behavior?
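For reference, the construct at the permalink looks roughly like this (a hedged reconstruction, not an exact quote of the source file):
```python
with open(file_path, encoding="utf-8") as f:
    # the [:50000] slice silently truncates larger corpora; PR #3926 removes it
    lines = [line for line in f.read().splitlines()[:50000] if len(line) > 0 and not line.isspace()]
```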
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3922/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3921/comments | https://api.github.com/repos/huggingface/transformers/issues/3921/events | https://github.com/huggingface/transformers/issues/3921 | 605,482,033 | MDU6SXNzdWU2MDU0ODIwMzM= | 3,921 | New run_language_modeling.py does not save vocab.txt and tokenizer_config.json | {
"login": "questpavan",
"id": 63842917,
"node_id": "MDQ6VXNlcjYzODQyOTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/63842917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questpavan",
"html_url": "https://github.com/questpavan",
"followers_url": "https://api.github.com/users/questpavan/followers",
"following_url": "https://api.github.com/users/questpavan/following{/other_user}",
"gists_url": "https://api.github.com/users/questpavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/questpavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/questpavan/subscriptions",
"organizations_url": "https://api.github.com/users/questpavan/orgs",
"repos_url": "https://api.github.com/users/questpavan/repos",
"events_url": "https://api.github.com/users/questpavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/questpavan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, you would now need to do it manually by calling `tokenizer.save_pretrained(training_args.output_dir)`\r\n\r\nI'll keep this issue open to see if many people ask for it (we can re-add if it's the case). cc @LysandreJik ",
"I think if an extra step is needed, it should say it in the guideline ;) \r\nSo perhaps auto save would be better.",
"Ok, I will re-add it by default"
] | 1,587 | 1,594 | 1,587 | NONE | null | # 🐛 Bug
Hi,
While running the new run_language_modeling.py, at the end it only saves the PyTorch model, config.json, and the training arguments.
Unlike before, when it also used to store vocab.txt and tokenizer_config.json.
I am not using any custom tokenizer, only the tokenizer provided by the model.
## Command Details I used
```
python run_language_modeling.py \
    --output_dir=bert-base-uncased-4 \
    --overwrite_output_dir \
    --model_type=bert \
    --model_name_or_path="bert-base-uncased" \
    --do_train \
    --train_data_file="./test.txt" \
    --per_gpu_train_batch_size 3 \
    --mlm \
    --num_train_epochs 1
```
Model I am using (Bert, XLNet ...):
Bert-base-uncased
Language I am using the model on (English, Chinese ...):
English
## Expected behavior
It should also save vocab.txt and tokenizer_config.json
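As a workaround until auto-saving is restored (per the first comment; assuming the `tokenizer` and `training_args` objects from the example script):
```python
# persist vocab.txt, tokenizer_config.json, etc. next to the model checkpoint
tokenizer.save_pretrained(training_args.output_dir)
```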
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3921/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3921/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3920/comments | https://api.github.com/repos/huggingface/transformers/issues/3920/events | https://github.com/huggingface/transformers/issues/3920 | 605,476,186 | MDU6SXNzdWU2MDU0NzYxODY= | 3,920 | run_language_modeling.py line 251: checking if it is a directory | {
"login": "jihwangk",
"id": 45372212,
"node_id": "MDQ6VXNlcjQ1MzcyMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/45372212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jihwangk",
"html_url": "https://github.com/jihwangk",
"followers_url": "https://api.github.com/users/jihwangk/followers",
"following_url": "https://api.github.com/users/jihwangk/following{/other_user}",
"gists_url": "https://api.github.com/users/jihwangk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jihwangk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jihwangk/subscriptions",
"organizations_url": "https://api.github.com/users/jihwangk/orgs",
"repos_url": "https://api.github.com/users/jihwangk/repos",
"events_url": "https://api.github.com/users/jihwangk/events{/privacy}",
"received_events_url": "https://api.github.com/users/jihwangk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also faced the similler issue while the arguments \r\n--save_steps 200\r\n--save_total_limit 3 \r\nRemoving this arguments worked. But need to know what could be the issue",
"Did removing them worked through the end completely? Because there is a default value for save_steps, and I just set it to 1 to see if the bug still appears after some manual fixes... Looking at the function names like rotate_checkpoints(), I feel like save_total_limit might be the key, since its default is set to None, but again not sure....",
"Yes Removing them worked till the end for me. Total Optimization steps for me was around 1600. so it did save 3 times the checkpoints and correctly loaded them as well. ",
"Ah okay removing the --save_total_limit make it work for me, but then it would start saving gigabytes of models without top limit. I think the directory checking might come from removing the should_continue argument, but will require us to constantly update where we load the latest model from...?",
"I can confirm this is a bug. I'll push a fix on master shortly."
] | 1,587 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
## Details
Hi, I'm curious why you changed the code to check whether model_args.model_name_or_path is a directory. I am trying to train OpenAI GPT in Google Colab, using the following arguments:
```
!python /content/transformers/examples/run_language_modeling.py \
    --output_dir="/content/gdrive/My Drive/output" \
    --overwrite_output_dir \
    --model_type=openai-gpt \
    --model_name_or_path=openai-gpt \
    --do_train \
    --train_data_file="/content/gdrive/My Drive/train.txt" \
    --do_eval \
    --eval_data_file="/content/gdrive/My Drive/dev.txt" \
    --num_train_epochs 5 \
    --learning_rate 1.5e-4 \
    --save_steps 1 \
    --save_total_limit 5
```
but this line causes the following error:
```
Traceback (most recent call last):
  File "/content/transformers/examples/run_language_modeling.py", line 283, in <module>
    main()
  File "/content/transformers/examples/run_language_modeling.py", line 257, in main
    trainer.train(model_path=model_path)
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 363, in train
    self._rotate_checkpoints()
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 458, in _rotate_checkpoints
    checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 443, in _sorted_checkpoints
    regex_match = re.match(".*{}-([0-9]+)".format(checkpoint_prefix), path)
  File "/usr/lib/python3.6/re.py", line 172, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or bytes-like object
```
It seems that openai-gpt is not a directory, thus making the model_path variable equal to None. Is this due to my implementation, or can this part be fixed?
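For what it's worth, the traceback is also consistent with `re.match` receiving a `pathlib.Path` (which `re` rejects on Python 3.6). A sketch of the kind of guard that avoids the `TypeError` (`output_dir` is a placeholder, and this is not necessarily the exact fix pushed to master):
```python
import re
from pathlib import Path

output_dir = "output"  # placeholder for the training output directory
checkpoint_prefix = "checkpoint"
# str() matters: re.match on a PosixPath raises "expected string or bytes-like object" on Python 3.6
glob_checkpoints = [str(x) for x in Path(output_dir).glob("{}-*".format(checkpoint_prefix))]
for path in glob_checkpoints:
    regex_match = re.match(".*{}-([0-9]+)".format(checkpoint_prefix), path)
```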
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3919/comments | https://api.github.com/repos/huggingface/transformers/issues/3919/events | https://github.com/huggingface/transformers/issues/3919 | 605,470,867 | MDU6SXNzdWU2MDU0NzA4Njc= | 3,919 | xlm-roberta (large/base) : run_language_modeling.py cannot start training | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have you tried launching a debugger to see exactly what takes a long time?\r\n\r\nI would use vscode remote debugging.",
"I would guess that your tokenization process takes too long. If you're training a new LM from scratch, I would recommend using the fast [Tokenizers library](https://github.com/huggingface/tokenizers) written in Rust. You can initialize a new `ByteLevelBPETokenizer` instance in your `LineByLineTextDataset` class and `encode_batch` your text with it.",
"Thanks you guys, I finally managed to finetune XLM-Roberta-Large, but have to wait for 11 hours, before the training start! \r\n\r\nSince I did not want training from scratch, I took a tip from @mfilipav to convert pretrained tokenizer to fast-tokenizer (and since it's SentencePiece, I have to use` sentencepiece_extractor.py` ), and modify `use_fast = True` in `run_language_modeling.py` ... However, since it's still 11 hours of waiting, maybe this doesn't help.\r\n\r\n**UPDATED** : By adding `--line_by_line` option, the training start very quickly, close the issue!",
"@ratthachat and how fast it became after enabling \"--line_by_line true\" ? I am waiting for almost 1 hour. My training set size is 11 gb and here goes my parameters\r\n`export TRAIN_FILE=/hdd/sifat/NLP/intent_classification/bert_train.txt\r\nexport TEST_FILE=/hdd/sifat/NLP/intent_classification/data_corpus/test.txt\r\n\r\npython examples/run_language_modeling.py \\\r\n --output_dir ./bert_output \\\r\n --model_type=bert \\\r\n --model_name_or_path=bert-base-multilingual-cased \\\r\n --mlm \\\r\n --line_by_line true \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --learning_rate 1e-4 \\\r\n --num_train_epochs 3 \\\r\n --save_total_limit 2 \\\r\n --save_steps 2000 \\\r\n --per_gpu_train_batch_size 5 \\\r\n --evaluate_during_training \\\r\n --seed 42`",
"Zaowad, your training file is much bigger than mine so I guess 1 hour is not bad ;) You can also try fp16 option as well"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Hi HuggingFace, thank you very much for your great contribution.
# ❓ Questions & Help
My problem is: run_language_modeling.py takes an abnormally long time for `xlm-roberta-large & base` **_before training starts_**. It got stuck at the following step for 7 hours (so I gave up eventually):
`transformers.data.datasets.language_modeling - Creating features from dataset file at ./`
I have successfully run `gpt2-large` and `distilbert-base-multilingual-cased` using exactly the same command below (just changing the model), and they start training within just 2-3 minutes. At first I thought it was because of the big size of XLM-Roberta. However, as `gpt2-large` has a similar size, is there somehow a problem with finetuning XLM-Roberta? (So maybe a bug in the current version.)
I also tried to rerun the same command on another machine, but hit the same stall (which is not the case for `gpt2-large` and `distilbert-base-multilingual-cased`).
**UPDATE**: the same thing happens with `xlm-roberta-base`.
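(Resolution, per the comments: adding `--line_by_line`, which switches to `LineByLineTextDataset`, made feature creation start almost immediately. A sketch of the only change to the command below:)
```
# same invocation as under "Command Details" below, plus one flag
!python transformers/examples/run_language_modeling.py \
    --line_by_line \
    ...  # all other flags unchanged
```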
## Command Details I used
Machine: AWS p3.2xlarge (V100, 64 GB RAM)
Training file size: around 60 MB
```
!python transformers/examples/run_language_modeling.py \
    --model_type=xlm-roberta \
    --model_name_or_path=xlm-roberta-large \
    --do_train \
    --mlm \
    --per_gpu_train_batch_size=1 \
    --gradient_accumulation_steps=8 \
    --train_data_file={TRAIN_FILE} \
    --num_train_epochs=2 \
    --block_size=225 \
    --output_dir=output_lm \
    --save_total_limit=1 \
    --save_steps=10000 \
    --cache_dir=output_lm \
    --overwrite_cache \
    --overwrite_output_dir
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3918/comments | https://api.github.com/repos/huggingface/transformers/issues/3918/events | https://github.com/huggingface/transformers/pull/3918 | 605,343,657 | MDExOlB1bGxSZXF1ZXN0NDA3NzYzMTU5 | 3,918 | change scheduler.get_last_lr to get_lr to avoid bug | {
"login": "TobiasLee",
"id": 20009381,
"node_id": "MDQ6VXNlcjIwMDA5Mzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasLee",
"html_url": "https://github.com/TobiasLee",
"followers_url": "https://api.github.com/users/TobiasLee/followers",
"following_url": "https://api.github.com/users/TobiasLee/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasLee/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasLee/orgs",
"repos_url": "https://api.github.com/users/TobiasLee/repos",
"events_url": "https://api.github.com/users/TobiasLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=h1) Report\n> Merging [#3918](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3918 +/- ##\n==========================================\n- Coverage 78.45% 78.45% -0.01% \n==========================================\n Files 111 111 \n Lines 18521 18521 \n==========================================\n- Hits 14531 14530 -1 \n- Misses 3990 3991 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.59% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=footer). Last update [cb3c221...f42b609](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I don’t think that’s right. Which version of PyTorch are you running?",
"> I don’t think that’s right. Which version of PyTorch are you running?\r\n\r\nWell, my version is 1.2.0, I checked the API in 1.4.0, the current version is right. \r\nSorry for my hurry PR,"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | The learning rate scheduler method for getting the last lr is `get_lr` instead of `get_last_lr`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3918",
"html_url": "https://github.com/huggingface/transformers/pull/3918",
"diff_url": "https://github.com/huggingface/transformers/pull/3918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3918.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3917/comments | https://api.github.com/repos/huggingface/transformers/issues/3917/events | https://github.com/huggingface/transformers/pull/3917 | 605,292,178 | MDExOlB1bGxSZXF1ZXN0NDA3NzIxOTgy | 3,917 | Create README.md | {
"login": "YuvalPeleg",
"id": 42371886,
"node_id": "MDQ6VXNlcjQyMzcxODg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42371886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuvalPeleg",
"html_url": "https://github.com/YuvalPeleg",
"followers_url": "https://api.github.com/users/YuvalPeleg/followers",
"following_url": "https://api.github.com/users/YuvalPeleg/following{/other_user}",
"gists_url": "https://api.github.com/users/YuvalPeleg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuvalPeleg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuvalPeleg/subscriptions",
"organizations_url": "https://api.github.com/users/YuvalPeleg/orgs",
"repos_url": "https://api.github.com/users/YuvalPeleg/repos",
"events_url": "https://api.github.com/users/YuvalPeleg/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuvalPeleg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks! [Model page](https://huggingface.co/SparkBeyond/roberta-large-sts-b)",
"PS: you should upload an avatar for the SparkBeyond org here: https://huggingface.co/SparkBeyond"
] | 1,587 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3917",
"html_url": "https://github.com/huggingface/transformers/pull/3917",
"diff_url": "https://github.com/huggingface/transformers/pull/3917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3917.patch",
"merged_at": 1587738241000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3916/comments | https://api.github.com/repos/huggingface/transformers/issues/3916/events | https://github.com/huggingface/transformers/pull/3916 | 605,271,013 | MDExOlB1bGxSZXF1ZXN0NDA3NzA1OTU5 | 3,916 | feat: add logging through Weights & Biases | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=h1) Report\n> Merging [#3916](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `38.46%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3916 +/- ##\n==========================================\n- Coverage 78.45% 78.43% -0.02% \n==========================================\n Files 111 111 \n Lines 18521 18533 +12 \n==========================================\n+ Hits 14531 14537 +6 \n- Misses 3990 3996 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.52% <38.46%> (-0.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=footer). Last update [cb3c221...1330605](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi,\r\nI'm just checking if there's anything you would like me to change on this PR.\r\nIdeally I would also like to be able to pass a few extra optional kwargs (saving model, project name…) but the integration can go as deep as you want based on the project philosophy.\r\n\r\nAlso feel free if you want me to test other use cases for this integration. I want to write a post with the W&B guys after that and compare models, so I'd be glad to do it on something you find most useful!\r\n\r\nThe idea is mainly to give more traceability to models and a convenient way to compare and optimize models.",
"Will take a look very soon @borisdayma! (can i ping you on the W&B Slack or is there a better channel?)",
"Perfect, you can ping me on the W&B slack!",
"That's great @julien-c !\r\nThe next big piece that could be useful is to log the trained model weights on W&B.\r\nI am not sure it's a good idea to do it automatically. Would there be an easy way to add an arg for it `save_model`?\r\nOtherwise if you think it makes more sense (like for `watch` method), people can call `wandb.save` at the end of their script separately.\r\n\r\nI'll suggest to add it in wandb docs but do you think there would be a good place to document how to use `wandb` within hugging-face docs?",
"I'll add some doc about wandb to our examples' README.md",
"Awesome, let me know if I can be of any further help!"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | Hi,
I've been using hugging-face for a while and I've been loving it. I think it could become a central reference for NLP problems and the new `Trainer` class made it so much easier to add new functionality.
I would like to contribute to solving a few limitations I observed:
* some models don't have clear details on how they have been trained (hyper-parameters) and for which task, making it hard to reproduce their results (and even ensure those results were actually reached) -> ideally we would track & log their training process as well as config parameters used
* some new models keep on being requested/proposed without seeing their actual efficiency -> ideally we would be able to compare them to baselines
* I've seen issues where "better" models are offered to replace previous ones but there is no way to prove they are actually better -> maybe we can add a metrics table (linked to actual runs) to the model card
* it is hard to keep track of models and datasets
* overall I'd like to bring more traceability and transparency between users
This is why I propose to use [Weights & Biases](https://docs.wandb.com/), which is free for open source and will help solve those issues. It is also already part of PyTorch Lightning, so it will be easy to integrate with those scripts if extra setup is required.
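For concreteness, here is a minimal sketch of the `wandb` calls the integration relies on; the project name, run name and values below are illustrative, not this PR's actual defaults:

```python
import wandb

# this PR wires calls like these into Trainer; here we invoke them directly
wandb.init(project="transformers-examples", name="distilbert-finetune")

# record hyper-parameters so runs can be compared later
wandb.config.update({"learning_rate": 5e-5, "num_train_epochs": 3})

for step in range(100):          # stand-in for the real training loop
    loss = 1.0 / (step + 1)      # dummy value; the Trainer logs its real loss
    wandb.log({"loss": loss}, step=step)
```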
This PR brings:
* logging through Weights & Biases when `wandb` is installed
* logging of evaluation metrics at the end of training
See example run on distilbert-base-cased -> [W&B run](https://app.wandb.ai/borisd13/transformers-examples/runs/3iuvnwam?workspace=user-borisd13)

If you navigate around, you will see that config parameters, metrics, gradients, computer resources, actual code (including git repo & command line used) and stdout have all been uploaded for full traceability and reproducibility. I can also easily upload trained model but maybe we could add an arg to make it optional in case users have limited connection (as you prefer)?
I can also easily compare bert-base-uncased and distilbert-base-uncased from my [project page](https://app.wandb.ai/borisd13/transformers-examples).
*Note: I didn't try any fine-tuning and just used same parameters on both in this example.*

This makes it easy to quickly see if a new model is actually interesting.
Finally, this integration let us create [cool reports](https://app.wandb.ai/stacey/keras_finetune/reports/Curriculum-Learning-in-Nature--Vmlldzo1MjcxNw) where graphs directly pool data and are interactive.
If you like it, I'd love to add other features in later PRs:
* log the trained model -> I'll just need access to the paths of the relevant files so that I don't upload the entire output directory
* add [W&B init args](https://docs.wandb.com/library/init) as parameters based on what can be useful for hugging-face
* log all command line args -> I need to access the parser so that I can also log parameters such as `model_path_or_name` and `max_seq_length` (for run_glue script) which will be convenient to create custom summary tables and compare models (or finetune)
* maybe log called script (unless `finetuning_taks` is enough?)
* in addition to metrics, track model size & inference speed which can help users choose the model they want
* use artifacts (W&B version control system) to track models, tokenizers & datasets -> central repository that will let us see easily that runs are based on same dataset or same pre-trained model + see all trained tasks associated to one pre-trained model
* log prediction samples (based on the task)
* create a central repo on W&B (entity: hugging-face) to have access to other people's runs
* integrate model card issue template so that we always add a link to a run as well as performance metrics
* add [sweeps](https://docs.wandb.com/sweeps) -> make it easy to optimize hyper-parameters
* we could add a hook to automatically run any newly added model on all tasks (and link the runs to the model cards)
Please let me know your thoughts and feel free to contact me or Weights & Biases directly to discuss this integration. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3916/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3916/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3916",
"html_url": "https://github.com/huggingface/transformers/pull/3916",
"diff_url": "https://github.com/huggingface/transformers/pull/3916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3916.patch",
"merged_at": 1588646548000
} |
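The sweeps item on the wish list above could look roughly like this; the parameter names mirror run_glue-style flags, and everything here (project name, values, metric) is illustrative:

```python
import wandb

sweep_config = {
    "method": "random",
    "metric": {"name": "eval_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [1e-5, 3e-5, 5e-5]},
        "per_gpu_train_batch_size": {"values": [8, 16]},
    },
}

def train():
    wandb.init()
    lr = wandb.config.learning_rate  # sampled by the sweep controller
    # ... build and train the model with these hyper-parameters ...
    wandb.log({"eval_loss": 0.5})    # dummy value standing in for the real metric

sweep_id = wandb.sweep(sweep_config, project="transformers-examples")
wandb.agent(sweep_id, function=train, count=6)  # run 6 sampled configurations
```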
https://api.github.com/repos/huggingface/transformers/issues/3915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3915/comments | https://api.github.com/repos/huggingface/transformers/issues/3915/events | https://github.com/huggingface/transformers/issues/3915 | 605,262,363 | MDU6SXNzdWU2MDUyNjIzNjM= | 3,915 | New run_language_modeling.py continuing trainng | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, we removed the `--should_continue` flag. You can just use your latest checkpoint as `--model_name_or_path`\r\n\r\nLet us know if this helps",
"Yup, The new script is really clean and understandable. To continue training, I should use `--overwrite_output_dir`? Because, when I point to last checkpoint directory using `-- model_name_or_path`, It asks me for `--output_dir` which must be empty, so I had to switch between two directories. ",
"Yes you would use `--overwrite_output_dir` (it was implicitly added before)\r\n\r\nYou would do:\r\n```\r\n--model_name_or_path ./model_name/checkpoint-9000\r\n--output_dir ./model_name\r\n--overwrite_output_dir\r\n```\r\n\r\nIf this is a frequent request we could add it back but I feel like it was more confusing than anything.",
"Yes would like to get this feature back. ",
"I am using [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) but couldn't continue training from last checkpoint. It is starting from step 0. Any suggestion?\r\n\r\n```\r\ntraining_args = TrainingArguments(\r\n output_dir=last_checkpoint,\r\n overwrite_output_dir=True,\r\n...\r\n)\r\n trainer = Trainer( model=model, #roberta config model\r\n args=training_args,\r\n...\r\n)\r\n```\r\n\r\ne.g., `last_checkpoint=\"/saved_model/checkpoint-20000/\"`\r\n@julien-c "
] | 1,587 | 1,600 | 1,587 | CONTRIBUTOR | null | I tried using the new script for Language Modeling to train a model from scratch.
I was training a model with the old script. When I tried to continue with the new one, there is no `--should_continue` option.
Does `--overwrite_output_dir` make it train from scratch if I use the same directory for `--model_name_or_path` and `--output_dir`?
So, for now I am continuing training with the old script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3915/timeline | completed | null | null |
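Following the maintainer's suggestion in #3915 above, resuming training from a checkpoint with the `Trainer` API might look like this; the checkpoint path is hypothetical and the toy dataset only stands in for real data (in practice, use the dataset and collator from `run_language_modeling.py`):

```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer, Trainer, TrainingArguments

checkpoint_dir = "./model_name/checkpoint-9000"  # hypothetical checkpoint path
model = AutoModelWithLMHead.from_pretrained(checkpoint_dir)
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)

class ToyDataset(torch.utils.data.Dataset):
    """Stand-in for a real language-modeling dataset."""
    def __init__(self, tok):
        self.examples = [torch.tensor(tok.encode("hello world"))]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return self.examples[i]

training_args = TrainingArguments(
    output_dir="./model_name",   # the same directory the checkpoints live in
    overwrite_output_dir=True,   # required because that directory is not empty
)
trainer = Trainer(model=model, args=training_args, train_dataset=ToyDataset(tokenizer))
trainer.train()  # optimization continues from the checkpoint's weights
```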
https://api.github.com/repos/huggingface/transformers/issues/3914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3914/comments | https://api.github.com/repos/huggingface/transformers/issues/3914/events | https://github.com/huggingface/transformers/issues/3914 | 605,254,015 | MDU6SXNzdWU2MDUyNTQwMTU= | 3,914 | Unknown Device when training GPT2 with TPUs in Colab | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed when building from source"
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DialoGPT2-small from microsoft
Language I am using the model on (English, Chinese ...): Spanish Conversations
The problem arises when using: PyTorch's XLA library to train a GPT-2 model on Google Colab TPUs
* [ ] the official example scripts: (give details below)
* [ x ] my own modified scripts: (give details below)
I adapted a TPU training script written for the RoBERTa model to work with a GPT-2 model
https://colab.research.google.com/drive/1LTH0LpHxWQYEy9U7vBWTk4-4sLo7YF5B
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: Multi Turn Dialog
## To reproduce
Steps to reproduce the behavior:
1. Run the following Colab Notebook: https://colab.research.google.com/drive/1LTH0LpHxWQYEy9U7vBWTk4-4sLo7YF5B
2. Make sure the Runtime is set to be a TPU
```
Exception in device=TPU:7: Unknown device
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 484, in forward
hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-17-4d2a1ccbaa5f>", line 3, in _mp_fn
a = run(trn_df, val_df, model, tokenizer, args)
File "<ipython-input-16-d526a8f464d8>", line 81, in run
scheduler
File "<ipython-input-11-be3e41b46d25>", line 10, in train_fn
outputs = model(inputs, labels = labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 599, in forward
inputs_embeds=inputs_embeds,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 484, in forward
hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 231, in forward
m = self.mlp(self.ln_2(x))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 210, in forward
h = self.act(self.c_fc(x))
RuntimeError: Unknown device
```
## Expected behavior
The GPT-2 model should start training, leveraging the Google Colab TPU.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+d6149a7 (False)
- Tensorflow version (GPU?): 2.2.0-rc3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
- Pytorch XLA Version: torch-xla-1.6+e788e5b
Any help is greatly appreciated, and thanks for the amazing library! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3914/timeline | completed | null | null |
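For anyone hitting the same `Unknown device` error in #3914: it was resolved by rebuilding so that the `torch` and `torch_xla` versions match. A minimal multi-core launch sketch of the `xmp.spawn` pattern visible in the traceback, with the real model and data elided:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # each of the 8 TPU cores runs this function in its own process
    device = xm.xla_device()
    model = torch.nn.Linear(4, 4).to(device)  # stand-in for the GPT-2 model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = model(torch.randn(2, 4, device=device)).sum()
    loss.backward()
    xm.optimizer_step(optimizer, barrier=True)  # all-reduce grads, then step

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), nprocs=8, start_method="fork")
```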
https://api.github.com/repos/huggingface/transformers/issues/3913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3913/comments | https://api.github.com/repos/huggingface/transformers/issues/3913/events | https://github.com/huggingface/transformers/pull/3913 | 605,253,145 | MDExOlB1bGxSZXF1ZXN0NDA3NjkxOTAw | 3,913 | Summarization | {
"login": "skuma149",
"id": 47931836,
"node_id": "MDQ6VXNlcjQ3OTMxODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/47931836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skuma149",
"html_url": "https://github.com/skuma149",
"followers_url": "https://api.github.com/users/skuma149/followers",
"following_url": "https://api.github.com/users/skuma149/following{/other_user}",
"gists_url": "https://api.github.com/users/skuma149/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skuma149/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skuma149/subscriptions",
"organizations_url": "https://api.github.com/users/skuma149/orgs",
"repos_url": "https://api.github.com/users/skuma149/repos",
"events_url": "https://api.github.com/users/skuma149/events{/privacy}",
"received_events_url": "https://api.github.com/users/skuma149/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587 | 1,587 | 1,587 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3913",
"html_url": "https://github.com/huggingface/transformers/pull/3913",
"diff_url": "https://github.com/huggingface/transformers/pull/3913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3913.patch",
"merged_at": null
} |